May 14 00:24:23.248443 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1] May 14 00:24:23.248468 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 22:16:18 -00 2025 May 14 00:24:23.248476 kernel: KASLR enabled May 14 00:24:23.248482 kernel: efi: EFI v2.7 by American Megatrends May 14 00:24:23.248487 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea451818 RNG=0xebf10018 MEMRESERVE=0xe4636f98 May 14 00:24:23.248492 kernel: random: crng init done May 14 00:24:23.248499 kernel: secureboot: Secure boot disabled May 14 00:24:23.248505 kernel: esrt: Reserving ESRT space from 0x00000000ea451818 to 0x00000000ea451878. May 14 00:24:23.248512 kernel: ACPI: Early table checksum verification disabled May 14 00:24:23.248518 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere) May 14 00:24:23.248524 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013) May 14 00:24:23.248529 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509) May 14 00:24:23.248535 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717) May 14 00:24:23.248541 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509) May 14 00:24:23.248549 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509) May 14 00:24:23.248555 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509) May 14 00:24:23.248561 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013) May 14 00:24:23.248567 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F) May 14 00:24:23.248573 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013) May 14 00:24:23.248579 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013) May 14 00:24:23.248585 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013) May 14 00:24:23.248591 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013) May 14 00:24:23.248598 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013) May 14 00:24:23.248604 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013) May 14 00:24:23.248616 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013) May 14 00:24:23.248622 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 
01000013) May 14 00:24:23.248628 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013) May 14 00:24:23.248634 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013) May 14 00:24:23.248640 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200 May 14 00:24:23.248646 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff] May 14 00:24:23.248652 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff] May 14 00:24:23.248658 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff] May 14 00:24:23.248664 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff] May 14 00:24:23.248670 kernel: NUMA: NODE_DATA [mem 0x83fdffca800-0x83fdffcffff] May 14 00:24:23.248675 kernel: Zone ranges: May 14 00:24:23.248683 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff] May 14 00:24:23.248689 kernel: DMA32 empty May 14 00:24:23.248695 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff] May 14 00:24:23.248700 kernel: Movable zone start for each node May 14 00:24:23.248706 kernel: Early memory node ranges May 14 00:24:23.248715 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff] May 14 00:24:23.248722 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff] May 14 00:24:23.248731 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff] May 14 00:24:23.248737 kernel: node 0: [mem 0x0000000094000000-0x00000000eba31fff] May 14 00:24:23.248744 kernel: node 0: [mem 0x00000000eba32000-0x00000000ebea8fff] May 14 00:24:23.248750 kernel: node 0: [mem 0x00000000ebea9000-0x00000000ebeaefff] May 14 00:24:23.248756 kernel: node 0: [mem 0x00000000ebeaf000-0x00000000ebeccfff] May 14 00:24:23.248762 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff] May 14 00:24:23.248769 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff] May 14 00:24:23.248775 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff] May 14 00:24:23.248781 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff] May 14 00:24:23.248788 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee53ffff] May 14 00:24:23.248795 kernel: node 0: [mem 0x00000000ee540000-0x00000000f765ffff] May 14 00:24:23.248801 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff] May 14 00:24:23.248808 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff] May 14 00:24:23.248814 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff] May 14 00:24:23.248820 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff] May 14 00:24:23.248827 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff] May 14 00:24:23.248833 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff] May 14 00:24:23.248839 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff] May 14 00:24:23.248846 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff] May 14 00:24:23.248852 kernel: On node 0, zone DMA: 768 pages in unavailable ranges May 14 00:24:23.248858 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges May 14 00:24:23.248867 kernel: psci: probing for conduit method from ACPI. May 14 00:24:23.248873 kernel: psci: PSCIv1.1 detected in firmware. May 14 00:24:23.248879 kernel: psci: Using standard PSCI v0.2 function IDs May 14 00:24:23.248886 kernel: psci: MIGRATE_INFO_TYPE not supported. 
May 14 00:24:23.248892 kernel: psci: SMC Calling Convention v1.2 May 14 00:24:23.248898 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 14 00:24:23.248904 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 May 14 00:24:23.248911 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 May 14 00:24:23.248917 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 May 14 00:24:23.248923 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 May 14 00:24:23.248930 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 May 14 00:24:23.248936 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 May 14 00:24:23.248944 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 May 14 00:24:23.248950 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 May 14 00:24:23.248956 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 May 14 00:24:23.248963 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 May 14 00:24:23.248969 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 May 14 00:24:23.248975 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 May 14 00:24:23.248981 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 May 14 00:24:23.248988 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 May 14 00:24:23.248994 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 May 14 00:24:23.249001 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 May 14 00:24:23.249007 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 May 14 00:24:23.249013 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 May 14 00:24:23.249021 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 May 14 00:24:23.249028 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 May 14 00:24:23.249034 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0 May 14 00:24:23.249041 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 May 14 00:24:23.249047 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0 May 14 00:24:23.249053 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 May 14 00:24:23.249060 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 May 14 00:24:23.249066 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 May 14 00:24:23.249072 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 May 14 00:24:23.249079 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 May 14 00:24:23.249085 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 May 14 00:24:23.249092 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 May 14 00:24:23.249099 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 May 14 00:24:23.249105 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 May 14 00:24:23.249112 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 May 14 00:24:23.249118 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 May 14 00:24:23.249124 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 May 14 00:24:23.249130 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 May 14 00:24:23.249137 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 May 14 00:24:23.249143 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0 May 14 00:24:23.249149 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0 May 14 00:24:23.249156 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 May 14 00:24:23.249162 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 May 14 00:24:23.249170 kernel: ACPI: 
NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 May 14 00:24:23.249176 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 May 14 00:24:23.249182 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 May 14 00:24:23.249189 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 May 14 00:24:23.249195 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 May 14 00:24:23.249201 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0 May 14 00:24:23.249207 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 May 14 00:24:23.249214 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 May 14 00:24:23.249226 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 May 14 00:24:23.249233 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 May 14 00:24:23.249241 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 May 14 00:24:23.249248 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 May 14 00:24:23.249255 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 May 14 00:24:23.249262 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 May 14 00:24:23.249268 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 May 14 00:24:23.249275 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 May 14 00:24:23.249283 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 May 14 00:24:23.249290 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 May 14 00:24:23.249297 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 May 14 00:24:23.249303 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 May 14 00:24:23.249310 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 May 14 00:24:23.249317 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 May 14 00:24:23.249323 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 May 14 00:24:23.249330 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 May 14 00:24:23.249336 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 May 14 00:24:23.249343 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 May 14 00:24:23.249350 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 May 14 00:24:23.249357 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 May 14 00:24:23.249365 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 May 14 00:24:23.249371 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0 May 14 00:24:23.249378 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 May 14 00:24:23.249385 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 May 14 00:24:23.249391 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 May 14 00:24:23.249398 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 May 14 00:24:23.249405 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 May 14 00:24:23.249411 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 May 14 00:24:23.249418 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 May 14 00:24:23.249425 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 May 14 00:24:23.249431 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 14 00:24:23.249440 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 14 00:24:23.249447 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 May 14 00:24:23.249454 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 May 14 00:24:23.249460 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 
[0] 19 [0] 20 [0] 21 [0] 22 [0] 23 May 14 00:24:23.249467 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31 May 14 00:24:23.249474 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39 May 14 00:24:23.249480 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47 May 14 00:24:23.249487 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55 May 14 00:24:23.249494 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63 May 14 00:24:23.249500 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71 May 14 00:24:23.249507 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79 May 14 00:24:23.249515 kernel: Detected PIPT I-cache on CPU0 May 14 00:24:23.249522 kernel: CPU features: detected: GIC system register CPU interface May 14 00:24:23.249529 kernel: CPU features: detected: Virtualization Host Extensions May 14 00:24:23.249535 kernel: CPU features: detected: Hardware dirty bit management May 14 00:24:23.249542 kernel: CPU features: detected: Spectre-v4 May 14 00:24:23.249548 kernel: CPU features: detected: Spectre-BHB May 14 00:24:23.249555 kernel: CPU features: kernel page table isolation forced ON by KASLR May 14 00:24:23.249562 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 14 00:24:23.249569 kernel: CPU features: detected: ARM erratum 1418040 May 14 00:24:23.249576 kernel: CPU features: detected: SSBS not fully self-synchronizing May 14 00:24:23.249582 kernel: alternatives: applying boot alternatives May 14 00:24:23.249590 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5 May 14 00:24:23.249599 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 14 00:24:23.249608 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 14 00:24:23.249615 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes May 14 00:24:23.249622 kernel: printk: log_buf_len min size: 262144 bytes May 14 00:24:23.249628 kernel: printk: log_buf_len: 1048576 bytes May 14 00:24:23.249635 kernel: printk: early log buf free: 249864(95%) May 14 00:24:23.249642 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear) May 14 00:24:23.249649 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) May 14 00:24:23.249655 kernel: Fallback order for Node 0: 0 May 14 00:24:23.249662 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028 May 14 00:24:23.249670 kernel: Policy zone: Normal May 14 00:24:23.249677 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 00:24:23.249684 kernel: software IO TLB: area num 128. 
May 14 00:24:23.249691 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB) May 14 00:24:23.249698 kernel: Memory: 262923284K/268174336K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 5251052K reserved, 0K cma-reserved) May 14 00:24:23.249705 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1 May 14 00:24:23.249712 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 00:24:23.249719 kernel: rcu: RCU event tracing is enabled. May 14 00:24:23.249726 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80. May 14 00:24:23.249733 kernel: Trampoline variant of Tasks RCU enabled. May 14 00:24:23.249740 kernel: Tracing variant of Tasks RCU enabled. May 14 00:24:23.249747 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 14 00:24:23.249755 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80 May 14 00:24:23.249761 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 14 00:24:23.249768 kernel: GICv3: GIC: Using split EOI/Deactivate mode May 14 00:24:23.249775 kernel: GICv3: 672 SPIs implemented May 14 00:24:23.249782 kernel: GICv3: 0 Extended SPIs implemented May 14 00:24:23.249788 kernel: Root IRQ handler: gic_handle_irq May 14 00:24:23.249795 kernel: GICv3: GICv3 features: 16 PPIs May 14 00:24:23.249802 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000 May 14 00:24:23.249809 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0 May 14 00:24:23.249815 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0 May 14 00:24:23.249822 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0 May 14 00:24:23.249828 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0 May 14 00:24:23.249836 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0 May 14 00:24:23.249843 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0 May 14 00:24:23.249850 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0 May 14 00:24:23.249856 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0 May 14 00:24:23.249863 kernel: ITS [mem 0x100100040000-0x10010005ffff] May 14 00:24:23.249870 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1) May 14 00:24:23.249877 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1) May 14 00:24:23.249884 kernel: ITS [mem 0x100100060000-0x10010007ffff] May 14 00:24:23.249891 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1) May 14 00:24:23.249897 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1) May 14 00:24:23.249904 kernel: ITS [mem 0x100100080000-0x10010009ffff] May 14 00:24:23.249912 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1) May 14 00:24:23.249919 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1) May 14 00:24:23.249926 kernel: ITS [mem 0x1001000a0000-0x1001000bffff] May 14 00:24:23.249933 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1) May 14 00:24:23.249940 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1) May 14 00:24:23.249947 kernel: ITS [mem 0x1001000c0000-0x1001000dffff] May 14 00:24:23.249953 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1) May 14 00:24:23.249960 kernel: ITS@0x00001001000c0000: allocated 32768 
Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1) May 14 00:24:23.249967 kernel: ITS [mem 0x1001000e0000-0x1001000fffff] May 14 00:24:23.249974 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1) May 14 00:24:23.249981 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1) May 14 00:24:23.249989 kernel: ITS [mem 0x100100100000-0x10010011ffff] May 14 00:24:23.249997 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1) May 14 00:24:23.250004 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1) May 14 00:24:23.250010 kernel: ITS [mem 0x100100120000-0x10010013ffff] May 14 00:24:23.250017 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1) May 14 00:24:23.250024 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1) May 14 00:24:23.250031 kernel: GICv3: using LPI property table @0x00000800003e0000 May 14 00:24:23.250038 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000 May 14 00:24:23.250044 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 14 00:24:23.250051 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250058 kernel: ACPI GTDT: found 1 memory-mapped timer block(s). May 14 00:24:23.250066 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys). May 14 00:24:23.250073 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 14 00:24:23.250080 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 14 00:24:23.250087 kernel: Console: colour dummy device 80x25 May 14 00:24:23.250094 kernel: printk: console [tty0] enabled May 14 00:24:23.250101 kernel: ACPI: Core revision 20230628 May 14 00:24:23.250108 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 14 00:24:23.250115 kernel: pid_max: default: 81920 minimum: 640 May 14 00:24:23.250122 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 14 00:24:23.250129 kernel: landlock: Up and running. May 14 00:24:23.250137 kernel: SELinux: Initializing. May 14 00:24:23.250144 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 00:24:23.250151 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 00:24:23.250158 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. May 14 00:24:23.250166 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. May 14 00:24:23.250173 kernel: rcu: Hierarchical SRCU implementation. May 14 00:24:23.250180 kernel: rcu: Max phase no-delay instances is 400. 
May 14 00:24:23.250187 kernel: Platform MSI: ITS@0x100100040000 domain created May 14 00:24:23.250194 kernel: Platform MSI: ITS@0x100100060000 domain created May 14 00:24:23.250202 kernel: Platform MSI: ITS@0x100100080000 domain created May 14 00:24:23.250209 kernel: Platform MSI: ITS@0x1001000a0000 domain created May 14 00:24:23.250216 kernel: Platform MSI: ITS@0x1001000c0000 domain created May 14 00:24:23.250223 kernel: Platform MSI: ITS@0x1001000e0000 domain created May 14 00:24:23.250229 kernel: Platform MSI: ITS@0x100100100000 domain created May 14 00:24:23.250236 kernel: Platform MSI: ITS@0x100100120000 domain created May 14 00:24:23.250243 kernel: PCI/MSI: ITS@0x100100040000 domain created May 14 00:24:23.250250 kernel: PCI/MSI: ITS@0x100100060000 domain created May 14 00:24:23.250257 kernel: PCI/MSI: ITS@0x100100080000 domain created May 14 00:24:23.250266 kernel: PCI/MSI: ITS@0x1001000a0000 domain created May 14 00:24:23.250272 kernel: PCI/MSI: ITS@0x1001000c0000 domain created May 14 00:24:23.250279 kernel: PCI/MSI: ITS@0x1001000e0000 domain created May 14 00:24:23.250286 kernel: PCI/MSI: ITS@0x100100100000 domain created May 14 00:24:23.250293 kernel: PCI/MSI: ITS@0x100100120000 domain created May 14 00:24:23.250300 kernel: Remapping and enabling EFI services. May 14 00:24:23.250306 kernel: smp: Bringing up secondary CPUs ... May 14 00:24:23.250314 kernel: Detected PIPT I-cache on CPU1 May 14 00:24:23.250321 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000 May 14 00:24:23.250328 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000 May 14 00:24:23.250336 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250343 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1] May 14 00:24:23.250350 kernel: Detected PIPT I-cache on CPU2 May 14 00:24:23.250357 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000 May 14 00:24:23.250364 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000 May 14 00:24:23.250371 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250378 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1] May 14 00:24:23.250385 kernel: Detected PIPT I-cache on CPU3 May 14 00:24:23.250392 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000 May 14 00:24:23.250400 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000 May 14 00:24:23.250407 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250414 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1] May 14 00:24:23.250421 kernel: Detected PIPT I-cache on CPU4 May 14 00:24:23.250428 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000 May 14 00:24:23.250435 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000 May 14 00:24:23.250442 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250449 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1] May 14 00:24:23.250456 kernel: Detected PIPT I-cache on CPU5 May 14 00:24:23.250462 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000 May 14 00:24:23.250471 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000 May 14 00:24:23.250478 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250485 kernel: CPU5: Booted secondary processor 0x0000180000 
[0x413fd0c1] May 14 00:24:23.250491 kernel: Detected PIPT I-cache on CPU6 May 14 00:24:23.250498 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000 May 14 00:24:23.250505 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000 May 14 00:24:23.250512 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250519 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1] May 14 00:24:23.250526 kernel: Detected PIPT I-cache on CPU7 May 14 00:24:23.250534 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000 May 14 00:24:23.250541 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000 May 14 00:24:23.250548 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250555 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1] May 14 00:24:23.250562 kernel: Detected PIPT I-cache on CPU8 May 14 00:24:23.250569 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000 May 14 00:24:23.250576 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000 May 14 00:24:23.250583 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250589 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1] May 14 00:24:23.250596 kernel: Detected PIPT I-cache on CPU9 May 14 00:24:23.250606 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000 May 14 00:24:23.250614 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000 May 14 00:24:23.250621 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250628 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1] May 14 00:24:23.250634 kernel: Detected PIPT I-cache on CPU10 May 14 00:24:23.250641 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000 May 14 00:24:23.250648 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000 May 14 00:24:23.250655 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250662 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1] May 14 00:24:23.250670 kernel: Detected PIPT I-cache on CPU11 May 14 00:24:23.250677 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000 May 14 00:24:23.250684 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000 May 14 00:24:23.250691 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250698 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1] May 14 00:24:23.250705 kernel: Detected PIPT I-cache on CPU12 May 14 00:24:23.250712 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000 May 14 00:24:23.250719 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000 May 14 00:24:23.250725 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250732 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1] May 14 00:24:23.250740 kernel: Detected PIPT I-cache on CPU13 May 14 00:24:23.250748 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000 May 14 00:24:23.250755 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000 May 14 00:24:23.250761 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250768 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1] 
May 14 00:24:23.250775 kernel: Detected PIPT I-cache on CPU14 May 14 00:24:23.250782 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000 May 14 00:24:23.250789 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000 May 14 00:24:23.250796 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250804 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1] May 14 00:24:23.250811 kernel: Detected PIPT I-cache on CPU15 May 14 00:24:23.250818 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000 May 14 00:24:23.250825 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000 May 14 00:24:23.250832 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250838 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1] May 14 00:24:23.250845 kernel: Detected PIPT I-cache on CPU16 May 14 00:24:23.250852 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000 May 14 00:24:23.250859 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000 May 14 00:24:23.250875 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250884 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1] May 14 00:24:23.250891 kernel: Detected PIPT I-cache on CPU17 May 14 00:24:23.250898 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000 May 14 00:24:23.250905 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000 May 14 00:24:23.250913 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250920 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1] May 14 00:24:23.250927 kernel: Detected PIPT I-cache on CPU18 May 14 00:24:23.250934 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000 May 14 00:24:23.250941 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000 May 14 00:24:23.250950 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250957 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1] May 14 00:24:23.250964 kernel: Detected PIPT I-cache on CPU19 May 14 00:24:23.250972 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 May 14 00:24:23.250979 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000 May 14 00:24:23.250986 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.250995 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] May 14 00:24:23.251002 kernel: Detected PIPT I-cache on CPU20 May 14 00:24:23.251009 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 May 14 00:24:23.251017 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000 May 14 00:24:23.251024 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251031 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] May 14 00:24:23.251038 kernel: Detected PIPT I-cache on CPU21 May 14 00:24:23.251045 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 May 14 00:24:23.251053 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000 May 14 00:24:23.251061 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251069 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] May 
14 00:24:23.251076 kernel: Detected PIPT I-cache on CPU22 May 14 00:24:23.251083 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 May 14 00:24:23.251090 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000 May 14 00:24:23.251098 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251105 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] May 14 00:24:23.251112 kernel: Detected PIPT I-cache on CPU23 May 14 00:24:23.251119 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 May 14 00:24:23.251128 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000 May 14 00:24:23.251136 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251143 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] May 14 00:24:23.251150 kernel: Detected PIPT I-cache on CPU24 May 14 00:24:23.251157 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 May 14 00:24:23.251165 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000 May 14 00:24:23.251173 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251182 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] May 14 00:24:23.251189 kernel: Detected PIPT I-cache on CPU25 May 14 00:24:23.251196 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000 May 14 00:24:23.251206 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000 May 14 00:24:23.251213 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251220 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] May 14 00:24:23.251227 kernel: Detected PIPT I-cache on CPU26 May 14 00:24:23.251235 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 May 14 00:24:23.251242 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000 May 14 00:24:23.251249 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251256 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] May 14 00:24:23.251263 kernel: Detected PIPT I-cache on CPU27 May 14 00:24:23.251272 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 May 14 00:24:23.251280 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000 May 14 00:24:23.251287 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251294 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] May 14 00:24:23.251301 kernel: Detected PIPT I-cache on CPU28 May 14 00:24:23.251308 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 May 14 00:24:23.251316 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 May 14 00:24:23.251323 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251330 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] May 14 00:24:23.251337 kernel: Detected PIPT I-cache on CPU29 May 14 00:24:23.251346 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 May 14 00:24:23.251353 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 May 14 00:24:23.251361 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251368 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] 
May 14 00:24:23.251375 kernel: Detected PIPT I-cache on CPU30 May 14 00:24:23.251382 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 May 14 00:24:23.251390 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 May 14 00:24:23.251397 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251404 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] May 14 00:24:23.251413 kernel: Detected PIPT I-cache on CPU31 May 14 00:24:23.251420 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 May 14 00:24:23.251427 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 May 14 00:24:23.251435 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251442 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] May 14 00:24:23.251449 kernel: Detected PIPT I-cache on CPU32 May 14 00:24:23.251457 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 May 14 00:24:23.251464 kernel: GICv3: CPU32: using allocated LPI pending table @0x00000800009f0000 May 14 00:24:23.251471 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251479 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] May 14 00:24:23.251487 kernel: Detected PIPT I-cache on CPU33 May 14 00:24:23.251495 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 May 14 00:24:23.251502 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 May 14 00:24:23.251509 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251516 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] May 14 00:24:23.251523 kernel: Detected PIPT I-cache on CPU34 May 14 00:24:23.251531 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 May 14 00:24:23.251538 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 May 14 00:24:23.251545 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251553 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] May 14 00:24:23.251560 kernel: Detected PIPT I-cache on CPU35 May 14 00:24:23.251568 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 May 14 00:24:23.251575 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 May 14 00:24:23.251582 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251589 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] May 14 00:24:23.251596 kernel: Detected PIPT I-cache on CPU36 May 14 00:24:23.251604 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 May 14 00:24:23.251613 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 May 14 00:24:23.251622 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251630 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] May 14 00:24:23.251637 kernel: Detected PIPT I-cache on CPU37 May 14 00:24:23.251644 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 May 14 00:24:23.251651 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 May 14 00:24:23.251659 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251666 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] 
May 14 00:24:23.251673 kernel: Detected PIPT I-cache on CPU38 May 14 00:24:23.251682 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 May 14 00:24:23.251689 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 May 14 00:24:23.251698 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251705 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] May 14 00:24:23.251712 kernel: Detected PIPT I-cache on CPU39 May 14 00:24:23.251719 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 May 14 00:24:23.251727 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 May 14 00:24:23.251734 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251741 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] May 14 00:24:23.251748 kernel: Detected PIPT I-cache on CPU40 May 14 00:24:23.251757 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 May 14 00:24:23.251764 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 May 14 00:24:23.251771 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251779 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] May 14 00:24:23.251786 kernel: Detected PIPT I-cache on CPU41 May 14 00:24:23.251793 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 May 14 00:24:23.251800 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000a80000 May 14 00:24:23.251808 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251815 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] May 14 00:24:23.251824 kernel: Detected PIPT I-cache on CPU42 May 14 00:24:23.251831 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 May 14 00:24:23.251838 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 May 14 00:24:23.251846 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251853 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] May 14 00:24:23.251860 kernel: Detected PIPT I-cache on CPU43 May 14 00:24:23.251867 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 May 14 00:24:23.251874 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 May 14 00:24:23.251882 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251889 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] May 14 00:24:23.251898 kernel: Detected PIPT I-cache on CPU44 May 14 00:24:23.251905 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 May 14 00:24:23.251912 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 May 14 00:24:23.251919 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251927 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] May 14 00:24:23.251934 kernel: Detected PIPT I-cache on CPU45 May 14 00:24:23.251941 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 May 14 00:24:23.251949 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 May 14 00:24:23.251956 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.251965 kernel: CPU45: Booted secondary processor 0x0000180100 
[0x413fd0c1] May 14 00:24:23.251972 kernel: Detected PIPT I-cache on CPU46 May 14 00:24:23.251979 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 May 14 00:24:23.251987 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 May 14 00:24:23.251994 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252001 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] May 14 00:24:23.252008 kernel: Detected PIPT I-cache on CPU47 May 14 00:24:23.252016 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 May 14 00:24:23.252023 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 May 14 00:24:23.252030 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252039 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] May 14 00:24:23.252046 kernel: Detected PIPT I-cache on CPU48 May 14 00:24:23.252053 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 May 14 00:24:23.252061 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 May 14 00:24:23.252068 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252075 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] May 14 00:24:23.252082 kernel: Detected PIPT I-cache on CPU49 May 14 00:24:23.252090 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 May 14 00:24:23.252097 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 May 14 00:24:23.252106 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252113 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] May 14 00:24:23.252122 kernel: Detected PIPT I-cache on CPU50 May 14 00:24:23.252129 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 May 14 00:24:23.252136 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000b10000 May 14 00:24:23.252144 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252151 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] May 14 00:24:23.252158 kernel: Detected PIPT I-cache on CPU51 May 14 00:24:23.252165 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 May 14 00:24:23.252173 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 May 14 00:24:23.252181 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252189 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] May 14 00:24:23.252196 kernel: Detected PIPT I-cache on CPU52 May 14 00:24:23.252203 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 May 14 00:24:23.252210 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 May 14 00:24:23.252217 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252224 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] May 14 00:24:23.252232 kernel: Detected PIPT I-cache on CPU53 May 14 00:24:23.252239 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 May 14 00:24:23.252248 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 May 14 00:24:23.252255 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252262 kernel: CPU53: Booted secondary processor 
0x0000200100 [0x413fd0c1] May 14 00:24:23.252269 kernel: Detected PIPT I-cache on CPU54 May 14 00:24:23.252277 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 May 14 00:24:23.252284 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 May 14 00:24:23.252291 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252298 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] May 14 00:24:23.252305 kernel: Detected PIPT I-cache on CPU55 May 14 00:24:23.252313 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 May 14 00:24:23.252321 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 May 14 00:24:23.252329 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252336 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] May 14 00:24:23.252343 kernel: Detected PIPT I-cache on CPU56 May 14 00:24:23.252351 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 May 14 00:24:23.252358 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 May 14 00:24:23.252365 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252374 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] May 14 00:24:23.252381 kernel: Detected PIPT I-cache on CPU57 May 14 00:24:23.252390 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 May 14 00:24:23.252398 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 May 14 00:24:23.252405 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252412 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] May 14 00:24:23.252419 kernel: Detected PIPT I-cache on CPU58 May 14 00:24:23.252426 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 May 14 00:24:23.252434 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 May 14 00:24:23.252441 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252448 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] May 14 00:24:23.252456 kernel: Detected PIPT I-cache on CPU59 May 14 00:24:23.252464 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 May 14 00:24:23.252471 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000ba0000 May 14 00:24:23.252478 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252486 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] May 14 00:24:23.252493 kernel: Detected PIPT I-cache on CPU60 May 14 00:24:23.252500 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 May 14 00:24:23.252507 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 May 14 00:24:23.252515 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252522 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] May 14 00:24:23.252531 kernel: Detected PIPT I-cache on CPU61 May 14 00:24:23.252538 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 May 14 00:24:23.252545 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 May 14 00:24:23.252552 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252560 kernel: CPU61: Booted secondary processor 
0x00001b0100 [0x413fd0c1] May 14 00:24:23.252567 kernel: Detected PIPT I-cache on CPU62 May 14 00:24:23.252574 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 May 14 00:24:23.252582 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 May 14 00:24:23.252589 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252597 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] May 14 00:24:23.252606 kernel: Detected PIPT I-cache on CPU63 May 14 00:24:23.252614 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 May 14 00:24:23.252621 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 May 14 00:24:23.252628 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252635 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] May 14 00:24:23.252643 kernel: Detected PIPT I-cache on CPU64 May 14 00:24:23.252650 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 May 14 00:24:23.252657 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 May 14 00:24:23.252664 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252673 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] May 14 00:24:23.252680 kernel: Detected PIPT I-cache on CPU65 May 14 00:24:23.252688 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 May 14 00:24:23.252695 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 May 14 00:24:23.252702 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252709 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] May 14 00:24:23.252717 kernel: Detected PIPT I-cache on CPU66 May 14 00:24:23.252724 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 May 14 00:24:23.252731 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 May 14 00:24:23.252740 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252747 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] May 14 00:24:23.252754 kernel: Detected PIPT I-cache on CPU67 May 14 00:24:23.252761 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 May 14 00:24:23.252769 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 May 14 00:24:23.252776 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252783 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] May 14 00:24:23.252790 kernel: Detected PIPT I-cache on CPU68 May 14 00:24:23.252797 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 May 14 00:24:23.252805 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000c30000 May 14 00:24:23.252813 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252821 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] May 14 00:24:23.252828 kernel: Detected PIPT I-cache on CPU69 May 14 00:24:23.252836 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 May 14 00:24:23.252843 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 May 14 00:24:23.252850 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252857 kernel: CPU69: Booted secondary 
processor 0x0000230100 [0x413fd0c1] May 14 00:24:23.252865 kernel: Detected PIPT I-cache on CPU70 May 14 00:24:23.252872 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 May 14 00:24:23.252881 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 May 14 00:24:23.252888 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252895 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] May 14 00:24:23.252902 kernel: Detected PIPT I-cache on CPU71 May 14 00:24:23.252910 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 May 14 00:24:23.252917 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 May 14 00:24:23.252924 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252931 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] May 14 00:24:23.252939 kernel: Detected PIPT I-cache on CPU72 May 14 00:24:23.252946 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 May 14 00:24:23.252954 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 May 14 00:24:23.252962 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.252969 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1] May 14 00:24:23.252976 kernel: Detected PIPT I-cache on CPU73 May 14 00:24:23.252983 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 May 14 00:24:23.252991 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 May 14 00:24:23.252998 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.253005 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] May 14 00:24:23.253012 kernel: Detected PIPT I-cache on CPU74 May 14 00:24:23.253021 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 May 14 00:24:23.253028 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 May 14 00:24:23.253036 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.253043 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] May 14 00:24:23.253050 kernel: Detected PIPT I-cache on CPU75 May 14 00:24:23.253057 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 May 14 00:24:23.253065 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 May 14 00:24:23.253072 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.253079 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] May 14 00:24:23.253086 kernel: Detected PIPT I-cache on CPU76 May 14 00:24:23.253095 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 May 14 00:24:23.253103 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 May 14 00:24:23.253110 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.253117 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] May 14 00:24:23.253124 kernel: Detected PIPT I-cache on CPU77 May 14 00:24:23.253132 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 May 14 00:24:23.253139 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 May 14 00:24:23.253146 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.253153 kernel: CPU77: Booted secondary 
processor 0x0000050100 [0x413fd0c1] May 14 00:24:23.253162 kernel: Detected PIPT I-cache on CPU78 May 14 00:24:23.253169 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 May 14 00:24:23.253177 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 May 14 00:24:23.253184 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.253191 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] May 14 00:24:23.253198 kernel: Detected PIPT I-cache on CPU79 May 14 00:24:23.253206 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 May 14 00:24:23.253213 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 May 14 00:24:23.253220 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:24:23.253229 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] May 14 00:24:23.253236 kernel: smp: Brought up 1 node, 80 CPUs May 14 00:24:23.253243 kernel: SMP: Total of 80 processors activated. May 14 00:24:23.253250 kernel: CPU features: detected: 32-bit EL0 Support May 14 00:24:23.253258 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 14 00:24:23.253265 kernel: CPU features: detected: Common not Private translations May 14 00:24:23.253272 kernel: CPU features: detected: CRC32 instructions May 14 00:24:23.253280 kernel: CPU features: detected: Enhanced Virtualization Traps May 14 00:24:23.253287 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 14 00:24:23.253295 kernel: CPU features: detected: LSE atomic instructions May 14 00:24:23.253303 kernel: CPU features: detected: Privileged Access Never May 14 00:24:23.253310 kernel: CPU features: detected: RAS Extension Support May 14 00:24:23.253317 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 14 00:24:23.253324 kernel: CPU: All CPU(s) started at EL2 May 14 00:24:23.253332 kernel: alternatives: applying system-wide alternatives May 14 00:24:23.253339 kernel: devtmpfs: initialized May 14 00:24:23.253346 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 00:24:23.253354 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 14 00:24:23.253361 kernel: pinctrl core: initialized pinctrl subsystem May 14 00:24:23.253370 kernel: SMBIOS 3.4.0 present. May 14 00:24:23.253377 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 May 14 00:24:23.253385 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 00:24:23.253392 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations May 14 00:24:23.253400 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 14 00:24:23.253407 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 14 00:24:23.253414 kernel: audit: initializing netlink subsys (disabled) May 14 00:24:23.253422 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 May 14 00:24:23.253430 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 00:24:23.253437 kernel: cpuidle: using governor menu May 14 00:24:23.253445 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 14 00:24:23.253452 kernel: ASID allocator initialised with 32768 entries May 14 00:24:23.253459 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 00:24:23.253466 kernel: Serial: AMBA PL011 UART driver May 14 00:24:23.253474 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 14 00:24:23.253481 kernel: Modules: 0 pages in range for non-PLT usage May 14 00:24:23.253489 kernel: Modules: 509232 pages in range for PLT usage May 14 00:24:23.253498 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 14 00:24:23.253505 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 14 00:24:23.253512 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 14 00:24:23.253520 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 14 00:24:23.253527 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 14 00:24:23.253534 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 14 00:24:23.253541 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 14 00:24:23.253548 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 14 00:24:23.253556 kernel: ACPI: Added _OSI(Module Device) May 14 00:24:23.253564 kernel: ACPI: Added _OSI(Processor Device) May 14 00:24:23.253571 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 00:24:23.253579 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 00:24:23.253586 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded May 14 00:24:23.253593 kernel: ACPI: Interpreter enabled May 14 00:24:23.253600 kernel: ACPI: Using GIC for interrupt routing May 14 00:24:23.253610 kernel: ACPI: MCFG table detected, 8 entries May 14 00:24:23.253617 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 May 14 00:24:23.253624 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 May 14 00:24:23.253632 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 May 14 00:24:23.253641 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 May 14 00:24:23.253648 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 May 14 00:24:23.253655 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 May 14 00:24:23.253663 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 May 14 00:24:23.253670 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 May 14 00:24:23.253677 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA May 14 00:24:23.253685 kernel: printk: console [ttyAMA0] enabled May 14 00:24:23.253692 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA May 14 00:24:23.253701 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) May 14 00:24:23.253840 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 00:24:23.253911 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] May 14 00:24:23.253975 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] May 14 00:24:23.254037 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 14 00:24:23.254098 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 May 14 00:24:23.254159 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 
00-ff] May 14 00:24:23.254171 kernel: PCI host bridge to bus 000d:00 May 14 00:24:23.254241 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] May 14 00:24:23.254299 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] May 14 00:24:23.254355 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] May 14 00:24:23.254433 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 May 14 00:24:23.254507 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 May 14 00:24:23.254573 kernel: pci 000d:00:01.0: enabling Extended Tags May 14 00:24:23.254644 kernel: pci 000d:00:01.0: supports D1 D2 May 14 00:24:23.254708 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot May 14 00:24:23.254780 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 May 14 00:24:23.254843 kernel: pci 000d:00:02.0: supports D1 D2 May 14 00:24:23.254907 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot May 14 00:24:23.254977 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 May 14 00:24:23.255044 kernel: pci 000d:00:03.0: supports D1 D2 May 14 00:24:23.255108 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot May 14 00:24:23.255179 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 May 14 00:24:23.255242 kernel: pci 000d:00:04.0: supports D1 D2 May 14 00:24:23.255304 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot May 14 00:24:23.255314 kernel: acpiphp: Slot [1] registered May 14 00:24:23.255321 kernel: acpiphp: Slot [2] registered May 14 00:24:23.255331 kernel: acpiphp: Slot [3] registered May 14 00:24:23.255338 kernel: acpiphp: Slot [4] registered May 14 00:24:23.255397 kernel: pci_bus 000d:00: on NUMA node 0 May 14 00:24:23.255460 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 14 00:24:23.255525 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 14 00:24:23.255588 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 14 00:24:23.255680 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 14 00:24:23.255743 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 14 00:24:23.255807 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 14 00:24:23.255869 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 14 00:24:23.255930 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 14 00:24:23.255991 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 14 00:24:23.256054 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 14 00:24:23.256116 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 14 00:24:23.256181 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 14 00:24:23.256245 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] May 14 00:24:23.256311 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] May 14 00:24:23.256374 kernel: pci 000d:00:02.0: 
BAR 14: assigned [mem 0x50200000-0x503fffff] May 14 00:24:23.256436 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] May 14 00:24:23.256500 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] May 14 00:24:23.256562 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] May 14 00:24:23.256628 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] May 14 00:24:23.256693 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] May 14 00:24:23.256757 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.256820 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.256881 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.256943 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.257005 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.257068 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.257130 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.257195 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.257255 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.257318 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.257381 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.257443 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.257506 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.257567 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.257639 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.257704 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.257767 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] May 14 00:24:23.257830 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] May 14 00:24:23.257891 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] May 14 00:24:23.257955 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] May 14 00:24:23.258017 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] May 14 00:24:23.258080 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] May 14 00:24:23.258146 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] May 14 00:24:23.258211 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] May 14 00:24:23.258277 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] May 14 00:24:23.258340 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] May 14 00:24:23.258403 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] May 14 00:24:23.258467 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] May 14 00:24:23.258529 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] May 14 00:24:23.258585 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] May 14 00:24:23.258658 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] May 14 00:24:23.258719 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] May 14 00:24:23.258786 kernel: pci_bus 000d:02: resource 1 [mem 
0x50200000-0x503fffff] May 14 00:24:23.258844 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] May 14 00:24:23.258922 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] May 14 00:24:23.258983 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] May 14 00:24:23.259049 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] May 14 00:24:23.259109 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] May 14 00:24:23.259118 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) May 14 00:24:23.259186 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 00:24:23.259251 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] May 14 00:24:23.259328 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] May 14 00:24:23.259389 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 14 00:24:23.259449 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 May 14 00:24:23.259508 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] May 14 00:24:23.259518 kernel: PCI host bridge to bus 0000:00 May 14 00:24:23.259581 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] May 14 00:24:23.259644 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] May 14 00:24:23.259700 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 14 00:24:23.259770 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 May 14 00:24:23.259844 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 May 14 00:24:23.259909 kernel: pci 0000:00:01.0: enabling Extended Tags May 14 00:24:23.259971 kernel: pci 0000:00:01.0: supports D1 D2 May 14 00:24:23.260033 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot May 14 00:24:23.260106 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 May 14 00:24:23.260170 kernel: pci 0000:00:02.0: supports D1 D2 May 14 00:24:23.260232 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot May 14 00:24:23.260302 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 May 14 00:24:23.260365 kernel: pci 0000:00:03.0: supports D1 D2 May 14 00:24:23.260427 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot May 14 00:24:23.260498 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 May 14 00:24:23.260562 kernel: pci 0000:00:04.0: supports D1 D2 May 14 00:24:23.260631 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot May 14 00:24:23.260640 kernel: acpiphp: Slot [1-1] registered May 14 00:24:23.260648 kernel: acpiphp: Slot [2-1] registered May 14 00:24:23.260655 kernel: acpiphp: Slot [3-1] registered May 14 00:24:23.260662 kernel: acpiphp: Slot [4-1] registered May 14 00:24:23.260717 kernel: pci_bus 0000:00: on NUMA node 0 May 14 00:24:23.260781 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 14 00:24:23.260845 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 14 00:24:23.260909 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 14 00:24:23.260972 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 14 00:24:23.261035 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 14 00:24:23.261097 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 14 00:24:23.261160 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 14 00:24:23.261224 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 14 00:24:23.261288 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 14 00:24:23.261351 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 14 00:24:23.261413 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 14 00:24:23.261475 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 14 00:24:23.261538 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] May 14 00:24:23.261599 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 14 00:24:23.261665 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] May 14 00:24:23.261730 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 14 00:24:23.261793 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] May 14 00:24:23.261855 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 14 00:24:23.261918 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] May 14 00:24:23.261980 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 14 00:24:23.262042 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.262105 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.262168 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.262232 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.262294 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.262356 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.262417 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.262479 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.262541 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.262602 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.262669 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.262732 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.262795 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.262857 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.262920 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.262982 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.263044 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 14 00:24:23.263106 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] May 14 00:24:23.263168 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] 
May 14 00:24:23.263232 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] May 14 00:24:23.263294 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] May 14 00:24:23.263359 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 14 00:24:23.263421 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] May 14 00:24:23.263485 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] May 14 00:24:23.263548 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 14 00:24:23.263616 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] May 14 00:24:23.263683 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] May 14 00:24:23.263749 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 14 00:24:23.263815 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] May 14 00:24:23.263871 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] May 14 00:24:23.263941 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] May 14 00:24:23.264007 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 14 00:24:23.264073 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] May 14 00:24:23.264132 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 14 00:24:23.264207 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] May 14 00:24:23.264271 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 14 00:24:23.264341 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] May 14 00:24:23.264405 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 14 00:24:23.264415 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) May 14 00:24:23.264484 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 00:24:23.264548 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] May 14 00:24:23.264616 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] May 14 00:24:23.264677 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 14 00:24:23.264737 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 May 14 00:24:23.264797 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] May 14 00:24:23.264807 kernel: PCI host bridge to bus 0005:00 May 14 00:24:23.264871 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] May 14 00:24:23.264928 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] May 14 00:24:23.264985 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] May 14 00:24:23.265057 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 May 14 00:24:23.265127 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 May 14 00:24:23.265193 kernel: pci 0005:00:01.0: supports D1 D2 May 14 00:24:23.265255 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot May 14 00:24:23.265325 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 May 14 00:24:23.265387 kernel: pci 0005:00:03.0: supports D1 D2 May 14 00:24:23.265453 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot May 14 00:24:23.265522 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 May 14 00:24:23.265585 
kernel: pci 0005:00:05.0: supports D1 D2 May 14 00:24:23.265652 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot May 14 00:24:23.265722 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 May 14 00:24:23.265786 kernel: pci 0005:00:07.0: supports D1 D2 May 14 00:24:23.265849 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot May 14 00:24:23.265859 kernel: acpiphp: Slot [1-2] registered May 14 00:24:23.265866 kernel: acpiphp: Slot [2-2] registered May 14 00:24:23.265936 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 May 14 00:24:23.266002 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] May 14 00:24:23.266065 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] May 14 00:24:23.266136 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 May 14 00:24:23.266201 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] May 14 00:24:23.266268 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] May 14 00:24:23.266323 kernel: pci_bus 0005:00: on NUMA node 0 May 14 00:24:23.266388 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 14 00:24:23.266451 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 14 00:24:23.266513 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 14 00:24:23.266577 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 14 00:24:23.266646 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 14 00:24:23.266712 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 14 00:24:23.266776 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 14 00:24:23.266839 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 14 00:24:23.266903 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 14 00:24:23.266966 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 14 00:24:23.267029 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 14 00:24:23.267092 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 May 14 00:24:23.267156 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] May 14 00:24:23.267217 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 14 00:24:23.267279 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] May 14 00:24:23.267341 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 14 00:24:23.267404 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] May 14 00:24:23.267466 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 14 00:24:23.267528 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] May 14 00:24:23.267592 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 14 00:24:23.267659 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] 
May 14 00:24:23.267722 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.267784 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.267847 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.267910 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.267972 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.268034 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.268098 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.268160 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.268221 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.268284 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.268346 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.268409 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.268472 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.268534 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.268596 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.268663 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] May 14 00:24:23.268726 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] May 14 00:24:23.268788 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 14 00:24:23.268850 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] May 14 00:24:23.268913 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] May 14 00:24:23.268974 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 14 00:24:23.269043 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] May 14 00:24:23.269107 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] May 14 00:24:23.269170 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] May 14 00:24:23.269231 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] May 14 00:24:23.269295 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 14 00:24:23.269361 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] May 14 00:24:23.269426 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] May 14 00:24:23.269491 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] May 14 00:24:23.269552 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] May 14 00:24:23.269619 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 14 00:24:23.269678 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] May 14 00:24:23.269734 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] May 14 00:24:23.269801 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] May 14 00:24:23.269860 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 14 00:24:23.269937 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] May 14 00:24:23.269996 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 14 00:24:23.270061 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] May 14 00:24:23.270119 kernel: pci_bus 0005:03: resource 2 
[mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 14 00:24:23.270187 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] May 14 00:24:23.270246 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 14 00:24:23.270256 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) May 14 00:24:23.270329 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 00:24:23.270399 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] May 14 00:24:23.270461 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] May 14 00:24:23.270526 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 14 00:24:23.270586 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 May 14 00:24:23.270653 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] May 14 00:24:23.270663 kernel: PCI host bridge to bus 0003:00 May 14 00:24:23.270731 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] May 14 00:24:23.270787 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] May 14 00:24:23.270843 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] May 14 00:24:23.270912 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 May 14 00:24:23.270982 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 May 14 00:24:23.271048 kernel: pci 0003:00:01.0: supports D1 D2 May 14 00:24:23.271112 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot May 14 00:24:23.271181 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 May 14 00:24:23.271245 kernel: pci 0003:00:03.0: supports D1 D2 May 14 00:24:23.271308 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot May 14 00:24:23.271375 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 May 14 00:24:23.271441 kernel: pci 0003:00:05.0: supports D1 D2 May 14 00:24:23.271503 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot May 14 00:24:23.271513 kernel: acpiphp: Slot [1-3] registered May 14 00:24:23.271520 kernel: acpiphp: Slot [2-3] registered May 14 00:24:23.271592 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 May 14 00:24:23.271701 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] May 14 00:24:23.271766 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] May 14 00:24:23.271829 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] May 14 00:24:23.271894 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold May 14 00:24:23.271957 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] May 14 00:24:23.272019 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) May 14 00:24:23.272082 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] May 14 00:24:23.272145 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) May 14 00:24:23.272209 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) May 14 00:24:23.272278 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 May 14 00:24:23.272344 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] May 14 00:24:23.272410 kernel: 
pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] May 14 00:24:23.272472 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] May 14 00:24:23.272535 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold May 14 00:24:23.272597 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] May 14 00:24:23.272664 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) May 14 00:24:23.272727 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] May 14 00:24:23.272792 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) May 14 00:24:23.272848 kernel: pci_bus 0003:00: on NUMA node 0 May 14 00:24:23.272910 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 14 00:24:23.272972 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 14 00:24:23.273033 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 14 00:24:23.273095 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 14 00:24:23.273156 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 14 00:24:23.273219 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 14 00:24:23.273282 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 May 14 00:24:23.273343 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 May 14 00:24:23.273406 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] May 14 00:24:23.273480 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] May 14 00:24:23.273545 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] May 14 00:24:23.273610 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] May 14 00:24:23.273673 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] May 14 00:24:23.273740 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] May 14 00:24:23.273803 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.273866 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.273928 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.273991 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.274053 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.274116 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.274179 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.274244 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.274306 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.274368 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.274431 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.274493 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 
00:24:23.274556 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] May 14 00:24:23.274622 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] May 14 00:24:23.274685 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] May 14 00:24:23.274751 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] May 14 00:24:23.274815 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] May 14 00:24:23.274878 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] May 14 00:24:23.274944 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] May 14 00:24:23.275010 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] May 14 00:24:23.275078 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] May 14 00:24:23.275144 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] May 14 00:24:23.275209 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] May 14 00:24:23.275273 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] May 14 00:24:23.275339 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] May 14 00:24:23.275403 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] May 14 00:24:23.275467 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 14 00:24:23.275532 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 14 00:24:23.275599 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 14 00:24:23.275670 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 14 00:24:23.275735 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 14 00:24:23.275802 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 14 00:24:23.275867 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 14 00:24:23.275931 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 14 00:24:23.275997 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] May 14 00:24:23.276059 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] May 14 00:24:23.276125 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] May 14 00:24:23.276183 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc May 14 00:24:23.276240 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] May 14 00:24:23.276297 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] May 14 00:24:23.276372 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] May 14 00:24:23.276432 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] May 14 00:24:23.276500 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] May 14 00:24:23.276559 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] May 14 00:24:23.276628 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] May 14 00:24:23.276688 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] May 14 00:24:23.276698 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) May 14 00:24:23.276768 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 00:24:23.276832 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] May 14 
00:24:23.276895 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] May 14 00:24:23.276955 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 14 00:24:23.277017 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 May 14 00:24:23.277077 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] May 14 00:24:23.277087 kernel: PCI host bridge to bus 000c:00 May 14 00:24:23.277150 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] May 14 00:24:23.277209 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] May 14 00:24:23.277264 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] May 14 00:24:23.277336 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 May 14 00:24:23.277408 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 May 14 00:24:23.277473 kernel: pci 000c:00:01.0: enabling Extended Tags May 14 00:24:23.277535 kernel: pci 000c:00:01.0: supports D1 D2 May 14 00:24:23.277599 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot May 14 00:24:23.277675 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 May 14 00:24:23.277740 kernel: pci 000c:00:02.0: supports D1 D2 May 14 00:24:23.277803 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot May 14 00:24:23.277873 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 May 14 00:24:23.277936 kernel: pci 000c:00:03.0: supports D1 D2 May 14 00:24:23.277999 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot May 14 00:24:23.278067 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 May 14 00:24:23.278132 kernel: pci 000c:00:04.0: supports D1 D2 May 14 00:24:23.278195 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot May 14 00:24:23.278205 kernel: acpiphp: Slot [1-4] registered May 14 00:24:23.278213 kernel: acpiphp: Slot [2-4] registered May 14 00:24:23.278220 kernel: acpiphp: Slot [3-2] registered May 14 00:24:23.278228 kernel: acpiphp: Slot [4-2] registered May 14 00:24:23.278282 kernel: pci_bus 000c:00: on NUMA node 0 May 14 00:24:23.278346 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 14 00:24:23.278412 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 14 00:24:23.278475 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 14 00:24:23.278538 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 14 00:24:23.278600 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 14 00:24:23.278671 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 14 00:24:23.278735 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 14 00:24:23.278798 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 14 00:24:23.278863 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 14 00:24:23.278927 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 14 00:24:23.278990 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 
64bit pref] to [bus 04] add_size 200000 add_align 100000 May 14 00:24:23.279053 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 14 00:24:23.279117 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] May 14 00:24:23.279179 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] May 14 00:24:23.279246 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] May 14 00:24:23.279310 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] May 14 00:24:23.279374 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] May 14 00:24:23.279437 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] May 14 00:24:23.279499 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] May 14 00:24:23.279562 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] May 14 00:24:23.279628 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.279692 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.279754 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.279819 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.279882 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.279945 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.280008 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.280070 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.280133 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.280195 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.280258 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.280320 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.280385 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.280449 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.280511 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.280574 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.280639 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] May 14 00:24:23.280702 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] May 14 00:24:23.280764 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] May 14 00:24:23.280830 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] May 14 00:24:23.280892 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] May 14 00:24:23.280955 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] May 14 00:24:23.281020 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] May 14 00:24:23.281082 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] May 14 00:24:23.281145 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] May 14 00:24:23.281208 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] May 14 00:24:23.281274 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] May 14 00:24:23.281336 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] May 14 
00:24:23.281396 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] May 14 00:24:23.281453 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] May 14 00:24:23.281520 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] May 14 00:24:23.281580 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] May 14 00:24:23.281659 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] May 14 00:24:23.281720 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] May 14 00:24:23.281786 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] May 14 00:24:23.281845 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] May 14 00:24:23.281912 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] May 14 00:24:23.281972 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] May 14 00:24:23.281984 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) May 14 00:24:23.282054 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 00:24:23.282115 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] May 14 00:24:23.282177 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] May 14 00:24:23.282237 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 14 00:24:23.282299 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 May 14 00:24:23.282360 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] May 14 00:24:23.282371 kernel: PCI host bridge to bus 0002:00 May 14 00:24:23.282436 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] May 14 00:24:23.282494 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] May 14 00:24:23.282550 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] May 14 00:24:23.282625 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 May 14 00:24:23.282696 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 May 14 00:24:23.282763 kernel: pci 0002:00:01.0: supports D1 D2 May 14 00:24:23.282829 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot May 14 00:24:23.282899 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 May 14 00:24:23.282963 kernel: pci 0002:00:03.0: supports D1 D2 May 14 00:24:23.283025 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot May 14 00:24:23.283094 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 May 14 00:24:23.283157 kernel: pci 0002:00:05.0: supports D1 D2 May 14 00:24:23.283221 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot May 14 00:24:23.283294 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 May 14 00:24:23.283358 kernel: pci 0002:00:07.0: supports D1 D2 May 14 00:24:23.283421 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot May 14 00:24:23.283431 kernel: acpiphp: Slot [1-5] registered May 14 00:24:23.283438 kernel: acpiphp: Slot [2-5] registered May 14 00:24:23.283446 kernel: acpiphp: Slot [3-3] registered May 14 00:24:23.283454 kernel: acpiphp: Slot [4-3] registered May 14 00:24:23.283508 kernel: pci_bus 0002:00: on NUMA node 0 May 14 00:24:23.283574 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 14 00:24:23.283640 kernel: pci 0002:00:01.0: bridge 
window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 14 00:24:23.283703 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 14 00:24:23.283768 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 14 00:24:23.283833 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 14 00:24:23.283895 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 14 00:24:23.283959 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 14 00:24:23.284023 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 14 00:24:23.284085 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 14 00:24:23.284149 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 14 00:24:23.284211 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 14 00:24:23.284276 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 14 00:24:23.284339 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] May 14 00:24:23.284402 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] May 14 00:24:23.284465 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] May 14 00:24:23.284528 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] May 14 00:24:23.284591 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] May 14 00:24:23.284658 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] May 14 00:24:23.284724 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] May 14 00:24:23.284786 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] May 14 00:24:23.284850 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.284913 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.284976 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.285040 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.285102 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.285165 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.285230 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.285292 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.285354 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.285417 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.285478 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.285542 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.285608 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.285670 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.285733 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 
0x1000] May 14 00:24:23.285795 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.285860 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] May 14 00:24:23.285923 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] May 14 00:24:23.285987 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] May 14 00:24:23.286050 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] May 14 00:24:23.286112 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] May 14 00:24:23.286175 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] May 14 00:24:23.286239 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] May 14 00:24:23.286305 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] May 14 00:24:23.286369 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] May 14 00:24:23.286432 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] May 14 00:24:23.286495 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] May 14 00:24:23.286560 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] May 14 00:24:23.286634 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] May 14 00:24:23.286690 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] May 14 00:24:23.286759 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] May 14 00:24:23.286818 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] May 14 00:24:23.286886 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] May 14 00:24:23.286944 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] May 14 00:24:23.287018 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] May 14 00:24:23.287079 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] May 14 00:24:23.287145 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] May 14 00:24:23.287204 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] May 14 00:24:23.287214 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) May 14 00:24:23.287283 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 00:24:23.287347 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] May 14 00:24:23.287409 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] May 14 00:24:23.287470 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 14 00:24:23.287532 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 May 14 00:24:23.287592 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] May 14 00:24:23.287602 kernel: PCI host bridge to bus 0001:00 May 14 00:24:23.287669 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] May 14 00:24:23.287727 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] May 14 00:24:23.287783 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] May 14 00:24:23.287853 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 May 14 00:24:23.287925 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 May 14 00:24:23.287988 kernel: pci 0001:00:01.0: enabling Extended Tags May 14 00:24:23.288051 kernel: pci 
0001:00:01.0: supports D1 D2 May 14 00:24:23.288113 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot May 14 00:24:23.288184 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 May 14 00:24:23.288248 kernel: pci 0001:00:02.0: supports D1 D2 May 14 00:24:23.288310 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot May 14 00:24:23.288382 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 May 14 00:24:23.288444 kernel: pci 0001:00:03.0: supports D1 D2 May 14 00:24:23.288507 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot May 14 00:24:23.288576 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 May 14 00:24:23.288645 kernel: pci 0001:00:04.0: supports D1 D2 May 14 00:24:23.288708 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot May 14 00:24:23.288718 kernel: acpiphp: Slot [1-6] registered May 14 00:24:23.288787 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 May 14 00:24:23.288854 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] May 14 00:24:23.288921 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] May 14 00:24:23.288986 kernel: pci 0001:01:00.0: PME# supported from D3cold May 14 00:24:23.289053 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 14 00:24:23.289125 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 May 14 00:24:23.289190 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] May 14 00:24:23.289255 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] May 14 00:24:23.289321 kernel: pci 0001:01:00.1: PME# supported from D3cold May 14 00:24:23.289331 kernel: acpiphp: Slot [2-6] registered May 14 00:24:23.289339 kernel: acpiphp: Slot [3-4] registered May 14 00:24:23.289348 kernel: acpiphp: Slot [4-4] registered May 14 00:24:23.289404 kernel: pci_bus 0001:00: on NUMA node 0 May 14 00:24:23.289467 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 14 00:24:23.289532 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 14 00:24:23.289594 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 14 00:24:23.289660 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 14 00:24:23.289724 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 14 00:24:23.289786 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 14 00:24:23.289852 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 14 00:24:23.289918 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 14 00:24:23.289981 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 14 00:24:23.290043 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 14 00:24:23.290108 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] May 14 00:24:23.290170 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] May 14 00:24:23.290234 kernel: pci 0001:00:02.0: BAR 
14: assigned [mem 0x60200000-0x603fffff] May 14 00:24:23.290298 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] May 14 00:24:23.290362 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] May 14 00:24:23.290425 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] May 14 00:24:23.290487 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] May 14 00:24:23.290550 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] May 14 00:24:23.290615 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.290679 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.290743 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.290807 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.290869 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.290932 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.290995 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.291057 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.291120 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.291182 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.291247 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.291310 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.291374 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.291436 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.291499 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.291562 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.291630 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] May 14 00:24:23.291697 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] May 14 00:24:23.291761 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] May 14 00:24:23.291828 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] May 14 00:24:23.291891 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] May 14 00:24:23.291954 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] May 14 00:24:23.292016 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] May 14 00:24:23.292078 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] May 14 00:24:23.292142 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] May 14 00:24:23.292204 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] May 14 00:24:23.292269 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] May 14 00:24:23.292332 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] May 14 00:24:23.292395 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] May 14 00:24:23.292458 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] May 14 00:24:23.292521 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] May 14 00:24:23.292584 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] May 14 00:24:23.292646 kernel: pci_bus 0001:00: 
resource 4 [mem 0x60000000-0x6fffffff window] May 14 00:24:23.292702 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] May 14 00:24:23.292780 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] May 14 00:24:23.292841 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] May 14 00:24:23.292907 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] May 14 00:24:23.292965 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] May 14 00:24:23.293033 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] May 14 00:24:23.293091 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] May 14 00:24:23.293157 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] May 14 00:24:23.293215 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] May 14 00:24:23.293225 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) May 14 00:24:23.293293 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 00:24:23.293361 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] May 14 00:24:23.293422 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] May 14 00:24:23.293482 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 14 00:24:23.293543 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 May 14 00:24:23.293608 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] May 14 00:24:23.293618 kernel: PCI host bridge to bus 0004:00 May 14 00:24:23.293680 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] May 14 00:24:23.293740 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] May 14 00:24:23.293798 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] May 14 00:24:23.293867 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 May 14 00:24:23.293938 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 May 14 00:24:23.294001 kernel: pci 0004:00:01.0: supports D1 D2 May 14 00:24:23.294065 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot May 14 00:24:23.294133 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 May 14 00:24:23.294200 kernel: pci 0004:00:03.0: supports D1 D2 May 14 00:24:23.294263 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot May 14 00:24:23.294333 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 May 14 00:24:23.294397 kernel: pci 0004:00:05.0: supports D1 D2 May 14 00:24:23.294459 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot May 14 00:24:23.294531 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 May 14 00:24:23.294595 kernel: pci 0004:01:00.0: enabling Extended Tags May 14 00:24:23.294668 kernel: pci 0004:01:00.0: supports D1 D2 May 14 00:24:23.294731 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 14 00:24:23.294808 kernel: pci_bus 0004:02: extended config space not accessible May 14 00:24:23.294885 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 May 14 00:24:23.294953 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] May 14 00:24:23.295021 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] May 14 00:24:23.295092 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] May 14 
00:24:23.295160 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb May 14 00:24:23.295228 kernel: pci 0004:02:00.0: supports D1 D2 May 14 00:24:23.295294 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 14 00:24:23.295367 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 May 14 00:24:23.295434 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] May 14 00:24:23.295498 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold May 14 00:24:23.295557 kernel: pci_bus 0004:00: on NUMA node 0 May 14 00:24:23.295627 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 May 14 00:24:23.295693 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 14 00:24:23.295756 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 14 00:24:23.295819 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 14 00:24:23.295883 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 14 00:24:23.295946 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 14 00:24:23.296009 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 14 00:24:23.296075 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 14 00:24:23.296139 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] May 14 00:24:23.296202 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] May 14 00:24:23.296266 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] May 14 00:24:23.296328 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] May 14 00:24:23.296410 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] May 14 00:24:23.296475 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.296538 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.296609 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.296673 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.296738 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.296800 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.296863 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.296928 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.296990 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.297053 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.297130 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.297195 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.297260 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 14 00:24:23.297326 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] May 14 00:24:23.297392 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] May 14 00:24:23.297460 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] May 14 
00:24:23.297529 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] May 14 00:24:23.297597 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] May 14 00:24:23.297675 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] May 14 00:24:23.297741 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] May 14 00:24:23.297805 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] May 14 00:24:23.297869 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] May 14 00:24:23.297932 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] May 14 00:24:23.297996 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] May 14 00:24:23.298062 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] May 14 00:24:23.298126 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] May 14 00:24:23.298192 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] May 14 00:24:23.298255 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] May 14 00:24:23.298319 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] May 14 00:24:23.298381 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] May 14 00:24:23.298446 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] May 14 00:24:23.298505 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc May 14 00:24:23.298562 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] May 14 00:24:23.298623 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] May 14 00:24:23.298692 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] May 14 00:24:23.298752 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] May 14 00:24:23.298814 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] May 14 00:24:23.298882 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] May 14 00:24:23.298940 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] May 14 00:24:23.299012 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] May 14 00:24:23.299073 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] May 14 00:24:23.299083 kernel: iommu: Default domain type: Translated May 14 00:24:23.299091 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 14 00:24:23.299099 kernel: efivars: Registered efivars operations May 14 00:24:23.299165 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device May 14 00:24:23.299233 kernel: pci 0004:02:00.0: vgaarb: bridge control possible May 14 00:24:23.299301 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none May 14 00:24:23.299312 kernel: vgaarb: loaded May 14 00:24:23.299319 kernel: clocksource: Switched to clocksource arch_sys_counter May 14 00:24:23.299327 kernel: VFS: Disk quotas dquot_6.6.0 May 14 00:24:23.299335 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 00:24:23.299342 kernel: pnp: PnP ACPI init May 14 00:24:23.299411 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved May 14 00:24:23.299471 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved May 14 00:24:23.299530 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved May 14 00:24:23.299588 kernel: system 00:00: [mem 
0x27fff0000000-0x27ffffffffff window] could not be reserved May 14 00:24:23.299787 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved May 14 00:24:23.299848 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved May 14 00:24:23.299904 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved May 14 00:24:23.299960 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved May 14 00:24:23.299973 kernel: pnp: PnP ACPI: found 1 devices May 14 00:24:23.299981 kernel: NET: Registered PF_INET protocol family May 14 00:24:23.299989 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 00:24:23.299997 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 14 00:24:23.300005 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 00:24:23.300013 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 14 00:24:23.300020 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 14 00:24:23.300028 kernel: TCP: Hash tables configured (established 524288 bind 65536) May 14 00:24:23.300036 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 14 00:24:23.300046 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 14 00:24:23.300053 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 00:24:23.300119 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes May 14 00:24:23.300129 kernel: kvm [1]: IPA Size Limit: 48 bits May 14 00:24:23.300137 kernel: kvm [1]: GICv3: no GICV resource entry May 14 00:24:23.300145 kernel: kvm [1]: disabling GICv2 emulation May 14 00:24:23.300152 kernel: kvm [1]: GIC system register CPU interface enabled May 14 00:24:23.300160 kernel: kvm [1]: vgic interrupt IRQ9 May 14 00:24:23.300168 kernel: kvm [1]: VHE mode initialized successfully May 14 00:24:23.300178 kernel: Initialise system trusted keyrings May 14 00:24:23.300185 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 May 14 00:24:23.300193 kernel: Key type asymmetric registered May 14 00:24:23.300200 kernel: Asymmetric key parser 'x509' registered May 14 00:24:23.300208 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 14 00:24:23.300216 kernel: io scheduler mq-deadline registered May 14 00:24:23.300223 kernel: io scheduler kyber registered May 14 00:24:23.300231 kernel: io scheduler bfq registered May 14 00:24:23.300239 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 14 00:24:23.300248 kernel: ACPI: button: Power Button [PWRB] May 14 00:24:23.300256 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). 
May 14 00:24:23.300264 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 00:24:23.300335 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 May 14 00:24:23.300394 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) May 14 00:24:23.300454 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 14 00:24:23.300511 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq May 14 00:24:23.300569 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq May 14 00:24:23.300635 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq May 14 00:24:23.300704 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 May 14 00:24:23.300762 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) May 14 00:24:23.300820 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 14 00:24:23.300877 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq May 14 00:24:23.300934 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq May 14 00:24:23.300995 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq May 14 00:24:23.301060 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 May 14 00:24:23.301118 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) May 14 00:24:23.301175 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 14 00:24:23.301232 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq May 14 00:24:23.301289 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq May 14 00:24:23.301348 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq May 14 00:24:23.301413 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 May 14 00:24:23.301471 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) May 14 00:24:23.301529 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 14 00:24:23.301585 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq May 14 00:24:23.301647 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq May 14 00:24:23.301704 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq May 14 00:24:23.301776 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 May 14 00:24:23.301836 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) May 14 00:24:23.301894 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 14 00:24:23.301951 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq May 14 00:24:23.302008 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq May 14 00:24:23.302066 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq May 14 00:24:23.302131 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 May 14 00:24:23.302192 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) May 14 00:24:23.302249 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 14 00:24:23.302307 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq May 14 00:24:23.302363 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 131072 entries for evtq May 14 
00:24:23.302421 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq May 14 00:24:23.302486 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 May 14 00:24:23.302545 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) May 14 00:24:23.302603 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 14 00:24:23.302663 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq May 14 00:24:23.302723 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq May 14 00:24:23.302780 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq May 14 00:24:23.302844 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 May 14 00:24:23.302903 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) May 14 00:24:23.302961 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 14 00:24:23.303018 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq May 14 00:24:23.303076 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq May 14 00:24:23.303133 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq May 14 00:24:23.303143 kernel: thunder_xcv, ver 1.0 May 14 00:24:23.303151 kernel: thunder_bgx, ver 1.0 May 14 00:24:23.303159 kernel: nicpf, ver 1.0 May 14 00:24:23.303168 kernel: nicvf, ver 1.0 May 14 00:24:23.303232 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 14 00:24:23.303291 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T00:24:21 UTC (1747182261) May 14 00:24:23.303301 kernel: efifb: probing for efifb May 14 00:24:23.303309 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k May 14 00:24:23.303317 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 14 00:24:23.303324 kernel: efifb: scrolling: redraw May 14 00:24:23.303332 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 14 00:24:23.303342 kernel: Console: switching to colour frame buffer device 100x37 May 14 00:24:23.303350 kernel: fb0: EFI VGA frame buffer device May 14 00:24:23.303357 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 May 14 00:24:23.303365 kernel: hid: raw HID events driver (C) Jiri Kosina May 14 00:24:23.303373 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 14 00:24:23.303381 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 14 00:24:23.303389 kernel: watchdog: Hard watchdog permanently disabled May 14 00:24:23.303396 kernel: NET: Registered PF_INET6 protocol family May 14 00:24:23.303404 kernel: Segment Routing with IPv6 May 14 00:24:23.303413 kernel: In-situ OAM (IOAM) with IPv6 May 14 00:24:23.303421 kernel: NET: Registered PF_PACKET protocol family May 14 00:24:23.303429 kernel: Key type dns_resolver registered May 14 00:24:23.303436 kernel: registered taskstats version 1 May 14 00:24:23.303445 kernel: Loading compiled-in X.509 certificates May 14 00:24:23.303453 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 568a15bbab977599d8f910f319ba50c03c8a57bd' May 14 00:24:23.303461 kernel: Key type .fscrypt registered May 14 00:24:23.303468 kernel: Key type fscrypt-provisioning registered May 14 00:24:23.303476 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 14 00:24:23.303485 kernel: ima: Allocated hash algorithm: sha1 May 14 00:24:23.303493 kernel: ima: No architecture policies found May 14 00:24:23.303500 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 14 00:24:23.303564 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 May 14 00:24:23.303632 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 May 14 00:24:23.303695 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 May 14 00:24:23.303757 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 May 14 00:24:23.303820 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 May 14 00:24:23.303882 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 May 14 00:24:23.303948 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 May 14 00:24:23.304010 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 May 14 00:24:23.304073 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 May 14 00:24:23.304136 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 May 14 00:24:23.304199 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 May 14 00:24:23.304262 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 May 14 00:24:23.304324 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 May 14 00:24:23.304387 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 May 14 00:24:23.304453 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 May 14 00:24:23.304516 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 May 14 00:24:23.304579 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 May 14 00:24:23.304645 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 May 14 00:24:23.304709 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 May 14 00:24:23.304770 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 May 14 00:24:23.304833 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 May 14 00:24:23.304896 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 May 14 00:24:23.304961 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 May 14 00:24:23.305023 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 May 14 00:24:23.305086 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 May 14 00:24:23.305147 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 May 14 00:24:23.305210 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 May 14 00:24:23.305272 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 May 14 00:24:23.305334 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 May 14 00:24:23.305397 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 May 14 00:24:23.305459 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 May 14 00:24:23.305524 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 May 14 00:24:23.305587 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 May 14 00:24:23.305652 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 May 14 00:24:23.305715 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 May 14 00:24:23.305777 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 May 14 00:24:23.305840 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 May 14 00:24:23.305901 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 May 14 00:24:23.305965 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 May 14 00:24:23.306028 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 May 14 00:24:23.306091 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 May 14 00:24:23.306153 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 May 14 00:24:23.306217 
kernel: pcieport 0002:00:05.0: Adding to iommu group 21 May 14 00:24:23.306279 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 May 14 00:24:23.306342 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 May 14 00:24:23.306404 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 May 14 00:24:23.306466 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 May 14 00:24:23.306531 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 May 14 00:24:23.306592 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 May 14 00:24:23.306657 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 May 14 00:24:23.306720 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 May 14 00:24:23.306783 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 May 14 00:24:23.306846 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 May 14 00:24:23.306910 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 May 14 00:24:23.306973 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 May 14 00:24:23.307038 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 May 14 00:24:23.307101 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 May 14 00:24:23.307162 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 May 14 00:24:23.307225 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 May 14 00:24:23.307287 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 May 14 00:24:23.307352 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 May 14 00:24:23.307362 kernel: clk: Disabling unused clocks May 14 00:24:23.307370 kernel: Freeing unused kernel memory: 38464K May 14 00:24:23.307380 kernel: Run /init as init process May 14 00:24:23.307387 kernel: with arguments: May 14 00:24:23.307395 kernel: /init May 14 00:24:23.307403 kernel: with environment: May 14 00:24:23.307410 kernel: HOME=/ May 14 00:24:23.307418 kernel: TERM=linux May 14 00:24:23.307425 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 00:24:23.307434 systemd[1]: Successfully made /usr/ read-only. May 14 00:24:23.307444 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 00:24:23.307454 systemd[1]: Detected architecture arm64. May 14 00:24:23.307463 systemd[1]: Running in initrd. May 14 00:24:23.307470 systemd[1]: No hostname configured, using default hostname. May 14 00:24:23.307478 systemd[1]: Hostname set to . May 14 00:24:23.307487 systemd[1]: Initializing machine ID from random generator. May 14 00:24:23.307494 systemd[1]: Queued start job for default target initrd.target. May 14 00:24:23.307502 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 00:24:23.307512 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 00:24:23.307521 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 00:24:23.307530 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 00:24:23.307538 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 00:24:23.307546 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
May 14 00:24:23.307556 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 00:24:23.307564 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 00:24:23.307574 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 00:24:23.307582 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 00:24:23.307590 systemd[1]: Reached target paths.target - Path Units. May 14 00:24:23.307598 systemd[1]: Reached target slices.target - Slice Units. May 14 00:24:23.307610 systemd[1]: Reached target swap.target - Swaps. May 14 00:24:23.307618 systemd[1]: Reached target timers.target - Timer Units. May 14 00:24:23.307626 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 00:24:23.307634 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 00:24:23.307645 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 00:24:23.307653 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 14 00:24:23.307661 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 00:24:23.307669 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 00:24:23.307678 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 00:24:23.307686 systemd[1]: Reached target sockets.target - Socket Units. May 14 00:24:23.307694 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 00:24:23.307702 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 00:24:23.307710 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 00:24:23.307719 systemd[1]: Starting systemd-fsck-usr.service... May 14 00:24:23.307727 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 00:24:23.307736 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 00:24:23.307744 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 00:24:23.307774 systemd-journald[900]: Collecting audit messages is disabled. May 14 00:24:23.307794 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 00:24:23.307803 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 00:24:23.307811 kernel: Bridge firewalling registered May 14 00:24:23.307819 systemd-journald[900]: Journal started May 14 00:24:23.307838 systemd-journald[900]: Runtime Journal (/run/log/journal/f7d2f8b219214ae18efec7ce6d6aa284) is 8M, max 4G, 3.9G free. May 14 00:24:23.247347 systemd-modules-load[902]: Inserted module 'overlay' May 14 00:24:23.330121 systemd[1]: Started systemd-journald.service - Journal Service. May 14 00:24:23.297353 systemd-modules-load[902]: Inserted module 'br_netfilter' May 14 00:24:23.335781 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 00:24:23.346788 systemd[1]: Finished systemd-fsck-usr.service. May 14 00:24:23.357816 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 00:24:23.368661 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 14 00:24:23.382677 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 00:24:23.390989 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 00:24:23.412156 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 00:24:23.418668 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 00:24:23.436814 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 00:24:23.452833 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 00:24:23.463978 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 00:24:23.480829 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 00:24:23.501381 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 00:24:23.522750 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 00:24:23.535979 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 00:24:23.549061 dracut-cmdline[944]: dracut-dracut-053 May 14 00:24:23.549061 dracut-cmdline[944]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5 May 14 00:24:23.556660 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 00:24:23.561626 systemd-resolved[947]: Positive Trust Anchors: May 14 00:24:23.561635 systemd-resolved[947]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:24:23.561667 systemd-resolved[947]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 00:24:23.577018 systemd-resolved[947]: Defaulting to hostname 'linux'. May 14 00:24:23.578360 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 00:24:23.607522 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 00:24:23.713612 kernel: SCSI subsystem initialized May 14 00:24:23.728618 kernel: Loading iSCSI transport class v2.0-870. May 14 00:24:23.747613 kernel: iscsi: registered transport (tcp) May 14 00:24:23.774869 kernel: iscsi: registered transport (qla4xxx) May 14 00:24:23.774895 kernel: QLogic iSCSI HBA Driver May 14 00:24:23.817251 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 14 00:24:23.828534 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 14 00:24:23.889399 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. May 14 00:24:23.889430 kernel: device-mapper: uevent: version 1.0.3 May 14 00:24:23.899088 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 14 00:24:23.964617 kernel: raid6: neonx8 gen() 15848 MB/s May 14 00:24:23.990615 kernel: raid6: neonx4 gen() 15875 MB/s May 14 00:24:24.015615 kernel: raid6: neonx2 gen() 13239 MB/s May 14 00:24:24.040611 kernel: raid6: neonx1 gen() 10580 MB/s May 14 00:24:24.065611 kernel: raid6: int64x8 gen() 6811 MB/s May 14 00:24:24.090610 kernel: raid6: int64x4 gen() 7371 MB/s May 14 00:24:24.115614 kernel: raid6: int64x2 gen() 6136 MB/s May 14 00:24:24.143647 kernel: raid6: int64x1 gen() 5077 MB/s May 14 00:24:24.143667 kernel: raid6: using algorithm neonx4 gen() 15875 MB/s May 14 00:24:24.178061 kernel: raid6: .... xor() 12399 MB/s, rmw enabled May 14 00:24:24.178082 kernel: raid6: using neon recovery algorithm May 14 00:24:24.201060 kernel: xor: measuring software checksum speed May 14 00:24:24.201083 kernel: 8regs : 21641 MB/sec May 14 00:24:24.209011 kernel: 32regs : 21704 MB/sec May 14 00:24:24.216799 kernel: arm64_neon : 28041 MB/sec May 14 00:24:24.224472 kernel: xor: using function: arm64_neon (28041 MB/sec) May 14 00:24:24.284623 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 00:24:24.294195 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 00:24:24.303128 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 00:24:24.338623 systemd-udevd[1146]: Using default interface naming scheme 'v255'. May 14 00:24:24.342143 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 00:24:24.347996 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 00:24:24.378324 dracut-pre-trigger[1156]: rd.md=0: removing MD RAID activation May 14 00:24:24.404673 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 00:24:24.414493 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 00:24:24.532166 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 00:24:24.541563 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 00:24:24.582027 kernel: pps_core: LinuxPPS API ver. 1 registered May 14 00:24:24.582045 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 14 00:24:24.582055 kernel: PTP clock support registered May 14 00:24:24.583615 kernel: ACPI: bus type USB registered May 14 00:24:24.583626 kernel: usbcore: registered new interface driver usbfs May 14 00:24:24.583635 kernel: usbcore: registered new interface driver hub May 14 00:24:24.583644 kernel: usbcore: registered new device driver usb May 14 00:24:24.640105 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 14 00:24:24.640139 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. May 14 00:24:24.640158 kernel: igb 0003:03:00.0: Adding to iommu group 31 May 14 00:24:24.656610 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 32 May 14 00:24:24.661536 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 14 00:24:24.771940 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 33 May 14 00:24:24.772137 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 14 00:24:24.772218 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 May 14 00:24:24.772299 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault May 14 00:24:24.772378 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 May 14 00:24:24.772458 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 14 00:24:24.772535 kernel: nvme 0005:03:00.0: Adding to iommu group 34 May 14 00:24:24.772627 kernel: nvme 0005:04:00.0: Adding to iommu group 35 May 14 00:24:24.661611 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 00:24:24.786715 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 00:24:24.791793 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 00:24:24.791847 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 00:24:24.808293 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 00:24:24.819924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 00:24:24.830890 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 00:24:24.833936 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 00:24:24.944886 kernel: igb 0003:03:00.0: added PHC on eth0 May 14 00:24:24.945020 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 14 00:24:24.945102 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6a:c4 May 14 00:24:24.945178 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 May 14 00:24:24.945252 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) May 14 00:24:24.945333 kernel: igb 0003:03:00.1: Adding to iommu group 36 May 14 00:24:24.843959 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 00:24:24.853468 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 00:24:24.870049 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 00:24:24.951620 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 00:24:24.973843 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 00:24:24.983935 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 00:24:24.995002 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 00:24:25.134663 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010 May 14 00:24:25.134870 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 14 00:24:25.145214 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 May 14 00:24:25.157840 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed May 14 00:24:25.169475 kernel: nvme nvme0: pci function 0005:03:00.0 May 14 00:24:25.179472 kernel: hub 1-0:1.0: USB hub found May 14 00:24:25.188255 kernel: hub 1-0:1.0: 4 ports detected May 14 00:24:25.197533 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
May 14 00:24:25.210766 kernel: nvme nvme1: pci function 0005:04:00.0 May 14 00:24:25.234611 kernel: nvme nvme0: Shutdown timeout set to 8 seconds May 14 00:24:25.234760 kernel: hub 2-0:1.0: USB hub found May 14 00:24:25.240216 kernel: hub 2-0:1.0: 4 ports detected May 14 00:24:25.256610 kernel: nvme nvme1: Shutdown timeout set to 8 seconds May 14 00:24:25.258009 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 00:24:25.279514 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged May 14 00:24:25.282611 kernel: nvme nvme0: 32/0/0 default/read/poll queues May 14 00:24:25.299357 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 00:24:25.299391 kernel: nvme nvme1: 32/0/0 default/read/poll queues May 14 00:24:25.299537 kernel: GPT:9289727 != 1875385007 May 14 00:24:25.304307 kernel: igb 0003:03:00.1: added PHC on eth1 May 14 00:24:25.304407 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 00:24:25.304417 kernel: GPT:9289727 != 1875385007 May 14 00:24:25.304425 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 00:24:25.304434 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 14 00:24:25.314155 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection May 14 00:24:25.389315 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6a:c5 May 14 00:24:25.389399 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 May 14 00:24:25.410643 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) May 14 00:24:25.430610 kernel: igb 0003:03:00.0 eno1: renamed from eth0 May 14 00:24:25.444612 kernel: BTRFS: device fsid ee830c17-a93d-4109-bd12-3fec8ef6763d devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (1195) May 14 00:24:25.444623 kernel: igb 0003:03:00.1 eno2: renamed from eth1 May 14 00:24:25.444713 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by (udev-worker) (1210) May 14 00:24:25.445152 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. May 14 00:24:25.510988 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. May 14 00:24:25.527690 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. May 14 00:24:25.554225 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd May 14 00:24:25.544242 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. May 14 00:24:25.567061 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. May 14 00:24:25.578028 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 00:24:25.602624 disk-uuid[1318]: Primary Header is updated. May 14 00:24:25.602624 disk-uuid[1318]: Secondary Entries is updated. May 14 00:24:25.602624 disk-uuid[1318]: Secondary Header is updated. 
May 14 00:24:25.628868 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 14 00:24:25.678619 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 14 00:24:25.685617 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 May 14 00:24:25.685718 kernel: hub 1-3:1.0: USB hub found May 14 00:24:25.716519 kernel: hub 1-3:1.0: 4 ports detected May 14 00:24:25.716703 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014 May 14 00:24:25.733260 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 14 00:24:25.813618 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd May 14 00:24:25.848426 kernel: hub 2-3:1.0: USB hub found May 14 00:24:25.848633 kernel: hub 2-3:1.0: 4 ports detected May 14 00:24:26.080995 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged May 14 00:24:26.387616 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 14 00:24:26.402610 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 May 14 00:24:26.423614 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 May 14 00:24:26.622619 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 14 00:24:26.622657 disk-uuid[1319]: The operation has completed successfully. May 14 00:24:26.652220 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 00:24:26.652310 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 00:24:26.692101 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 00:24:26.709271 sh[1485]: Success May 14 00:24:26.732611 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 14 00:24:26.765215 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 00:24:26.776639 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 00:24:26.798684 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 00:24:26.805613 kernel: BTRFS info (device dm-0): first mount of filesystem ee830c17-a93d-4109-bd12-3fec8ef6763d May 14 00:24:26.805629 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 14 00:24:26.805638 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 14 00:24:26.805648 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 14 00:24:26.805662 kernel: BTRFS info (device dm-0): using free space tree May 14 00:24:26.808616 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 14 00:24:26.891856 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 00:24:26.902110 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 00:24:26.903144 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 00:24:26.919093 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 14 00:24:27.032948 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 14 00:24:27.032973 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 14 00:24:27.032991 kernel: BTRFS info (device nvme0n1p6): using free space tree May 14 00:24:27.033010 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 14 00:24:27.033028 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 14 00:24:27.033046 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 14 00:24:27.034234 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 00:24:27.045401 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 00:24:27.062402 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 00:24:27.089728 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 00:24:27.119696 systemd-networkd[1673]: lo: Link UP May 14 00:24:27.119701 systemd-networkd[1673]: lo: Gained carrier May 14 00:24:27.123512 systemd-networkd[1673]: Enumeration completed May 14 00:24:27.123690 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 00:24:27.124958 systemd-networkd[1673]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:24:27.131091 systemd[1]: Reached target network.target - Network. May 14 00:24:27.173536 ignition[1671]: Ignition 2.20.0 May 14 00:24:27.173549 ignition[1671]: Stage: fetch-offline May 14 00:24:27.177009 systemd-networkd[1673]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:24:27.173582 ignition[1671]: no configs at "/usr/lib/ignition/base.d" May 14 00:24:27.186306 unknown[1671]: fetched base config from "system" May 14 00:24:27.173590 ignition[1671]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 14 00:24:27.186312 unknown[1671]: fetched user config from "system" May 14 00:24:27.173734 ignition[1671]: parsed url from cmdline: "" May 14 00:24:27.189559 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 00:24:27.173737 ignition[1671]: no config URL provided May 14 00:24:27.200401 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 00:24:27.173741 ignition[1671]: reading system config file "/usr/lib/ignition/user.ign" May 14 00:24:27.201536 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 14 00:24:27.173790 ignition[1671]: parsing config with SHA512: 4b2982ce7f1602b92d1ba3069efb876339aa6095c21892771895f7e0a1ab39a24e31913b073eb16390662a8c7243b9dde2f60c8163a382ad9992f03f8a1d16d4 May 14 00:24:27.228484 systemd-networkd[1673]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 14 00:24:27.186752 ignition[1671]: fetch-offline: fetch-offline passed May 14 00:24:27.186756 ignition[1671]: POST message to Packet Timeline May 14 00:24:27.186760 ignition[1671]: POST Status error: resource requires networking May 14 00:24:27.186846 ignition[1671]: Ignition finished successfully May 14 00:24:27.234229 ignition[1709]: Ignition 2.20.0 May 14 00:24:27.234235 ignition[1709]: Stage: kargs May 14 00:24:27.234442 ignition[1709]: no configs at "/usr/lib/ignition/base.d" May 14 00:24:27.234450 ignition[1709]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 14 00:24:27.235900 ignition[1709]: kargs: kargs passed May 14 00:24:27.235905 ignition[1709]: POST message to Packet Timeline May 14 00:24:27.236139 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #1 May 14 00:24:27.238378 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46020->[::1]:53: read: connection refused May 14 00:24:27.439081 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #2 May 14 00:24:27.439462 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52741->[::1]:53: read: connection refused May 14 00:24:27.825618 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up May 14 00:24:27.828973 systemd-networkd[1673]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:24:27.840378 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #3 May 14 00:24:27.841205 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34445->[::1]:53: read: connection refused May 14 00:24:28.437617 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 14 00:24:28.440519 systemd-networkd[1673]: eno1: Link UP May 14 00:24:28.440651 systemd-networkd[1673]: eno2: Link UP May 14 00:24:28.440766 systemd-networkd[1673]: enP1p1s0f0np0: Link UP May 14 00:24:28.440896 systemd-networkd[1673]: enP1p1s0f0np0: Gained carrier May 14 00:24:28.447734 systemd-networkd[1673]: enP1p1s0f1np1: Link UP May 14 00:24:28.468652 systemd-networkd[1673]: enP1p1s0f0np0: DHCPv4 address 147.75.51.18/30, gateway 147.75.51.17 acquired from 147.28.144.140 May 14 00:24:28.642273 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #4 May 14 00:24:28.642694 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56098->[::1]:53: read: connection refused May 14 00:24:28.833743 systemd-networkd[1673]: enP1p1s0f1np1: Gained carrier May 14 00:24:29.465700 systemd-networkd[1673]: enP1p1s0f0np0: Gained IPv6LL May 14 00:24:30.243735 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #5 May 14 00:24:30.244437 ignition[1709]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48245->[::1]:53: read: connection refused May 14 00:24:30.681723 systemd-networkd[1673]: enP1p1s0f1np1: Gained IPv6LL May 14 00:24:33.447481 ignition[1709]: GET https://metadata.packet.net/metadata: attempt #6 May 14 00:24:33.991314 ignition[1709]: GET result: OK May 14 00:24:34.335536 ignition[1709]: Ignition finished successfully May 14 00:24:34.338986 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
May 14 00:24:34.342114 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 00:24:34.367277 ignition[1733]: Ignition 2.20.0 May 14 00:24:34.367290 ignition[1733]: Stage: disks May 14 00:24:34.367449 ignition[1733]: no configs at "/usr/lib/ignition/base.d" May 14 00:24:34.367458 ignition[1733]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 14 00:24:34.369069 ignition[1733]: disks: disks passed May 14 00:24:34.369073 ignition[1733]: POST message to Packet Timeline May 14 00:24:34.369091 ignition[1733]: GET https://metadata.packet.net/metadata: attempt #1 May 14 00:24:34.885741 ignition[1733]: GET result: OK May 14 00:24:35.296750 ignition[1733]: Ignition finished successfully May 14 00:24:35.299496 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 00:24:35.305307 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 00:24:35.312804 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 00:24:35.320737 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 00:24:35.329196 systemd[1]: Reached target sysinit.target - System Initialization. May 14 00:24:35.338095 systemd[1]: Reached target basic.target - Basic System. May 14 00:24:35.348107 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 00:24:35.376591 systemd-fsck[1751]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 14 00:24:35.380191 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 00:24:35.387885 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 00:24:35.470610 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9f8d74e6-c079-469f-823a-18a62077a2c7 r/w with ordered data mode. Quota mode: none. May 14 00:24:35.471002 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 00:24:35.481396 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 00:24:35.492545 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 00:24:35.511175 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 00:24:35.519615 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/nvme0n1p6 scanned by mount (1762) May 14 00:24:35.519640 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 14 00:24:35.519650 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 14 00:24:35.519660 kernel: BTRFS info (device nvme0n1p6): using free space tree May 14 00:24:35.521610 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 14 00:24:35.521622 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 14 00:24:35.605279 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 14 00:24:35.611489 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... May 14 00:24:35.627116 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 00:24:35.627153 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 00:24:35.640389 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 00:24:35.654361 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 00:24:35.667692 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 14 00:24:35.686886 coreos-metadata[1780]: May 14 00:24:35.671 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 14 00:24:35.697655 coreos-metadata[1781]: May 14 00:24:35.671 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 14 00:24:35.717139 initrd-setup-root[1802]: cut: /sysroot/etc/passwd: No such file or directory May 14 00:24:35.723364 initrd-setup-root[1809]: cut: /sysroot/etc/group: No such file or directory May 14 00:24:35.729827 initrd-setup-root[1816]: cut: /sysroot/etc/shadow: No such file or directory May 14 00:24:35.736079 initrd-setup-root[1823]: cut: /sysroot/etc/gshadow: No such file or directory May 14 00:24:35.805795 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 00:24:35.817405 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 00:24:35.834075 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 00:24:35.842611 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 14 00:24:35.866243 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 00:24:35.893794 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 00:24:35.905542 ignition[1898]: INFO : Ignition 2.20.0 May 14 00:24:35.905542 ignition[1898]: INFO : Stage: mount May 14 00:24:35.916858 ignition[1898]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:24:35.916858 ignition[1898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 14 00:24:35.916858 ignition[1898]: INFO : mount: mount passed May 14 00:24:35.916858 ignition[1898]: INFO : POST message to Packet Timeline May 14 00:24:35.916858 ignition[1898]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 14 00:24:36.146048 coreos-metadata[1780]: May 14 00:24:36.146 INFO Fetch successful May 14 00:24:36.151022 coreos-metadata[1781]: May 14 00:24:36.147 INFO Fetch successful May 14 00:24:36.187403 coreos-metadata[1780]: May 14 00:24:36.187 INFO wrote hostname ci-4284.0.0-n-c871d2567c to /sysroot/etc/hostname May 14 00:24:36.190913 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 14 00:24:36.202044 systemd[1]: flatcar-static-network.service: Deactivated successfully. May 14 00:24:36.202139 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. May 14 00:24:36.535031 ignition[1898]: INFO : GET result: OK May 14 00:24:37.444697 ignition[1898]: INFO : Ignition finished successfully May 14 00:24:37.446994 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 00:24:37.455748 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 00:24:37.479860 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 00:24:37.522805 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/nvme0n1p6 scanned by mount (1921) May 14 00:24:37.522838 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 14 00:24:37.537184 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 14 00:24:37.550249 kernel: BTRFS info (device nvme0n1p6): using free space tree May 14 00:24:37.573121 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 14 00:24:37.573149 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 14 00:24:37.581288 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 00:24:37.613780 ignition[1938]: INFO : Ignition 2.20.0 May 14 00:24:37.613780 ignition[1938]: INFO : Stage: files May 14 00:24:37.623273 ignition[1938]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:24:37.623273 ignition[1938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 14 00:24:37.623273 ignition[1938]: DEBUG : files: compiled without relabeling support, skipping May 14 00:24:37.623273 ignition[1938]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 00:24:37.623273 ignition[1938]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 00:24:37.623273 ignition[1938]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 00:24:37.623273 ignition[1938]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 00:24:37.623273 ignition[1938]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 00:24:37.623273 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 14 00:24:37.623273 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 14 00:24:37.619140 unknown[1938]: wrote ssh authorized keys file for user: core May 14 00:24:37.715368 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 00:24:37.873480 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 14 00:24:37.883961 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 14 00:24:37.883961 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 14 00:24:37.883961 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 00:24:37.883961 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 00:24:37.883961 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 00:24:37.883961 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 00:24:37.883961 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 00:24:37.883961 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 00:24:37.883961 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:24:37.883961 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:24:37.883961 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 14 00:24:37.883961 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 14 00:24:37.883961 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 14 00:24:37.883961 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 14 00:24:38.065528 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 14 00:24:38.435337 ignition[1938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 14 00:24:38.435337 ignition[1938]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 14 00:24:38.460056 ignition[1938]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:24:38.460056 ignition[1938]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:24:38.460056 ignition[1938]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 14 00:24:38.460056 ignition[1938]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 14 00:24:38.460056 ignition[1938]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 14 00:24:38.460056 ignition[1938]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 00:24:38.460056 ignition[1938]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 00:24:38.460056 ignition[1938]: INFO : files: files passed May 14 00:24:38.460056 ignition[1938]: INFO : POST message to Packet Timeline May 14 00:24:38.460056 ignition[1938]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 14 00:24:38.971444 ignition[1938]: INFO : GET result: OK May 14 00:24:39.246760 ignition[1938]: INFO : Ignition finished successfully May 14 00:24:39.249966 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 00:24:39.259914 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 00:24:39.274166 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 00:24:39.282826 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 00:24:39.282905 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 14 00:24:39.304030 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 00:24:39.336090 initrd-setup-root-after-ignition[1981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 00:24:39.336090 initrd-setup-root-after-ignition[1981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 00:24:39.313829 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 00:24:39.370533 initrd-setup-root-after-ignition[1985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 00:24:39.325823 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
May 14 00:24:39.400406 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 00:24:39.400569 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 00:24:39.412353 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 00:24:39.428625 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 00:24:39.439857 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 00:24:39.440746 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 00:24:39.473987 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 00:24:39.486786 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 00:24:39.510430 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 00:24:39.522288 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 00:24:39.528180 systemd[1]: Stopped target timers.target - Timer Units. May 14 00:24:39.539850 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 00:24:39.539959 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 00:24:39.551588 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 00:24:39.562928 systemd[1]: Stopped target basic.target - Basic System. May 14 00:24:39.574406 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 00:24:39.585823 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 00:24:39.597317 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 00:24:39.608648 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 00:24:39.619872 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 00:24:39.631197 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 00:24:39.642457 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 00:24:39.659289 systemd[1]: Stopped target swap.target - Swaps. May 14 00:24:39.670574 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 00:24:39.670680 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 00:24:39.682023 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 00:24:39.693039 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 00:24:39.704171 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 00:24:39.707642 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 00:24:39.715301 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 00:24:39.715400 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 00:24:39.732177 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 00:24:39.732275 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 00:24:39.743553 systemd[1]: Stopped target paths.target - Path Units. May 14 00:24:39.760400 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 00:24:39.764636 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 14 00:24:39.771807 systemd[1]: Stopped target slices.target - Slice Units. May 14 00:24:39.783155 systemd[1]: Stopped target sockets.target - Socket Units. May 14 00:24:39.794648 systemd[1]: iscsid.socket: Deactivated successfully. May 14 00:24:39.794732 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 00:24:39.806255 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 00:24:39.897999 ignition[2006]: INFO : Ignition 2.20.0 May 14 00:24:39.897999 ignition[2006]: INFO : Stage: umount May 14 00:24:39.897999 ignition[2006]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:24:39.897999 ignition[2006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 14 00:24:39.897999 ignition[2006]: INFO : umount: umount passed May 14 00:24:39.897999 ignition[2006]: INFO : POST message to Packet Timeline May 14 00:24:39.897999 ignition[2006]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 14 00:24:39.806316 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 00:24:39.817760 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 00:24:39.817855 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 00:24:39.829231 systemd[1]: ignition-files.service: Deactivated successfully. May 14 00:24:39.829315 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 00:24:39.846384 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 14 00:24:39.846473 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 14 00:24:39.858455 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 00:24:39.872254 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 00:24:39.880656 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 00:24:39.880768 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 00:24:39.892530 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 00:24:39.892626 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 00:24:39.905866 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 00:24:39.907858 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 00:24:39.907936 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 00:24:39.948593 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 00:24:39.948809 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 00:24:40.416378 ignition[2006]: INFO : GET result: OK May 14 00:24:40.754013 ignition[2006]: INFO : Ignition finished successfully May 14 00:24:40.756882 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 00:24:40.757091 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 00:24:40.764223 systemd[1]: Stopped target network.target - Network. May 14 00:24:40.773117 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 00:24:40.773169 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 00:24:40.782682 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 00:24:40.782717 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 00:24:40.792086 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 00:24:40.792159 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
May 14 00:24:40.801816 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 00:24:40.801850 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 00:24:40.811603 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 00:24:40.811643 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 00:24:40.821631 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 00:24:40.831442 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 00:24:40.841362 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 00:24:40.841491 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 00:24:40.854994 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 00:24:40.856205 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 00:24:40.856444 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 00:24:40.868952 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 00:24:40.869248 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 00:24:40.869378 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 00:24:40.876973 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 00:24:40.877860 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 00:24:40.878036 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 00:24:40.888318 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 00:24:40.896713 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 00:24:40.896772 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 00:24:40.907203 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:24:40.907241 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 00:24:40.917550 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 00:24:40.917618 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 00:24:40.927943 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 00:24:40.945113 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 00:24:40.946071 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 00:24:40.946682 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 00:24:40.958926 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 00:24:40.959090 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 00:24:40.971912 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 00:24:40.971963 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 00:24:40.988493 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 00:24:40.988548 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 00:24:40.999988 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 00:24:41.000024 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 00:24:41.016517 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 14 00:24:41.016586 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 00:24:41.028957 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 00:24:41.039707 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 00:24:41.039759 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 00:24:41.051840 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 14 00:24:41.051904 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 00:24:41.063530 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 00:24:41.063564 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 00:24:41.075327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 00:24:41.075361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 00:24:41.095322 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 14 00:24:41.095388 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 00:24:41.095709 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 00:24:41.095783 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 00:24:41.620477 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 00:24:41.620641 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 00:24:41.631987 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 00:24:41.643300 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 00:24:41.663122 systemd[1]: Switching root. May 14 00:24:41.719273 systemd-journald[900]: Journal stopped May 14 00:24:43.819945 systemd-journald[900]: Received SIGTERM from PID 1 (systemd). May 14 00:24:43.819974 kernel: SELinux: policy capability network_peer_controls=1 May 14 00:24:43.819985 kernel: SELinux: policy capability open_perms=1 May 14 00:24:43.819992 kernel: SELinux: policy capability extended_socket_class=1 May 14 00:24:43.820000 kernel: SELinux: policy capability always_check_network=0 May 14 00:24:43.820008 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 00:24:43.820016 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 00:24:43.820026 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 00:24:43.820034 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 00:24:43.820042 kernel: audit: type=1403 audit(1747182281.892:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 00:24:43.820051 systemd[1]: Successfully loaded SELinux policy in 116.300ms. May 14 00:24:43.820060 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.001ms. May 14 00:24:43.820070 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 00:24:43.820079 systemd[1]: Detected architecture arm64. May 14 00:24:43.820090 systemd[1]: Detected first boot. May 14 00:24:43.820099 systemd[1]: Hostname set to <ci-4284.0.0-n-c871d2567c>. 
May 14 00:24:43.820108 systemd[1]: Initializing machine ID from random generator. May 14 00:24:43.820117 zram_generator::config[2100]: No configuration found. May 14 00:24:43.820128 systemd[1]: Populated /etc with preset unit settings. May 14 00:24:43.820137 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 00:24:43.820146 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 00:24:43.820155 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 00:24:43.820164 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 00:24:43.820173 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 00:24:43.820182 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 00:24:43.820193 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 00:24:43.820202 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 00:24:43.820211 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 00:24:43.820220 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 00:24:43.820229 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 00:24:43.820238 systemd[1]: Created slice user.slice - User and Session Slice. May 14 00:24:43.820247 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 00:24:43.820256 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 00:24:43.820267 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 00:24:43.820276 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 00:24:43.820285 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 00:24:43.820295 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 00:24:43.820304 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 14 00:24:43.820312 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 00:24:43.820324 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 00:24:43.820335 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 00:24:43.820344 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 00:24:43.820355 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 00:24:43.820364 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 00:24:43.820373 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 00:24:43.820382 systemd[1]: Reached target slices.target - Slice Units. May 14 00:24:43.820392 systemd[1]: Reached target swap.target - Swaps. May 14 00:24:43.820401 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 00:24:43.820410 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 00:24:43.820421 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 00:24:43.820430 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 14 00:24:43.820439 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 00:24:43.820449 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 00:24:43.820458 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 00:24:43.820469 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 00:24:43.820478 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 00:24:43.820487 systemd[1]: Mounting media.mount - External Media Directory... May 14 00:24:43.820497 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 00:24:43.820506 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 00:24:43.820515 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 00:24:43.820525 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 00:24:43.820535 systemd[1]: Reached target machines.target - Containers. May 14 00:24:43.820546 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 00:24:43.820555 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:24:43.820564 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 00:24:43.820574 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 00:24:43.820583 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 00:24:43.820592 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 00:24:43.820601 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 00:24:43.820614 kernel: ACPI: bus type drm_connector registered May 14 00:24:43.820623 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 00:24:43.820634 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 00:24:43.820643 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 00:24:43.820652 kernel: loop: module loaded May 14 00:24:43.820661 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 00:24:43.820670 kernel: fuse: init (API version 7.39) May 14 00:24:43.820678 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 00:24:43.820688 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 00:24:43.820697 systemd[1]: Stopped systemd-fsck-usr.service. May 14 00:24:43.820709 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:24:43.820718 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 00:24:43.820729 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 00:24:43.820738 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 00:24:43.820765 systemd-journald[2209]: Collecting audit messages is disabled. 
May 14 00:24:43.820787 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 00:24:43.820797 systemd-journald[2209]: Journal started May 14 00:24:43.820815 systemd-journald[2209]: Runtime Journal (/run/log/journal/12235ab2fb3e4f818ef019a4296cbb54) is 8M, max 4G, 3.9G free. May 14 00:24:42.453846 systemd[1]: Queued start job for default target multi-user.target. May 14 00:24:42.465895 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 14 00:24:42.466274 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 00:24:42.466572 systemd[1]: systemd-journald.service: Consumed 3.410s CPU time. May 14 00:24:43.865622 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 00:24:43.887619 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 00:24:43.910697 systemd[1]: verity-setup.service: Deactivated successfully. May 14 00:24:43.910732 systemd[1]: Stopped verity-setup.service. May 14 00:24:43.936627 systemd[1]: Started systemd-journald.service - Journal Service. May 14 00:24:43.942279 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 00:24:43.947964 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 00:24:43.953550 systemd[1]: Mounted media.mount - External Media Directory. May 14 00:24:43.959064 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 00:24:43.964525 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 00:24:43.969955 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 00:24:43.976636 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 00:24:43.982301 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 00:24:43.988017 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 00:24:43.988198 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 00:24:43.993774 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:24:43.993950 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 00:24:43.999480 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:24:43.999666 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 00:24:44.005222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:24:44.005389 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 00:24:44.010874 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 00:24:44.011048 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 00:24:44.016458 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:24:44.017634 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 00:24:44.023009 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 00:24:44.028114 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 00:24:44.033412 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 00:24:44.038554 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 00:24:44.043884 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
May 14 00:24:44.060929 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 00:24:44.067149 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 00:24:44.082300 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 00:24:44.087166 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 00:24:44.087194 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 00:24:44.092769 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 00:24:44.098512 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 00:24:44.104319 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 00:24:44.109166 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 00:24:44.110518 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 00:24:44.116225 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 00:24:44.121037 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:24:44.122112 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 00:24:44.126971 systemd-journald[2209]: Time spent on flushing to /var/log/journal/12235ab2fb3e4f818ef019a4296cbb54 is 23.406ms for 2358 entries. May 14 00:24:44.126971 systemd-journald[2209]: System Journal (/var/log/journal/12235ab2fb3e4f818ef019a4296cbb54) is 8M, max 195.6M, 187.6M free. May 14 00:24:44.160638 systemd-journald[2209]: Received client request to flush runtime journal. May 14 00:24:44.160731 kernel: loop0: detected capacity change from 0 to 8 May 14 00:24:44.139492 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 00:24:44.140634 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 00:24:44.146333 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 00:24:44.152086 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 00:24:44.157956 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 14 00:24:44.175438 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 00:24:44.176574 systemd-tmpfiles[2253]: ACLs are not supported, ignoring. May 14 00:24:44.176587 systemd-tmpfiles[2253]: ACLs are not supported, ignoring. May 14 00:24:44.183613 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 00:24:44.187845 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 00:24:44.193634 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 00:24:44.198437 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 00:24:44.205745 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 00:24:44.210513 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 14 00:24:44.216520 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 00:24:44.229804 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 00:24:44.233623 kernel: loop1: detected capacity change from 0 to 103832 May 14 00:24:44.240082 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 00:24:44.259261 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 00:24:44.264860 udevadm[2255]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 14 00:24:44.267392 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 00:24:44.268015 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 00:24:44.283258 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 00:24:44.287613 kernel: loop2: detected capacity change from 0 to 201592 May 14 00:24:44.300321 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 00:24:44.329924 systemd-tmpfiles[2295]: ACLs are not supported, ignoring. May 14 00:24:44.329937 systemd-tmpfiles[2295]: ACLs are not supported, ignoring. May 14 00:24:44.333749 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 00:24:44.359622 kernel: loop3: detected capacity change from 0 to 126448 May 14 00:24:44.363982 ldconfig[2245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 00:24:44.365630 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 00:24:44.409621 kernel: loop4: detected capacity change from 0 to 8 May 14 00:24:44.422619 kernel: loop5: detected capacity change from 0 to 103832 May 14 00:24:44.438618 kernel: loop6: detected capacity change from 0 to 201592 May 14 00:24:44.457619 kernel: loop7: detected capacity change from 0 to 126448 May 14 00:24:44.461755 (sd-merge)[2303]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. May 14 00:24:44.462216 (sd-merge)[2303]: Merged extensions into '/usr'. May 14 00:24:44.465132 systemd[1]: Reload requested from client PID 2250 ('systemd-sysext') (unit systemd-sysext.service)... May 14 00:24:44.465144 systemd[1]: Reloading... May 14 00:24:44.513611 zram_generator::config[2335]: No configuration found. May 14 00:24:44.607105 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:24:44.668481 systemd[1]: Reloading finished in 202 ms. May 14 00:24:44.686023 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 00:24:44.690961 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 00:24:44.709968 systemd[1]: Starting ensure-sysext.service... May 14 00:24:44.715849 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 00:24:44.722534 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 00:24:44.733667 systemd[1]: Reload requested from client PID 2385 ('systemctl') (unit ensure-sysext.service)... May 14 00:24:44.733682 systemd[1]: Reloading... 
May 14 00:24:44.735172 systemd-tmpfiles[2386]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 00:24:44.735365 systemd-tmpfiles[2386]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 00:24:44.735986 systemd-tmpfiles[2386]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 00:24:44.736180 systemd-tmpfiles[2386]: ACLs are not supported, ignoring. May 14 00:24:44.736229 systemd-tmpfiles[2386]: ACLs are not supported, ignoring. May 14 00:24:44.739082 systemd-tmpfiles[2386]: Detected autofs mount point /boot during canonicalization of boot. May 14 00:24:44.739090 systemd-tmpfiles[2386]: Skipping /boot May 14 00:24:44.747509 systemd-tmpfiles[2386]: Detected autofs mount point /boot during canonicalization of boot. May 14 00:24:44.747517 systemd-tmpfiles[2386]: Skipping /boot May 14 00:24:44.749582 systemd-udevd[2387]: Using default interface naming scheme 'v255'. May 14 00:24:44.793619 zram_generator::config[2439]: No configuration found. May 14 00:24:44.817635 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (2408) May 14 00:24:44.836630 kernel: IPMI message handler: version 39.2 May 14 00:24:44.846618 kernel: ipmi device interface May 14 00:24:44.864131 kernel: ipmi_si: IPMI System Interface driver May 14 00:24:44.864175 kernel: ipmi_si: Unable to find any System Interface(s) May 14 00:24:44.870614 kernel: ipmi_ssif: IPMI SSIF Interface driver May 14 00:24:44.921781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:24:45.001851 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 14 00:24:45.002058 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. May 14 00:24:45.006797 systemd[1]: Reloading finished in 272 ms. May 14 00:24:45.026864 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 00:24:45.047763 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 00:24:45.070984 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 14 00:24:45.076690 systemd[1]: Finished ensure-sysext.service. May 14 00:24:45.099222 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 00:24:45.117418 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 00:24:45.122412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:24:45.123454 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 14 00:24:45.129280 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 00:24:45.135069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 00:24:45.140879 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 00:24:45.142656 lvm[2620]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:24:45.146484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 14 00:24:45.151293 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 00:24:45.152190 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 00:24:45.156965 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:24:45.158097 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 00:24:45.158186 augenrules[2646]: No rules May 14 00:24:45.164493 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 00:24:45.171001 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 00:24:45.177111 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 14 00:24:45.182642 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 00:24:45.203117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 00:24:45.208565 systemd[1]: audit-rules.service: Deactivated successfully. May 14 00:24:45.209317 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 00:24:45.214481 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 00:24:45.220636 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 14 00:24:45.226196 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:24:45.226346 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 00:24:45.231399 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:24:45.231564 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 00:24:45.237017 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:24:45.237174 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 00:24:45.241935 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:24:45.242079 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 00:24:45.246816 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 00:24:45.251814 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 00:24:45.257235 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 00:24:45.270473 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 00:24:45.276092 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 14 00:24:45.280619 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:24:45.280686 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 00:24:45.290444 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 00:24:45.293570 lvm[2672]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:24:45.297084 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
May 14 00:24:45.301750 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:24:45.302245 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 00:24:45.307143 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 14 00:24:45.326087 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 14 00:24:45.333744 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 00:24:45.392300 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 00:24:45.397160 systemd[1]: Reached target time-set.target - System Time Set. May 14 00:24:45.400056 systemd-resolved[2654]: Positive Trust Anchors: May 14 00:24:45.400070 systemd-resolved[2654]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:24:45.400101 systemd-resolved[2654]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 00:24:45.403622 systemd-resolved[2654]: Using system hostname 'ci-4284.0.0-n-c871d2567c'. May 14 00:24:45.404963 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 00:24:45.408513 systemd-networkd[2653]: lo: Link UP May 14 00:24:45.408519 systemd-networkd[2653]: lo: Gained carrier May 14 00:24:45.410257 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 00:24:45.412245 systemd-networkd[2653]: bond0: netdev ready May 14 00:24:45.414542 systemd[1]: Reached target sysinit.target - System Initialization. May 14 00:24:45.418835 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 00:24:45.421207 systemd-networkd[2653]: Enumeration completed May 14 00:24:45.423092 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 00:24:45.427544 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 00:24:45.429153 systemd-networkd[2653]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:52:20:00.network. May 14 00:24:45.431891 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 00:24:45.436231 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 00:24:45.440580 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 00:24:45.440599 systemd[1]: Reached target paths.target - Path Units. May 14 00:24:45.444954 systemd[1]: Reached target timers.target - Timer Units. May 14 00:24:45.450125 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 00:24:45.455960 systemd[1]: Starting docker.socket - Docker Socket for the API... 
May 14 00:24:45.462172 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 00:24:45.468914 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 00:24:45.473839 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 00:24:45.478788 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 00:24:45.483443 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 00:24:45.487974 systemd[1]: Reached target network.target - Network. May 14 00:24:45.492353 systemd[1]: Reached target sockets.target - Socket Units. May 14 00:24:45.496663 systemd[1]: Reached target basic.target - Basic System. May 14 00:24:45.500943 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 00:24:45.500967 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 00:24:45.501995 systemd[1]: Starting containerd.service - containerd container runtime... May 14 00:24:45.516302 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 14 00:24:45.521929 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 00:24:45.527477 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 00:24:45.533007 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 00:24:45.537297 jq[2704]: false May 14 00:24:45.537423 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 00:24:45.538201 coreos-metadata[2700]: May 14 00:24:45.538 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 14 00:24:45.538566 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 00:24:45.541732 coreos-metadata[2700]: May 14 00:24:45.541 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 14 00:24:45.542896 dbus-daemon[2701]: [system] SELinux support is enabled May 14 00:24:45.544069 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 00:24:45.549582 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
May 14 00:24:45.552739 extend-filesystems[2705]: Found loop4 May 14 00:24:45.558838 extend-filesystems[2705]: Found loop5 May 14 00:24:45.558838 extend-filesystems[2705]: Found loop6 May 14 00:24:45.558838 extend-filesystems[2705]: Found loop7 May 14 00:24:45.558838 extend-filesystems[2705]: Found nvme0n1 May 14 00:24:45.558838 extend-filesystems[2705]: Found nvme0n1p1 May 14 00:24:45.558838 extend-filesystems[2705]: Found nvme0n1p2 May 14 00:24:45.558838 extend-filesystems[2705]: Found nvme0n1p3 May 14 00:24:45.558838 extend-filesystems[2705]: Found usr May 14 00:24:45.558838 extend-filesystems[2705]: Found nvme0n1p4 May 14 00:24:45.558838 extend-filesystems[2705]: Found nvme0n1p6 May 14 00:24:45.558838 extend-filesystems[2705]: Found nvme0n1p7 May 14 00:24:45.558838 extend-filesystems[2705]: Found nvme0n1p9 May 14 00:24:45.558838 extend-filesystems[2705]: Checking size of /dev/nvme0n1p9 May 14 00:24:45.690503 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks May 14 00:24:45.690529 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (2567) May 14 00:24:45.555259 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 00:24:45.690638 extend-filesystems[2705]: Resized partition /dev/nvme0n1p9 May 14 00:24:45.567600 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 00:24:45.699960 extend-filesystems[2727]: resize2fs 1.47.2 (1-Jan-2025) May 14 00:24:45.573667 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 00:24:45.623702 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 00:24:45.632024 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 00:24:45.632705 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 00:24:45.705193 update_engine[2734]: I20250514 00:24:45.675508 2734 main.cc:92] Flatcar Update Engine starting May 14 00:24:45.705193 update_engine[2734]: I20250514 00:24:45.677618 2734 update_check_scheduler.cc:74] Next update check in 6m43s May 14 00:24:45.633344 systemd[1]: Starting update-engine.service - Update Engine... May 14 00:24:45.705442 jq[2735]: true May 14 00:24:45.641124 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 00:24:45.649467 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 00:24:45.662468 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 00:24:45.662732 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 00:24:45.662995 systemd-logind[2724]: Watching system buttons on /dev/input/event0 (Power Button) May 14 00:24:45.663056 systemd[1]: motdgen.service: Deactivated successfully. May 14 00:24:45.663285 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 00:24:45.667246 systemd-logind[2724]: New seat seat0. May 14 00:24:45.677668 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 00:24:45.677853 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 00:24:45.686381 systemd[1]: Started systemd-logind.service - User Login Management. 
May 14 00:24:45.705616 (ntainerd)[2740]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 00:24:45.706566 dbus-daemon[2701]: [system] Successfully activated service 'org.freedesktop.systemd1' May 14 00:24:45.708188 jq[2739]: true May 14 00:24:45.710525 tar[2737]: linux-arm64/LICENSE May 14 00:24:45.710685 tar[2737]: linux-arm64/helm May 14 00:24:45.723432 systemd[1]: Started update-engine.service - Update Engine. May 14 00:24:45.729093 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 00:24:45.729249 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 00:24:45.733862 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 00:24:45.733962 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 00:24:45.740345 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 00:24:45.740710 bash[2765]: Updated "/home/core/.ssh/authorized_keys" May 14 00:24:45.754904 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 00:24:45.762296 systemd[1]: Starting sshkeys.service... May 14 00:24:45.782169 locksmithd[2766]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 00:24:45.786805 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 14 00:24:45.793057 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
May 14 00:24:45.826785 coreos-metadata[2778]: May 14 00:24:45.826 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 14 00:24:45.827933 coreos-metadata[2778]: May 14 00:24:45.827 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 14 00:24:45.861656 containerd[2740]: time="2025-05-14T00:24:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 00:24:45.863693 containerd[2740]: time="2025-05-14T00:24:45.863665160Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 14 00:24:45.871919 containerd[2740]: time="2025-05-14T00:24:45.871889800Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.36µs" May 14 00:24:45.871941 containerd[2740]: time="2025-05-14T00:24:45.871920120Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 00:24:45.871957 containerd[2740]: time="2025-05-14T00:24:45.871939360Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 00:24:45.872131 containerd[2740]: time="2025-05-14T00:24:45.872112880Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 00:24:45.872153 containerd[2740]: time="2025-05-14T00:24:45.872133160Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 00:24:45.872171 containerd[2740]: time="2025-05-14T00:24:45.872157840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 00:24:45.872224 containerd[2740]: time="2025-05-14T00:24:45.872208840Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 00:24:45.872245 containerd[2740]: time="2025-05-14T00:24:45.872222360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 00:24:45.872505 containerd[2740]: time="2025-05-14T00:24:45.872485760Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 00:24:45.872505 containerd[2740]: time="2025-05-14T00:24:45.872503000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 00:24:45.872544 containerd[2740]: time="2025-05-14T00:24:45.872514600Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 00:24:45.872544 containerd[2740]: time="2025-05-14T00:24:45.872522760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 00:24:45.872609 containerd[2740]: time="2025-05-14T00:24:45.872595160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 00:24:45.872803 containerd[2740]: time="2025-05-14T00:24:45.872786440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 May 14 00:24:45.872829 containerd[2740]: time="2025-05-14T00:24:45.872816000Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 00:24:45.872849 containerd[2740]: time="2025-05-14T00:24:45.872827360Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 00:24:45.872872 containerd[2740]: time="2025-05-14T00:24:45.872853520Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 00:24:45.873072 containerd[2740]: time="2025-05-14T00:24:45.873058200Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 00:24:45.873135 containerd[2740]: time="2025-05-14T00:24:45.873121840Z" level=info msg="metadata content store policy set" policy=shared May 14 00:24:45.880403 containerd[2740]: time="2025-05-14T00:24:45.880377960Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 00:24:45.880440 containerd[2740]: time="2025-05-14T00:24:45.880422680Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 00:24:45.880440 containerd[2740]: time="2025-05-14T00:24:45.880436200Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 00:24:45.880513 containerd[2740]: time="2025-05-14T00:24:45.880448400Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 00:24:45.880513 containerd[2740]: time="2025-05-14T00:24:45.880462720Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 00:24:45.880513 containerd[2740]: time="2025-05-14T00:24:45.880473600Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 00:24:45.880513 containerd[2740]: time="2025-05-14T00:24:45.880485400Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 00:24:45.880513 containerd[2740]: time="2025-05-14T00:24:45.880498840Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 00:24:45.880513 containerd[2740]: time="2025-05-14T00:24:45.880509440Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 00:24:45.880693 containerd[2740]: time="2025-05-14T00:24:45.880519960Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 00:24:45.880693 containerd[2740]: time="2025-05-14T00:24:45.880529520Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 00:24:45.880693 containerd[2740]: time="2025-05-14T00:24:45.880541520Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 00:24:45.880693 containerd[2740]: time="2025-05-14T00:24:45.880661640Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 00:24:45.880693 containerd[2740]: time="2025-05-14T00:24:45.880682160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 00:24:45.880775 
containerd[2740]: time="2025-05-14T00:24:45.880694600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 00:24:45.880775 containerd[2740]: time="2025-05-14T00:24:45.880705600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 00:24:45.880775 containerd[2740]: time="2025-05-14T00:24:45.880715920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 00:24:45.880775 containerd[2740]: time="2025-05-14T00:24:45.880725640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 00:24:45.880775 containerd[2740]: time="2025-05-14T00:24:45.880736960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 00:24:45.880775 containerd[2740]: time="2025-05-14T00:24:45.880746840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 00:24:45.880775 containerd[2740]: time="2025-05-14T00:24:45.880757560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 00:24:45.880775 containerd[2740]: time="2025-05-14T00:24:45.880772440Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 00:24:45.880905 containerd[2740]: time="2025-05-14T00:24:45.880783320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 00:24:45.881059 containerd[2740]: time="2025-05-14T00:24:45.881044560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 00:24:45.881081 containerd[2740]: time="2025-05-14T00:24:45.881060560Z" level=info msg="Start snapshots syncer" May 14 00:24:45.881081 containerd[2740]: time="2025-05-14T00:24:45.881076600Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 00:24:45.881318 containerd[2740]: time="2025-05-14T00:24:45.881289080Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 00:24:45.881401 containerd[2740]: time="2025-05-14T00:24:45.881334280Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 00:24:45.881401 containerd[2740]: time="2025-05-14T00:24:45.881390480Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 00:24:45.881516 containerd[2740]: time="2025-05-14T00:24:45.881500960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 00:24:45.881537 containerd[2740]: time="2025-05-14T00:24:45.881525920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 00:24:45.881555 containerd[2740]: time="2025-05-14T00:24:45.881538840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 00:24:45.881555 containerd[2740]: time="2025-05-14T00:24:45.881549360Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 00:24:45.881591 containerd[2740]: time="2025-05-14T00:24:45.881561200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 00:24:45.881591 containerd[2740]: time="2025-05-14T00:24:45.881572440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 00:24:45.881591 containerd[2740]: time="2025-05-14T00:24:45.881582640Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 00:24:45.881643 containerd[2740]: time="2025-05-14T00:24:45.881620320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 00:24:45.881643 containerd[2740]: 
time="2025-05-14T00:24:45.881633600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 00:24:45.881684 containerd[2740]: time="2025-05-14T00:24:45.881642720Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 00:24:45.881684 containerd[2740]: time="2025-05-14T00:24:45.881676280Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 00:24:45.881717 containerd[2740]: time="2025-05-14T00:24:45.881689960Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 00:24:45.881717 containerd[2740]: time="2025-05-14T00:24:45.881698760Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 00:24:45.881717 containerd[2740]: time="2025-05-14T00:24:45.881708120Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 00:24:45.881717 containerd[2740]: time="2025-05-14T00:24:45.881716440Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 00:24:45.881780 containerd[2740]: time="2025-05-14T00:24:45.881730800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 00:24:45.881780 containerd[2740]: time="2025-05-14T00:24:45.881741720Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 00:24:45.881829 containerd[2740]: time="2025-05-14T00:24:45.881820240Z" level=info msg="runtime interface created" May 14 00:24:45.881829 containerd[2740]: time="2025-05-14T00:24:45.881826720Z" level=info msg="created NRI interface" May 14 00:24:45.881862 containerd[2740]: time="2025-05-14T00:24:45.881835600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 00:24:45.881862 containerd[2740]: time="2025-05-14T00:24:45.881846640Z" level=info msg="Connect containerd service" May 14 00:24:45.881894 containerd[2740]: time="2025-05-14T00:24:45.881871760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 00:24:45.883111 containerd[2740]: time="2025-05-14T00:24:45.883088680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:24:45.935223 sshd_keygen[2730]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 00:24:45.955647 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 00:24:45.962484 systemd[1]: Starting issuegen.service - Generate /run/issue... 
May 14 00:24:45.969005 containerd[2740]: time="2025-05-14T00:24:45.968963120Z" level=info msg="Start subscribing containerd event" May 14 00:24:45.969084 containerd[2740]: time="2025-05-14T00:24:45.969028560Z" level=info msg="Start recovering state" May 14 00:24:45.969123 containerd[2740]: time="2025-05-14T00:24:45.969110640Z" level=info msg="Start event monitor" May 14 00:24:45.969145 containerd[2740]: time="2025-05-14T00:24:45.969126800Z" level=info msg="Start cni network conf syncer for default" May 14 00:24:45.969145 containerd[2740]: time="2025-05-14T00:24:45.969135080Z" level=info msg="Start streaming server" May 14 00:24:45.969177 containerd[2740]: time="2025-05-14T00:24:45.969143320Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 00:24:45.969177 containerd[2740]: time="2025-05-14T00:24:45.969151080Z" level=info msg="runtime interface starting up..." May 14 00:24:45.969177 containerd[2740]: time="2025-05-14T00:24:45.969156600Z" level=info msg="starting plugins..." May 14 00:24:45.969177 containerd[2740]: time="2025-05-14T00:24:45.969169600Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 00:24:45.969346 containerd[2740]: time="2025-05-14T00:24:45.969318960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 00:24:45.969406 containerd[2740]: time="2025-05-14T00:24:45.969394680Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 00:24:45.969468 containerd[2740]: time="2025-05-14T00:24:45.969456640Z" level=info msg="containerd successfully booted in 0.108155s" May 14 00:24:45.982712 systemd[1]: Started containerd.service - containerd container runtime. May 14 00:24:45.991821 systemd[1]: issuegen.service: Deactivated successfully. May 14 00:24:45.992044 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 00:24:45.998857 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 00:24:46.025663 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 00:24:46.032247 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 00:24:46.038395 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 14 00:24:46.043512 systemd[1]: Reached target getty.target - Login Prompts. May 14 00:24:46.044107 tar[2737]: linux-arm64/README.md May 14 00:24:46.069888 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 00:24:46.137621 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 May 14 00:24:46.153881 extend-filesystems[2727]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 14 00:24:46.153881 extend-filesystems[2727]: old_desc_blocks = 1, new_desc_blocks = 112 May 14 00:24:46.153881 extend-filesystems[2727]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. May 14 00:24:46.182538 extend-filesystems[2705]: Resized filesystem in /dev/nvme0n1p9 May 14 00:24:46.182538 extend-filesystems[2705]: Found nvme1n1 May 14 00:24:46.156569 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 00:24:46.156864 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 00:24:46.168960 systemd[1]: extend-filesystems.service: Consumed 213ms CPU time, 68.8M memory peak. 
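For scale, the on-line resize reported above grows the root ext4 filesystem on nvme0n1p9 from 553,472 to 233,815,889 blocks of 4 KiB: 553,472 x 4096 bytes is roughly 2.1 GiB (the as-shipped Flatcar root image), while 233,815,889 x 4096 bytes is roughly 957.7 GB, about 892 GiB, so extend-filesystems has expanded the root filesystem to fill the remainder of the NVMe drive while it is mounted.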
May 14 00:24:46.541857 coreos-metadata[2700]: May 14 00:24:46.541 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 14 00:24:46.542330 coreos-metadata[2700]: May 14 00:24:46.542 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 14 00:24:46.670621 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up May 14 00:24:46.688611 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link May 14 00:24:46.692083 systemd-networkd[2653]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:52:20:01.network. May 14 00:24:46.828103 coreos-metadata[2778]: May 14 00:24:46.828 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 14 00:24:46.828459 coreos-metadata[2778]: May 14 00:24:46.828 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 14 00:24:47.296621 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 14 00:24:47.314142 systemd-networkd[2653]: bond0: Configuring with /etc/systemd/network/05-bond0.network. May 14 00:24:47.314610 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link May 14 00:24:47.315702 systemd-networkd[2653]: enP1p1s0f0np0: Link UP May 14 00:24:47.315834 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 00:24:47.315948 systemd-networkd[2653]: enP1p1s0f0np0: Gained carrier May 14 00:24:47.334614 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 14 00:24:47.345928 systemd-networkd[2653]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:52:20:00.network. May 14 00:24:47.346207 systemd-networkd[2653]: enP1p1s0f1np1: Link UP May 14 00:24:47.346403 systemd-networkd[2653]: enP1p1s0f1np1: Gained carrier May 14 00:24:47.356791 systemd-networkd[2653]: bond0: Link UP May 14 00:24:47.357029 systemd-networkd[2653]: bond0: Gained carrier May 14 00:24:47.357189 systemd-timesyncd[2655]: Network configuration changed, trying to establish connection. May 14 00:24:47.357763 systemd-timesyncd[2655]: Network configuration changed, trying to establish connection. May 14 00:24:47.358020 systemd-timesyncd[2655]: Network configuration changed, trying to establish connection. May 14 00:24:47.358154 systemd-timesyncd[2655]: Network configuration changed, trying to establish connection. May 14 00:24:47.437207 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex May 14 00:24:47.437241 kernel: bond0: active interface up! May 14 00:24:47.560616 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex May 14 00:24:48.542431 coreos-metadata[2700]: May 14 00:24:48.542 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 14 00:24:48.828754 coreos-metadata[2778]: May 14 00:24:48.828 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 14 00:24:49.113680 systemd-networkd[2653]: bond0: Gained IPv6LL May 14 00:24:49.114058 systemd-timesyncd[2655]: Network configuration changed, trying to establish connection. May 14 00:24:49.241953 systemd-timesyncd[2655]: Network configuration changed, trying to establish connection. May 14 00:24:49.242067 systemd-timesyncd[2655]: Network configuration changed, trying to establish connection. May 14 00:24:49.243851 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
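Only the networkd unit names appear in the journal (10-0c:42:a1:52:20:00.network for the two 25 Gb/s ports and 05-bond0.network for the bond itself); their contents are generated by Ignition from the platform metadata and are never printed. As a rough sketch only, with the bonding mode inferred from the kernel's 802.3ad warning and every address below a placeholder rather than this host's real configuration, such a layout usually consists of a .netdev plus two .network files along these lines:
# bond0.netdev (sketch, not the file on this host)
[NetDev]
Name=bond0
Kind=bond
[Bond]
Mode=802.3ad
TransmitHashPolicy=layer3+4
LACPTransmitRate=fast
MIIMonitorSec=0.1
# 10-0c:42:a1:52:20:00.network (sketch): enslave the port matched by MAC address
[Match]
MACAddress=0c:42:a1:52:20:00
[Network]
Bond=bond0
# 05-bond0.network (sketch; the real addresses come from the metadata service, not this placeholder)
[Match]
Name=bond0
[Network]
DHCP=no
Address=192.0.2.10/31
Gateway=192.0.2.11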
May 14 00:24:49.249800 systemd[1]: Reached target network-online.target - Network is Online. May 14 00:24:49.257058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:24:49.272293 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 00:24:49.294813 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 00:24:49.862933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:24:49.869045 (kubelet)[2852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:24:50.233686 kubelet[2852]: E0514 00:24:50.233616 2852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:24:50.235852 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:24:50.235991 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:24:50.236367 systemd[1]: kubelet.service: Consumed 703ms CPU time, 257.9M memory peak. May 14 00:24:50.721409 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 00:24:50.727676 systemd[1]: Started sshd@0-147.75.51.18:22-139.178.68.195:60128.service - OpenSSH per-connection server daemon (139.178.68.195:60128). May 14 00:24:51.061891 coreos-metadata[2778]: May 14 00:24:51.061 INFO Fetch successful May 14 00:24:51.080205 login[2825]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying May 14 00:24:51.081655 login[2826]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 14 00:24:51.091193 systemd-logind[2724]: New session 1 of user core. May 14 00:24:51.092579 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 00:24:51.093986 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 00:24:51.099817 coreos-metadata[2700]: May 14 00:24:51.099 INFO Fetch successful May 14 00:24:51.105913 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 May 14 00:24:51.106091 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity May 14 00:24:51.108798 unknown[2778]: wrote ssh authorized keys file for user: core May 14 00:24:51.125127 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 00:24:51.127581 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 00:24:51.130630 update-ssh-keys[2886]: Updated "/home/core/.ssh/authorized_keys" May 14 00:24:51.131855 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 14 00:24:51.133360 systemd[1]: Finished sshkeys.service. May 14 00:24:51.135603 (systemd)[2888]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 00:24:51.137393 systemd-logind[2724]: New session c1 of user core. May 14 00:24:51.157495 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 14 00:24:51.159260 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... 
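The kubelet crash above, and the scheduled restarts that follow later in the log, are the normal state of a freshly provisioned node: the unit is enabled, but /var/lib/kubelet/config.yaml only exists once kubeadm init or kubeadm join (or whatever provisioning step runs later in this log) writes it, so every start until then exits with status 1 and systemd retries. For orientation, a minimal KubeletConfiguration of the kind that ends up at that path might look like the following; this is a generic sketch, not the file eventually written on this host, and the cluster DNS address is an assumed default:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                 # matches the systemd cgroup driver the CRI runtime advertises later in this log
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10                        # assumption: kubeadm's default cluster DNS service address
rotateCertificates: true
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt   # the CA bundle the configured kubelet watches further down in this log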
May 14 00:24:51.166387 sshd[2875]: Accepted publickey for core from 139.178.68.195 port 60128 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4 May 14 00:24:51.167856 sshd-session[2875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:24:51.170959 systemd-logind[2724]: New session 3 of user core. May 14 00:24:51.255922 systemd[2888]: Queued start job for default target default.target. May 14 00:24:51.267661 systemd[2888]: Created slice app.slice - User Application Slice. May 14 00:24:51.267685 systemd[2888]: Reached target paths.target - Paths. May 14 00:24:51.267719 systemd[2888]: Reached target timers.target - Timers. May 14 00:24:51.268973 systemd[2888]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 00:24:51.277286 systemd[2888]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 00:24:51.277337 systemd[2888]: Reached target sockets.target - Sockets. May 14 00:24:51.277378 systemd[2888]: Reached target basic.target - Basic System. May 14 00:24:51.277406 systemd[2888]: Reached target default.target - Main User Target. May 14 00:24:51.277428 systemd[2888]: Startup finished in 135ms. May 14 00:24:51.277624 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 00:24:51.279092 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 00:24:51.279932 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 00:24:51.496598 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. May 14 00:24:51.497084 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 00:24:51.499663 systemd[1]: Startup finished in 3.239s (kernel) + 19.406s (initrd) + 9.722s (userspace) = 32.369s. May 14 00:24:51.585364 systemd[1]: Started sshd@1-147.75.51.18:22-139.178.68.195:60134.service - OpenSSH per-connection server daemon (139.178.68.195:60134). May 14 00:24:52.017482 sshd[2921]: Accepted publickey for core from 139.178.68.195 port 60134 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4 May 14 00:24:52.018512 sshd-session[2921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:24:52.021660 systemd-logind[2724]: New session 4 of user core. May 14 00:24:52.032709 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 00:24:52.082324 login[2825]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 14 00:24:52.085754 systemd-logind[2724]: New session 2 of user core. May 14 00:24:52.095766 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 00:24:52.316281 sshd[2923]: Connection closed by 139.178.68.195 port 60134 May 14 00:24:52.316788 sshd-session[2921]: pam_unix(sshd:session): session closed for user core May 14 00:24:52.320578 systemd[1]: sshd@1-147.75.51.18:22-139.178.68.195:60134.service: Deactivated successfully. May 14 00:24:52.322939 systemd[1]: session-4.scope: Deactivated successfully. May 14 00:24:52.323468 systemd-logind[2724]: Session 4 logged out. Waiting for processes to exit. May 14 00:24:52.324022 systemd-logind[2724]: Removed session 4. May 14 00:24:52.393351 systemd[1]: Started sshd@2-147.75.51.18:22-139.178.68.195:60138.service - OpenSSH per-connection server daemon (139.178.68.195:60138). 
May 14 00:24:52.827309 sshd[2940]: Accepted publickey for core from 139.178.68.195 port 60138 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4 May 14 00:24:52.828325 sshd-session[2940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:24:52.831308 systemd-logind[2724]: New session 5 of user core. May 14 00:24:52.840712 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 00:24:53.125980 sshd[2942]: Connection closed by 139.178.68.195 port 60138 May 14 00:24:53.126303 sshd-session[2940]: pam_unix(sshd:session): session closed for user core May 14 00:24:53.128989 systemd[1]: sshd@2-147.75.51.18:22-139.178.68.195:60138.service: Deactivated successfully. May 14 00:24:53.131152 systemd[1]: session-5.scope: Deactivated successfully. May 14 00:24:53.131649 systemd-logind[2724]: Session 5 logged out. Waiting for processes to exit. May 14 00:24:53.132167 systemd-logind[2724]: Removed session 5. May 14 00:24:53.197289 systemd[1]: Started sshd@3-147.75.51.18:22-139.178.68.195:60144.service - OpenSSH per-connection server daemon (139.178.68.195:60144). May 14 00:24:53.636255 sshd[2948]: Accepted publickey for core from 139.178.68.195 port 60144 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4 May 14 00:24:53.637277 sshd-session[2948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:24:53.640220 systemd-logind[2724]: New session 6 of user core. May 14 00:24:53.648767 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 00:24:53.934886 sshd[2950]: Connection closed by 139.178.68.195 port 60144 May 14 00:24:53.935313 sshd-session[2948]: pam_unix(sshd:session): session closed for user core May 14 00:24:53.939085 systemd[1]: sshd@3-147.75.51.18:22-139.178.68.195:60144.service: Deactivated successfully. May 14 00:24:53.940896 systemd[1]: session-6.scope: Deactivated successfully. May 14 00:24:53.942191 systemd-logind[2724]: Session 6 logged out. Waiting for processes to exit. May 14 00:24:53.942731 systemd-logind[2724]: Removed session 6. May 14 00:24:54.018275 systemd[1]: Started sshd@4-147.75.51.18:22-139.178.68.195:60146.service - OpenSSH per-connection server daemon (139.178.68.195:60146). May 14 00:24:54.084224 systemd-timesyncd[2655]: Network configuration changed, trying to establish connection. May 14 00:24:54.451353 sshd[2957]: Accepted publickey for core from 139.178.68.195 port 60146 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4 May 14 00:24:54.452403 sshd-session[2957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:24:54.455261 systemd-logind[2724]: New session 7 of user core. May 14 00:24:54.470716 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 00:24:54.697491 sudo[2960]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 00:24:54.697770 sudo[2960]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:24:54.713409 sudo[2960]: pam_unix(sudo:session): session closed for user root May 14 00:24:54.778365 sshd[2959]: Connection closed by 139.178.68.195 port 60146 May 14 00:24:54.779012 sshd-session[2957]: pam_unix(sshd:session): session closed for user core May 14 00:24:54.783103 systemd[1]: sshd@4-147.75.51.18:22-139.178.68.195:60146.service: Deactivated successfully. May 14 00:24:54.786062 systemd[1]: session-7.scope: Deactivated successfully. May 14 00:24:54.786610 systemd-logind[2724]: Session 7 logged out. 
Waiting for processes to exit. May 14 00:24:54.787249 systemd-logind[2724]: Removed session 7. May 14 00:24:54.853480 systemd[1]: Started sshd@5-147.75.51.18:22-139.178.68.195:51354.service - OpenSSH per-connection server daemon (139.178.68.195:51354). May 14 00:24:55.296331 sshd[2969]: Accepted publickey for core from 139.178.68.195 port 51354 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4 May 14 00:24:55.297489 sshd-session[2969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:24:55.300552 systemd-logind[2724]: New session 8 of user core. May 14 00:24:55.308761 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 00:24:55.537945 sudo[2973]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 00:24:55.538211 sudo[2973]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:24:55.540960 sudo[2973]: pam_unix(sudo:session): session closed for user root May 14 00:24:55.545291 sudo[2972]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 00:24:55.545542 sudo[2972]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:24:55.552668 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 00:24:55.587500 augenrules[2995]: No rules May 14 00:24:55.588602 systemd[1]: audit-rules.service: Deactivated successfully. May 14 00:24:55.589737 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 00:24:55.590567 sudo[2972]: pam_unix(sudo:session): session closed for user root May 14 00:24:55.655723 sshd[2971]: Connection closed by 139.178.68.195 port 51354 May 14 00:24:55.656227 sshd-session[2969]: pam_unix(sshd:session): session closed for user core May 14 00:24:55.659761 systemd[1]: sshd@5-147.75.51.18:22-139.178.68.195:51354.service: Deactivated successfully. May 14 00:24:55.662091 systemd[1]: session-8.scope: Deactivated successfully. May 14 00:24:55.662711 systemd-logind[2724]: Session 8 logged out. Waiting for processes to exit. May 14 00:24:55.663269 systemd-logind[2724]: Removed session 8. May 14 00:24:55.731302 systemd[1]: Started sshd@6-147.75.51.18:22-139.178.68.195:51368.service - OpenSSH per-connection server daemon (139.178.68.195:51368). May 14 00:24:56.180625 sshd[3004]: Accepted publickey for core from 139.178.68.195 port 51368 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4 May 14 00:24:56.181594 sshd-session[3004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:24:56.184463 systemd-logind[2724]: New session 9 of user core. May 14 00:24:56.192706 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 00:24:56.423189 sudo[3007]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 00:24:56.423461 sudo[3007]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:24:56.708470 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 14 00:24:56.721013 (dockerd)[3039]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 00:24:56.925064 dockerd[3039]: time="2025-05-14T00:24:56.925017960Z" level=info msg="Starting up" May 14 00:24:56.926730 dockerd[3039]: time="2025-05-14T00:24:56.926708200Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 00:24:56.952429 dockerd[3039]: time="2025-05-14T00:24:56.952404120Z" level=info msg="Loading containers: start." May 14 00:24:57.092614 kernel: Initializing XFRM netlink socket May 14 00:24:57.094241 systemd-timesyncd[2655]: Network configuration changed, trying to establish connection. May 14 00:24:57.160775 systemd-networkd[2653]: docker0: Link UP May 14 00:24:57.217669 dockerd[3039]: time="2025-05-14T00:24:57.217636000Z" level=info msg="Loading containers: done." May 14 00:24:57.227373 dockerd[3039]: time="2025-05-14T00:24:57.227341520Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 00:24:57.227445 dockerd[3039]: time="2025-05-14T00:24:57.227411520Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 14 00:24:57.227590 dockerd[3039]: time="2025-05-14T00:24:57.227575960Z" level=info msg="Daemon has completed initialization" May 14 00:24:57.249173 dockerd[3039]: time="2025-05-14T00:24:57.249133440Z" level=info msg="API listen on /run/docker.sock" May 14 00:24:57.249296 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 00:24:57.814275 containerd[2740]: time="2025-05-14T00:24:57.814243720Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 14 00:24:57.941985 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1281996165-merged.mount: Deactivated successfully. May 14 00:24:58.216734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3205662029.mount: Deactivated successfully. May 14 00:24:58.761258 systemd-resolved[2654]: Clock change detected. Flushing caches. May 14 00:24:58.761471 systemd-timesyncd[2655]: Contacted time server [2600:3c02::f03c:91ff:fe96:4212]:123 (2.flatcar.pool.ntp.org). May 14 00:24:58.761518 systemd-timesyncd[2655]: Initial clock synchronization to Wed 2025-05-14 00:24:58.761200 UTC. 
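Docker comes up here on defaults: the daemon banner reports storage-driver=overlay2 with containerd-snapshotter=false, and the only warning is the informational note about native overlay diff support. If those choices ever needed to be pinned or changed, the usual place is /etc/docker/daemon.json; the journal does not show whether such a file exists here, so the following is purely a hypothetical example with commonly used keys:
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m", "max-file": "3" }
}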
May 14 00:24:59.520858 containerd[2740]: time="2025-05-14T00:24:59.520784857Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233118" May 14 00:24:59.520858 containerd[2740]: time="2025-05-14T00:24:59.520849817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:24:59.521844 containerd[2740]: time="2025-05-14T00:24:59.521785937Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:24:59.524282 containerd[2740]: time="2025-05-14T00:24:59.524232617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:24:59.525302 containerd[2740]: time="2025-05-14T00:24:59.525245817Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.15219624s" May 14 00:24:59.525302 containerd[2740]: time="2025-05-14T00:24:59.525280337Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 14 00:24:59.525815 containerd[2740]: time="2025-05-14T00:24:59.525793217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 14 00:24:59.858199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 00:24:59.860251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:24:59.977200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:24:59.980495 (kubelet)[3357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:25:00.014597 kubelet[3357]: E0514 00:25:00.014561 3357 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:25:00.017495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:25:00.017626 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:25:00.017941 systemd[1]: kubelet.service: Consumed 145ms CPU time, 121.1M memory peak. 
May 14 00:25:01.429360 containerd[2740]: time="2025-05-14T00:25:01.429290817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:01.429360 containerd[2740]: time="2025-05-14T00:25:01.429329977Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529571" May 14 00:25:01.430208 containerd[2740]: time="2025-05-14T00:25:01.430179537Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:01.432616 containerd[2740]: time="2025-05-14T00:25:01.432590697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:01.433614 containerd[2740]: time="2025-05-14T00:25:01.433583057Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.90767984s" May 14 00:25:01.433678 containerd[2740]: time="2025-05-14T00:25:01.433617257Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 14 00:25:01.434013 containerd[2740]: time="2025-05-14T00:25:01.433994057Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 14 00:25:02.913393 containerd[2740]: time="2025-05-14T00:25:02.913251177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:02.913393 containerd[2740]: time="2025-05-14T00:25:02.913346057Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482173" May 14 00:25:02.915189 containerd[2740]: time="2025-05-14T00:25:02.915131097Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:02.917578 containerd[2740]: time="2025-05-14T00:25:02.917544937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:02.918565 containerd[2740]: time="2025-05-14T00:25:02.918535097Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.48451284s" May 14 00:25:02.918682 containerd[2740]: time="2025-05-14T00:25:02.918666417Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 14 00:25:02.919087 
containerd[2740]: time="2025-05-14T00:25:02.919038297Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 14 00:25:03.941734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1429038186.mount: Deactivated successfully. May 14 00:25:04.125044 containerd[2740]: time="2025-05-14T00:25:04.124979337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:04.125339 containerd[2740]: time="2025-05-14T00:25:04.125051657Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370351" May 14 00:25:04.125678 containerd[2740]: time="2025-05-14T00:25:04.125658017Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:04.127080 containerd[2740]: time="2025-05-14T00:25:04.127063657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:04.127741 containerd[2740]: time="2025-05-14T00:25:04.127723537Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.20864924s" May 14 00:25:04.127779 containerd[2740]: time="2025-05-14T00:25:04.127748777Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 14 00:25:04.128028 containerd[2740]: time="2025-05-14T00:25:04.128006337Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 14 00:25:04.503410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1371147748.mount: Deactivated successfully. 
May 14 00:25:05.636614 containerd[2740]: time="2025-05-14T00:25:05.636558057Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" May 14 00:25:05.636614 containerd[2740]: time="2025-05-14T00:25:05.636566297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:05.637628 containerd[2740]: time="2025-05-14T00:25:05.637601697Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:05.640111 containerd[2740]: time="2025-05-14T00:25:05.640088257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:05.641201 containerd[2740]: time="2025-05-14T00:25:05.641161057Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.51310392s" May 14 00:25:05.641229 containerd[2740]: time="2025-05-14T00:25:05.641214377Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 14 00:25:05.641650 containerd[2740]: time="2025-05-14T00:25:05.641621977Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 00:25:05.970976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327237635.mount: Deactivated successfully. 
May 14 00:25:05.971404 containerd[2740]: time="2025-05-14T00:25:05.971362777Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 14 00:25:05.971447 containerd[2740]: time="2025-05-14T00:25:05.971423577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:25:05.972149 containerd[2740]: time="2025-05-14T00:25:05.972127737Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:25:05.973835 containerd[2740]: time="2025-05-14T00:25:05.973814617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:25:05.974502 containerd[2740]: time="2025-05-14T00:25:05.974479497Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 332.81724ms" May 14 00:25:05.974535 containerd[2740]: time="2025-05-14T00:25:05.974507297Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 14 00:25:05.974792 containerd[2740]: time="2025-05-14T00:25:05.974772177Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 14 00:25:06.419823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3148521541.mount: Deactivated successfully. May 14 00:25:10.107193 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 00:25:10.108761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:25:10.228293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 00:25:10.231621 (kubelet)[3533]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:25:10.238037 containerd[2740]: time="2025-05-14T00:25:10.237977097Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" May 14 00:25:10.238037 containerd[2740]: time="2025-05-14T00:25:10.237984017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:10.239344 containerd[2740]: time="2025-05-14T00:25:10.239117257Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:10.243136 containerd[2740]: time="2025-05-14T00:25:10.243110497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:10.244174 containerd[2740]: time="2025-05-14T00:25:10.244144697Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.26934216s" May 14 00:25:10.244201 containerd[2740]: time="2025-05-14T00:25:10.244181577Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 14 00:25:10.263480 kubelet[3533]: E0514 00:25:10.263444 3533 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:25:10.265809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:25:10.265946 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:25:10.266259 systemd[1]: kubelet.service: Consumed 140ms CPU time, 117.3M memory peak. May 14 00:25:14.853079 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:25:14.853210 systemd[1]: kubelet.service: Consumed 140ms CPU time, 117.3M memory peak. May 14 00:25:14.855245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:25:14.877633 systemd[1]: Reload requested from client PID 3627 ('systemctl') (unit session-9.scope)... May 14 00:25:14.877644 systemd[1]: Reloading... May 14 00:25:14.946384 zram_generator::config[3678]: No configuration found. May 14 00:25:15.035862 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:25:15.127034 systemd[1]: Reloading finished in 249 ms. May 14 00:25:15.169577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:25:15.172301 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:25:15.172859 systemd[1]: kubelet.service: Deactivated successfully. 
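The kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; kubeadm only writes that file during init/join, so the restart loop above is expected at this point in the bootstrap. Below is a small readiness sketch that checks for the paths the kubelet messages in this log refer to (config file, client CA bundle, static pod directory, rotated client certificate); the list is illustrative, not exhaustive.

    from pathlib import Path

    # Paths referenced by kubelet messages in this log on a kubeadm-style node.
    EXPECTED = [
        Path("/var/lib/kubelet/config.yaml"),
        Path("/etc/kubernetes/pki/ca.crt"),
        Path("/etc/kubernetes/manifests"),
        Path("/var/lib/kubelet/pki/kubelet-client-current.pem"),
    ]

    def missing_paths(paths=EXPECTED):
        """Return the expected paths that do not exist yet on this node."""
        return [p for p in paths if not p.exists()]

    if __name__ == "__main__":
        for p in missing_paths():
            print(f"still missing: {p}")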
May 14 00:25:15.173056 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:25:15.173090 systemd[1]: kubelet.service: Consumed 89ms CPU time, 90.4M memory peak. May 14 00:25:15.175583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:25:15.283546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:25:15.287059 (kubelet)[3742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:25:15.333307 kubelet[3742]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:25:15.333307 kubelet[3742]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 00:25:15.333307 kubelet[3742]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:25:15.333605 kubelet[3742]: I0514 00:25:15.333403 3742 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:25:16.049225 kubelet[3742]: I0514 00:25:16.049191 3742 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 00:25:16.049225 kubelet[3742]: I0514 00:25:16.049222 3742 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:25:16.049503 kubelet[3742]: I0514 00:25:16.049488 3742 server.go:954] "Client rotation is on, will bootstrap in background" May 14 00:25:16.067071 kubelet[3742]: E0514 00:25:16.067045 3742 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.75.51.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.51.18:6443: connect: connection refused" logger="UnhandledError" May 14 00:25:16.070008 kubelet[3742]: I0514 00:25:16.069976 3742 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:25:16.077451 kubelet[3742]: I0514 00:25:16.077434 3742 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 00:25:16.097590 kubelet[3742]: I0514 00:25:16.097565 3742 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:25:16.098180 kubelet[3742]: I0514 00:25:16.098146 3742 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:25:16.098348 kubelet[3742]: I0514 00:25:16.098182 3742 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-c871d2567c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 00:25:16.098442 kubelet[3742]: I0514 00:25:16.098427 3742 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:25:16.098442 kubelet[3742]: I0514 00:25:16.098437 3742 container_manager_linux.go:304] "Creating device plugin manager" May 14 00:25:16.098658 kubelet[3742]: I0514 00:25:16.098644 3742 state_mem.go:36] "Initialized new in-memory state store" May 14 00:25:16.101515 kubelet[3742]: I0514 00:25:16.101497 3742 kubelet.go:446] "Attempting to sync node with API server" May 14 00:25:16.101540 kubelet[3742]: I0514 00:25:16.101527 3742 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:25:16.101574 kubelet[3742]: I0514 00:25:16.101564 3742 kubelet.go:352] "Adding apiserver pod source" May 14 00:25:16.101601 kubelet[3742]: I0514 00:25:16.101582 3742 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:25:16.103383 kubelet[3742]: W0514 00:25:16.103339 3742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.51.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.51.18:6443: connect: connection refused May 14 00:25:16.103420 kubelet[3742]: E0514 00:25:16.103406 3742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.51.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.51.18:6443: connect: connection refused" logger="UnhandledError" May 14 00:25:16.103895 kubelet[3742]: W0514 
00:25:16.103857 3742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.51.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-c871d2567c&limit=500&resourceVersion=0": dial tcp 147.75.51.18:6443: connect: connection refused May 14 00:25:16.103918 kubelet[3742]: E0514 00:25:16.103908 3742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.51.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-c871d2567c&limit=500&resourceVersion=0\": dial tcp 147.75.51.18:6443: connect: connection refused" logger="UnhandledError" May 14 00:25:16.104800 kubelet[3742]: I0514 00:25:16.104782 3742 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 00:25:16.105356 kubelet[3742]: I0514 00:25:16.105342 3742 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:25:16.105473 kubelet[3742]: W0514 00:25:16.105462 3742 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 00:25:16.106243 kubelet[3742]: I0514 00:25:16.106229 3742 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 00:25:16.106273 kubelet[3742]: I0514 00:25:16.106264 3742 server.go:1287] "Started kubelet" May 14 00:25:16.106355 kubelet[3742]: I0514 00:25:16.106318 3742 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:25:16.108040 kubelet[3742]: I0514 00:25:16.107981 3742 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:25:16.108288 kubelet[3742]: I0514 00:25:16.108271 3742 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:25:16.108359 kubelet[3742]: E0514 00:25:16.108340 3742 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:25:16.108427 kubelet[3742]: I0514 00:25:16.108412 3742 server.go:490] "Adding debug handlers to kubelet server" May 14 00:25:16.108524 kubelet[3742]: I0514 00:25:16.108510 3742 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:25:16.108546 kubelet[3742]: I0514 00:25:16.108521 3742 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 00:25:16.108625 kubelet[3742]: I0514 00:25:16.108617 3742 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 00:25:16.108664 kubelet[3742]: I0514 00:25:16.108648 3742 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:25:16.108664 kubelet[3742]: E0514 00:25:16.108652 3742 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-c871d2567c\" not found" May 14 00:25:16.108705 kubelet[3742]: I0514 00:25:16.108695 3742 reconciler.go:26] "Reconciler: start to sync state" May 14 00:25:16.111914 kubelet[3742]: E0514 00:25:16.111781 3742 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.51.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-c871d2567c?timeout=10s\": dial tcp 147.75.51.18:6443: connect: connection refused" interval="200ms" May 14 00:25:16.112739 kubelet[3742]: I0514 00:25:16.112717 3742 factory.go:221] Registration of the systemd container factory successfully May 14 00:25:16.127353 kubelet[3742]: I0514 00:25:16.127323 3742 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:25:16.127385 kubelet[3742]: E0514 00:25:16.112634 3742 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.51.18:6443/api/v1/namespaces/default/events\": dial tcp 147.75.51.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-n-c871d2567c.183f3d0f3d5b4f71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-n-c871d2567c,UID:ci-4284.0.0-n-c871d2567c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-n-c871d2567c,},FirstTimestamp:2025-05-14 00:25:16.106239857 +0000 UTC m=+0.816275801,LastTimestamp:2025-05-14 00:25:16.106239857 +0000 UTC m=+0.816275801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-n-c871d2567c,}" May 14 00:25:16.127458 kubelet[3742]: W0514 00:25:16.127306 3742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.51.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.51.18:6443: connect: connection refused May 14 00:25:16.127490 kubelet[3742]: E0514 00:25:16.127467 3742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.51.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.51.18:6443: connect: connection refused" logger="UnhandledError" May 14 00:25:16.128151 
kubelet[3742]: I0514 00:25:16.128137 3742 factory.go:221] Registration of the containerd container factory successfully May 14 00:25:16.139317 kubelet[3742]: I0514 00:25:16.139284 3742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:25:16.140313 kubelet[3742]: I0514 00:25:16.140299 3742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 00:25:16.140340 kubelet[3742]: I0514 00:25:16.140315 3742 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 00:25:16.140340 kubelet[3742]: I0514 00:25:16.140332 3742 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 14 00:25:16.140380 kubelet[3742]: I0514 00:25:16.140341 3742 kubelet.go:2388] "Starting kubelet main sync loop" May 14 00:25:16.140403 kubelet[3742]: E0514 00:25:16.140385 3742 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:25:16.140917 kubelet[3742]: I0514 00:25:16.140902 3742 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 00:25:16.140917 kubelet[3742]: I0514 00:25:16.140915 3742 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 00:25:16.140961 kubelet[3742]: I0514 00:25:16.140932 3742 state_mem.go:36] "Initialized new in-memory state store" May 14 00:25:16.141647 kubelet[3742]: W0514 00:25:16.141607 3742 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.51.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.51.18:6443: connect: connection refused May 14 00:25:16.141730 kubelet[3742]: E0514 00:25:16.141658 3742 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.51.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.51.18:6443: connect: connection refused" logger="UnhandledError" May 14 00:25:16.141844 kubelet[3742]: I0514 00:25:16.141829 3742 policy_none.go:49] "None policy: Start" May 14 00:25:16.141870 kubelet[3742]: I0514 00:25:16.141846 3742 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 00:25:16.141870 kubelet[3742]: I0514 00:25:16.141856 3742 state_mem.go:35] "Initializing new in-memory state store" May 14 00:25:16.145333 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 00:25:16.162490 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 00:25:16.164969 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
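Every reflector and certificate error above fails the same way: dial tcp 147.75.51.18:6443: connect: connection refused, because the kube-apiserver static pod is not running yet. A minimal sketch of the kind of wait-for-endpoint probe that situation calls for, with exponential backoff; the address is the one in this log, while the attempt counts and timeouts are assumptions, not kubelet behaviour.

    import socket
    import time

    def wait_for_apiserver(host: str, port: int, attempts: int = 8, base_delay: float = 0.5) -> bool:
        """Return True once a TCP connection to host:port succeeds, retrying with backoff."""
        delay = base_delay
        for _ in range(attempts):
            try:
                with socket.create_connection((host, port), timeout=2):
                    return True
            except OSError:      # connection refused / timeout, as in the errors above
                time.sleep(delay)
                delay = min(delay * 2, 10.0)
        return False

    if __name__ == "__main__":
        print(wait_for_apiserver("147.75.51.18", 6443))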
May 14 00:25:16.176209 kubelet[3742]: I0514 00:25:16.176185 3742 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:25:16.176384 kubelet[3742]: I0514 00:25:16.176364 3742 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 00:25:16.176410 kubelet[3742]: I0514 00:25:16.176385 3742 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:25:16.176530 kubelet[3742]: I0514 00:25:16.176515 3742 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:25:16.176991 kubelet[3742]: E0514 00:25:16.176975 3742 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 14 00:25:16.177019 kubelet[3742]: E0514 00:25:16.177010 3742 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-n-c871d2567c\" not found" May 14 00:25:16.248849 systemd[1]: Created slice kubepods-burstable-pod84572541beac8e2bfbbb5eb8b72dc88c.slice - libcontainer container kubepods-burstable-pod84572541beac8e2bfbbb5eb8b72dc88c.slice. May 14 00:25:16.268587 kubelet[3742]: E0514 00:25:16.268558 3742 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-c871d2567c\" not found" node="ci-4284.0.0-n-c871d2567c" May 14 00:25:16.270795 systemd[1]: Created slice kubepods-burstable-pod8988d9dcf9cfee7e1edff4189425c686.slice - libcontainer container kubepods-burstable-pod8988d9dcf9cfee7e1edff4189425c686.slice. May 14 00:25:16.278822 kubelet[3742]: I0514 00:25:16.278805 3742 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-c871d2567c" May 14 00:25:16.279184 kubelet[3742]: E0514 00:25:16.279163 3742 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.75.51.18:6443/api/v1/nodes\": dial tcp 147.75.51.18:6443: connect: connection refused" node="ci-4284.0.0-n-c871d2567c" May 14 00:25:16.281422 kubelet[3742]: E0514 00:25:16.281406 3742 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-c871d2567c\" not found" node="ci-4284.0.0-n-c871d2567c" May 14 00:25:16.283539 systemd[1]: Created slice kubepods-burstable-pod3d742d8381b07eae71f0a125905885ce.slice - libcontainer container kubepods-burstable-pod3d742d8381b07eae71f0a125905885ce.slice. 
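With cgroupDriver=systemd and CgroupsPerQOS enabled (see the Container Manager config above), only the burstable and besteffort QoS slices are created, and each pod then gets its own slice such as kubepods-burstable-pod84572541beac8e2bfbbb5eb8b72dc88c.slice. The helper below is an illustrative sketch of that naming as it appears in this log (including the dash-to-underscore rewrite visible later for the kube-proxy pod UID), not the kubelet's actual code.

    def pod_slice_name(pod_uid: str, qos_class: str = "burstable") -> str:
        """Build the per-pod systemd slice name the way it shows up in this log.

        Guaranteed-QoS pods are parented directly under kubepods.slice, which is
        why only burstable and besteffort QoS slices are created above; systemd
        slice names also replace '-' with '_' in the UID (a no-op for the static
        pod hashes in this log, which contain no dashes).
        """
        uid = pod_uid.replace("-", "_")
        if qos_class == "guaranteed":
            return f"kubepods-pod{uid}.slice"
        return f"kubepods-{qos_class}-pod{uid}.slice"

    # Matches the slice created for the kube-apiserver static pod above.
    print(pod_slice_name("84572541beac8e2bfbbb5eb8b72dc88c", "burstable"))
    # -> kubepods-burstable-pod84572541beac8e2bfbbb5eb8b72dc88c.slice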
May 14 00:25:16.284889 kubelet[3742]: E0514 00:25:16.284870 3742 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-c871d2567c\" not found" node="ci-4284.0.0-n-c871d2567c" May 14 00:25:16.310173 kubelet[3742]: I0514 00:25:16.310104 3742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8988d9dcf9cfee7e1edff4189425c686-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-c871d2567c\" (UID: \"8988d9dcf9cfee7e1edff4189425c686\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c871d2567c" May 14 00:25:16.310173 kubelet[3742]: I0514 00:25:16.310132 3742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8988d9dcf9cfee7e1edff4189425c686-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-c871d2567c\" (UID: \"8988d9dcf9cfee7e1edff4189425c686\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c871d2567c" May 14 00:25:16.310173 kubelet[3742]: I0514 00:25:16.310154 3742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d742d8381b07eae71f0a125905885ce-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-c871d2567c\" (UID: \"3d742d8381b07eae71f0a125905885ce\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-c871d2567c" May 14 00:25:16.310173 kubelet[3742]: I0514 00:25:16.310171 3742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84572541beac8e2bfbbb5eb8b72dc88c-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-c871d2567c\" (UID: \"84572541beac8e2bfbbb5eb8b72dc88c\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-c871d2567c" May 14 00:25:16.310264 kubelet[3742]: I0514 00:25:16.310188 3742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84572541beac8e2bfbbb5eb8b72dc88c-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-c871d2567c\" (UID: \"84572541beac8e2bfbbb5eb8b72dc88c\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-c871d2567c" May 14 00:25:16.310264 kubelet[3742]: I0514 00:25:16.310246 3742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84572541beac8e2bfbbb5eb8b72dc88c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-c871d2567c\" (UID: \"84572541beac8e2bfbbb5eb8b72dc88c\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-c871d2567c" May 14 00:25:16.310316 kubelet[3742]: I0514 00:25:16.310295 3742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8988d9dcf9cfee7e1edff4189425c686-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-c871d2567c\" (UID: \"8988d9dcf9cfee7e1edff4189425c686\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c871d2567c" May 14 00:25:16.310342 kubelet[3742]: I0514 00:25:16.310324 3742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8988d9dcf9cfee7e1edff4189425c686-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-c871d2567c\" (UID: 
\"8988d9dcf9cfee7e1edff4189425c686\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c871d2567c" May 14 00:25:16.310365 kubelet[3742]: I0514 00:25:16.310347 3742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8988d9dcf9cfee7e1edff4189425c686-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-c871d2567c\" (UID: \"8988d9dcf9cfee7e1edff4189425c686\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c871d2567c" May 14 00:25:16.312425 kubelet[3742]: E0514 00:25:16.312400 3742 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.51.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-c871d2567c?timeout=10s\": dial tcp 147.75.51.18:6443: connect: connection refused" interval="400ms" May 14 00:25:16.481684 kubelet[3742]: I0514 00:25:16.481650 3742 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-c871d2567c" May 14 00:25:16.482003 kubelet[3742]: E0514 00:25:16.481976 3742 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.75.51.18:6443/api/v1/nodes\": dial tcp 147.75.51.18:6443: connect: connection refused" node="ci-4284.0.0-n-c871d2567c" May 14 00:25:16.570409 containerd[2740]: time="2025-05-14T00:25:16.570328857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-c871d2567c,Uid:84572541beac8e2bfbbb5eb8b72dc88c,Namespace:kube-system,Attempt:0,}" May 14 00:25:16.580643 containerd[2740]: time="2025-05-14T00:25:16.580617657Z" level=info msg="connecting to shim 7b2752355cf24b75287e8721065e568fc509ebf9b332cae1dc8cdf57eb548408" address="unix:///run/containerd/s/ea5f06768484f9f30a5e32728127b7a46afb4002cbb774c769fd8e4d90f7790c" namespace=k8s.io protocol=ttrpc version=3 May 14 00:25:16.582768 containerd[2740]: time="2025-05-14T00:25:16.582746897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-c871d2567c,Uid:8988d9dcf9cfee7e1edff4189425c686,Namespace:kube-system,Attempt:0,}" May 14 00:25:16.586229 containerd[2740]: time="2025-05-14T00:25:16.586205577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-c871d2567c,Uid:3d742d8381b07eae71f0a125905885ce,Namespace:kube-system,Attempt:0,}" May 14 00:25:16.590923 containerd[2740]: time="2025-05-14T00:25:16.590896137Z" level=info msg="connecting to shim ebd0d78105aa1e2105bcb9397de4933dccf358577faca4723e87bcce487c1ef2" address="unix:///run/containerd/s/e5b0e31c1447cd98ef282a7becc5d4c778720f5822b4bb5c80c94af510e5fa54" namespace=k8s.io protocol=ttrpc version=3 May 14 00:25:16.594706 containerd[2740]: time="2025-05-14T00:25:16.594679697Z" level=info msg="connecting to shim b41cd57c28b6771a41a1acd4f5cbccd60de91f2f849eb309e45f0419cc4f0d89" address="unix:///run/containerd/s/92c6e4b60e1d521aa1b48545096bb4629cd6d4b9053619d64543b7ecd460a79d" namespace=k8s.io protocol=ttrpc version=3 May 14 00:25:16.618552 systemd[1]: Started cri-containerd-7b2752355cf24b75287e8721065e568fc509ebf9b332cae1dc8cdf57eb548408.scope - libcontainer container 7b2752355cf24b75287e8721065e568fc509ebf9b332cae1dc8cdf57eb548408. May 14 00:25:16.624668 systemd[1]: Started cri-containerd-b41cd57c28b6771a41a1acd4f5cbccd60de91f2f849eb309e45f0419cc4f0d89.scope - libcontainer container b41cd57c28b6771a41a1acd4f5cbccd60de91f2f849eb309e45f0419cc4f0d89. 
May 14 00:25:16.625914 systemd[1]: Started cri-containerd-ebd0d78105aa1e2105bcb9397de4933dccf358577faca4723e87bcce487c1ef2.scope - libcontainer container ebd0d78105aa1e2105bcb9397de4933dccf358577faca4723e87bcce487c1ef2. May 14 00:25:16.644957 containerd[2740]: time="2025-05-14T00:25:16.644924097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-c871d2567c,Uid:84572541beac8e2bfbbb5eb8b72dc88c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b2752355cf24b75287e8721065e568fc509ebf9b332cae1dc8cdf57eb548408\"" May 14 00:25:16.647184 containerd[2740]: time="2025-05-14T00:25:16.647163937Z" level=info msg="CreateContainer within sandbox \"7b2752355cf24b75287e8721065e568fc509ebf9b332cae1dc8cdf57eb548408\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 00:25:16.649880 containerd[2740]: time="2025-05-14T00:25:16.649855657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-c871d2567c,Uid:3d742d8381b07eae71f0a125905885ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"b41cd57c28b6771a41a1acd4f5cbccd60de91f2f849eb309e45f0419cc4f0d89\"" May 14 00:25:16.650515 containerd[2740]: time="2025-05-14T00:25:16.650493297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-c871d2567c,Uid:8988d9dcf9cfee7e1edff4189425c686,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebd0d78105aa1e2105bcb9397de4933dccf358577faca4723e87bcce487c1ef2\"" May 14 00:25:16.651304 containerd[2740]: time="2025-05-14T00:25:16.651284777Z" level=info msg="CreateContainer within sandbox \"b41cd57c28b6771a41a1acd4f5cbccd60de91f2f849eb309e45f0419cc4f0d89\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 00:25:16.651559 containerd[2740]: time="2025-05-14T00:25:16.651537697Z" level=info msg="Container f440b6ca1fb4cc5899b60672b46de6ea1905cd45c7c4857cc1c23a82c31386eb: CDI devices from CRI Config.CDIDevices: []" May 14 00:25:16.651927 containerd[2740]: time="2025-05-14T00:25:16.651907537Z" level=info msg="CreateContainer within sandbox \"ebd0d78105aa1e2105bcb9397de4933dccf358577faca4723e87bcce487c1ef2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 00:25:16.656948 containerd[2740]: time="2025-05-14T00:25:16.656919377Z" level=info msg="Container aeaaca5edeb08597d8db0a9360b8fe3f128c491f898c4f7c09b1bdc99e68ce37: CDI devices from CRI Config.CDIDevices: []" May 14 00:25:16.657488 containerd[2740]: time="2025-05-14T00:25:16.657462217Z" level=info msg="CreateContainer within sandbox \"7b2752355cf24b75287e8721065e568fc509ebf9b332cae1dc8cdf57eb548408\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f440b6ca1fb4cc5899b60672b46de6ea1905cd45c7c4857cc1c23a82c31386eb\"" May 14 00:25:16.657891 containerd[2740]: time="2025-05-14T00:25:16.657873177Z" level=info msg="StartContainer for \"f440b6ca1fb4cc5899b60672b46de6ea1905cd45c7c4857cc1c23a82c31386eb\"" May 14 00:25:16.658015 containerd[2740]: time="2025-05-14T00:25:16.657991537Z" level=info msg="Container 13ad08b691eac4afc3749f4426692a716128e72673984155abec60960429aed2: CDI devices from CRI Config.CDIDevices: []" May 14 00:25:16.658922 containerd[2740]: time="2025-05-14T00:25:16.658900097Z" level=info msg="connecting to shim f440b6ca1fb4cc5899b60672b46de6ea1905cd45c7c4857cc1c23a82c31386eb" address="unix:///run/containerd/s/ea5f06768484f9f30a5e32728127b7a46afb4002cbb774c769fd8e4d90f7790c" protocol=ttrpc version=3 May 14 00:25:16.673302 containerd[2740]: 
time="2025-05-14T00:25:16.673267417Z" level=info msg="CreateContainer within sandbox \"b41cd57c28b6771a41a1acd4f5cbccd60de91f2f849eb309e45f0419cc4f0d89\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aeaaca5edeb08597d8db0a9360b8fe3f128c491f898c4f7c09b1bdc99e68ce37\"" May 14 00:25:16.673517 containerd[2740]: time="2025-05-14T00:25:16.673484897Z" level=info msg="CreateContainer within sandbox \"ebd0d78105aa1e2105bcb9397de4933dccf358577faca4723e87bcce487c1ef2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"13ad08b691eac4afc3749f4426692a716128e72673984155abec60960429aed2\"" May 14 00:25:16.673591 containerd[2740]: time="2025-05-14T00:25:16.673567457Z" level=info msg="StartContainer for \"aeaaca5edeb08597d8db0a9360b8fe3f128c491f898c4f7c09b1bdc99e68ce37\"" May 14 00:25:16.673742 containerd[2740]: time="2025-05-14T00:25:16.673720457Z" level=info msg="StartContainer for \"13ad08b691eac4afc3749f4426692a716128e72673984155abec60960429aed2\"" May 14 00:25:16.674530 containerd[2740]: time="2025-05-14T00:25:16.674503697Z" level=info msg="connecting to shim aeaaca5edeb08597d8db0a9360b8fe3f128c491f898c4f7c09b1bdc99e68ce37" address="unix:///run/containerd/s/92c6e4b60e1d521aa1b48545096bb4629cd6d4b9053619d64543b7ecd460a79d" protocol=ttrpc version=3 May 14 00:25:16.674686 containerd[2740]: time="2025-05-14T00:25:16.674662697Z" level=info msg="connecting to shim 13ad08b691eac4afc3749f4426692a716128e72673984155abec60960429aed2" address="unix:///run/containerd/s/e5b0e31c1447cd98ef282a7becc5d4c778720f5822b4bb5c80c94af510e5fa54" protocol=ttrpc version=3 May 14 00:25:16.680494 systemd[1]: Started cri-containerd-f440b6ca1fb4cc5899b60672b46de6ea1905cd45c7c4857cc1c23a82c31386eb.scope - libcontainer container f440b6ca1fb4cc5899b60672b46de6ea1905cd45c7c4857cc1c23a82c31386eb. May 14 00:25:16.685100 systemd[1]: Started cri-containerd-13ad08b691eac4afc3749f4426692a716128e72673984155abec60960429aed2.scope - libcontainer container 13ad08b691eac4afc3749f4426692a716128e72673984155abec60960429aed2. May 14 00:25:16.686199 systemd[1]: Started cri-containerd-aeaaca5edeb08597d8db0a9360b8fe3f128c491f898c4f7c09b1bdc99e68ce37.scope - libcontainer container aeaaca5edeb08597d8db0a9360b8fe3f128c491f898c4f7c09b1bdc99e68ce37. 
May 14 00:25:16.708745 containerd[2740]: time="2025-05-14T00:25:16.708712337Z" level=info msg="StartContainer for \"f440b6ca1fb4cc5899b60672b46de6ea1905cd45c7c4857cc1c23a82c31386eb\" returns successfully" May 14 00:25:16.713130 containerd[2740]: time="2025-05-14T00:25:16.713104537Z" level=info msg="StartContainer for \"aeaaca5edeb08597d8db0a9360b8fe3f128c491f898c4f7c09b1bdc99e68ce37\" returns successfully" May 14 00:25:16.713573 kubelet[3742]: E0514 00:25:16.713547 3742 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.51.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-c871d2567c?timeout=10s\": dial tcp 147.75.51.18:6443: connect: connection refused" interval="800ms" May 14 00:25:16.714636 containerd[2740]: time="2025-05-14T00:25:16.714615377Z" level=info msg="StartContainer for \"13ad08b691eac4afc3749f4426692a716128e72673984155abec60960429aed2\" returns successfully" May 14 00:25:16.884054 kubelet[3742]: I0514 00:25:16.884027 3742 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-c871d2567c" May 14 00:25:17.145810 kubelet[3742]: E0514 00:25:17.145772 3742 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-c871d2567c\" not found" node="ci-4284.0.0-n-c871d2567c" May 14 00:25:17.149358 kubelet[3742]: E0514 00:25:17.149333 3742 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-c871d2567c\" not found" node="ci-4284.0.0-n-c871d2567c" May 14 00:25:17.151142 kubelet[3742]: E0514 00:25:17.151123 3742 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-c871d2567c\" not found" node="ci-4284.0.0-n-c871d2567c" May 14 00:25:17.890632 kubelet[3742]: E0514 00:25:17.890597 3742 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284.0.0-n-c871d2567c\" not found" node="ci-4284.0.0-n-c871d2567c" May 14 00:25:17.993509 kubelet[3742]: I0514 00:25:17.993476 3742 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284.0.0-n-c871d2567c" May 14 00:25:18.008909 kubelet[3742]: I0514 00:25:18.008880 3742 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-n-c871d2567c" May 14 00:25:18.013493 kubelet[3742]: E0514 00:25:18.013461 3742 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284.0.0-n-c871d2567c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4284.0.0-n-c871d2567c" May 14 00:25:18.013554 kubelet[3742]: I0514 00:25:18.013491 3742 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-c871d2567c" May 14 00:25:18.014941 kubelet[3742]: E0514 00:25:18.014921 3742 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284.0.0-n-c871d2567c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284.0.0-n-c871d2567c" May 14 00:25:18.014969 kubelet[3742]: I0514 00:25:18.014939 3742 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c871d2567c" May 14 00:25:18.016331 kubelet[3742]: E0514 00:25:18.016305 3742 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4284.0.0-n-c871d2567c\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c871d2567c" May 14 00:25:18.102420 kubelet[3742]: I0514 00:25:18.102400 3742 apiserver.go:52] "Watching apiserver" May 14 00:25:18.109682 kubelet[3742]: I0514 00:25:18.109663 3742 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:25:18.150928 kubelet[3742]: I0514 00:25:18.150861 3742 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-n-c871d2567c" May 14 00:25:18.150984 kubelet[3742]: I0514 00:25:18.150953 3742 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-c871d2567c" May 14 00:25:18.152222 kubelet[3742]: E0514 00:25:18.152203 3742 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284.0.0-n-c871d2567c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4284.0.0-n-c871d2567c" May 14 00:25:18.152464 kubelet[3742]: E0514 00:25:18.152447 3742 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284.0.0-n-c871d2567c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284.0.0-n-c871d2567c" May 14 00:25:19.733842 systemd[1]: Reload requested from client PID 4157 ('systemctl') (unit session-9.scope)... May 14 00:25:19.733853 systemd[1]: Reloading... May 14 00:25:19.804389 zram_generator::config[4213]: No configuration found. May 14 00:25:19.893459 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:25:19.995376 systemd[1]: Reloading finished in 261 ms. May 14 00:25:20.014125 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:25:20.032333 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:25:20.032750 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:25:20.032913 systemd[1]: kubelet.service: Consumed 1.275s CPU time, 150.4M memory peak. May 14 00:25:20.036552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:25:20.176646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:25:20.180429 (kubelet)[4273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:25:20.213712 kubelet[4273]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:25:20.213712 kubelet[4273]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 00:25:20.213712 kubelet[4273]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
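Both kubelet invocations in this log warn that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated and belong in the KubeletConfiguration file instead. A sketch that scans a kubelet environment/drop-in file for those flags; the file path varies by distribution and is an assumption here, not something this log states.

    from pathlib import Path

    # Flags the kubelet warns about in this log; each should move to config.yaml.
    DEPRECATED_FLAGS = (
        "--container-runtime-endpoint",
        "--pod-infra-container-image",
        "--volume-plugin-dir",
    )

    def deprecated_flags_in(env_file: str) -> list[str]:
        """Return which deprecated flags appear in a kubelet environment/drop-in file."""
        path = Path(env_file)
        if not path.is_file():
            return []
        text = path.read_text()
        return [flag for flag in DEPRECATED_FLAGS if flag in text]

    if __name__ == "__main__":
        # kubeadm-managed nodes often keep extra kubelet args here (assumed path).
        print(deprecated_flags_in("/var/lib/kubelet/kubeadm-flags.env"))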
May 14 00:25:20.214055 kubelet[4273]: I0514 00:25:20.213754 4273 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:25:20.219795 kubelet[4273]: I0514 00:25:20.219769 4273 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 00:25:20.219795 kubelet[4273]: I0514 00:25:20.219795 4273 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:25:20.220017 kubelet[4273]: I0514 00:25:20.220007 4273 server.go:954] "Client rotation is on, will bootstrap in background" May 14 00:25:20.221161 kubelet[4273]: I0514 00:25:20.221149 4273 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 00:25:20.223181 kubelet[4273]: I0514 00:25:20.223161 4273 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:25:20.225903 kubelet[4273]: I0514 00:25:20.225890 4273 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 00:25:20.244613 kubelet[4273]: I0514 00:25:20.244585 4273 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 00:25:20.244796 kubelet[4273]: I0514 00:25:20.244765 4273 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:25:20.244952 kubelet[4273]: I0514 00:25:20.244792 4273 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-c871d2567c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 00:25:20.245022 kubelet[4273]: I0514 00:25:20.244961 4273 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:25:20.245022 kubelet[4273]: I0514 00:25:20.244971 4273 container_manager_linux.go:304] "Creating device plugin manager" May 14 00:25:20.245064 kubelet[4273]: I0514 00:25:20.245036 4273 state_mem.go:36] "Initialized new in-memory state store" May 14 00:25:20.245336 
kubelet[4273]: I0514 00:25:20.245324 4273 kubelet.go:446] "Attempting to sync node with API server" May 14 00:25:20.245361 kubelet[4273]: I0514 00:25:20.245344 4273 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:25:20.245384 kubelet[4273]: I0514 00:25:20.245364 4273 kubelet.go:352] "Adding apiserver pod source" May 14 00:25:20.245403 kubelet[4273]: I0514 00:25:20.245385 4273 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:25:20.246099 kubelet[4273]: I0514 00:25:20.246034 4273 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 00:25:20.246564 kubelet[4273]: I0514 00:25:20.246550 4273 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:25:20.246985 kubelet[4273]: I0514 00:25:20.246971 4273 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 00:25:20.247009 kubelet[4273]: I0514 00:25:20.247002 4273 server.go:1287] "Started kubelet" May 14 00:25:20.247129 kubelet[4273]: I0514 00:25:20.247105 4273 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:25:20.247877 kubelet[4273]: I0514 00:25:20.247229 4273 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:25:20.248198 kubelet[4273]: I0514 00:25:20.248182 4273 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:25:20.249610 kubelet[4273]: E0514 00:25:20.249591 4273 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:25:20.249690 kubelet[4273]: I0514 00:25:20.249674 4273 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:25:20.249690 kubelet[4273]: I0514 00:25:20.249679 4273 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 00:25:20.249783 kubelet[4273]: I0514 00:25:20.249766 4273 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 00:25:20.249808 kubelet[4273]: E0514 00:25:20.249763 4273 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-c871d2567c\" not found" May 14 00:25:20.249829 kubelet[4273]: I0514 00:25:20.249814 4273 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:25:20.249921 kubelet[4273]: I0514 00:25:20.249906 4273 reconciler.go:26] "Reconciler: start to sync state" May 14 00:25:20.250155 kubelet[4273]: I0514 00:25:20.250137 4273 factory.go:221] Registration of the systemd container factory successfully May 14 00:25:20.250252 kubelet[4273]: I0514 00:25:20.250231 4273 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:25:20.250299 kubelet[4273]: I0514 00:25:20.250280 4273 server.go:490] "Adding debug handlers to kubelet server" May 14 00:25:20.250966 kubelet[4273]: I0514 00:25:20.250944 4273 factory.go:221] Registration of the containerd container factory successfully May 14 00:25:20.256914 kubelet[4273]: I0514 00:25:20.256883 4273 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 14 00:25:20.257823 kubelet[4273]: I0514 00:25:20.257794 4273 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 00:25:20.257823 kubelet[4273]: I0514 00:25:20.257820 4273 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 00:25:20.257918 kubelet[4273]: I0514 00:25:20.257837 4273 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 14 00:25:20.257918 kubelet[4273]: I0514 00:25:20.257844 4273 kubelet.go:2388] "Starting kubelet main sync loop" May 14 00:25:20.257918 kubelet[4273]: E0514 00:25:20.257884 4273 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:25:20.281009 kubelet[4273]: I0514 00:25:20.280992 4273 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 00:25:20.281009 kubelet[4273]: I0514 00:25:20.281008 4273 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 00:25:20.281082 kubelet[4273]: I0514 00:25:20.281027 4273 state_mem.go:36] "Initialized new in-memory state store" May 14 00:25:20.281179 kubelet[4273]: I0514 00:25:20.281167 4273 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 00:25:20.281202 kubelet[4273]: I0514 00:25:20.281180 4273 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 00:25:20.281202 kubelet[4273]: I0514 00:25:20.281199 4273 policy_none.go:49] "None policy: Start" May 14 00:25:20.281237 kubelet[4273]: I0514 00:25:20.281207 4273 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 00:25:20.281237 kubelet[4273]: I0514 00:25:20.281216 4273 state_mem.go:35] "Initializing new in-memory state store" May 14 00:25:20.281350 kubelet[4273]: I0514 00:25:20.281342 4273 state_mem.go:75] "Updated machine memory state" May 14 00:25:20.284445 kubelet[4273]: I0514 00:25:20.284425 4273 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:25:20.284593 kubelet[4273]: I0514 00:25:20.284584 4273 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 00:25:20.284630 kubelet[4273]: I0514 00:25:20.284597 4273 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:25:20.284748 kubelet[4273]: I0514 00:25:20.284736 4273 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:25:20.285195 kubelet[4273]: E0514 00:25:20.285182 4273 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 14 00:25:20.358999 kubelet[4273]: I0514 00:25:20.358980 4273 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-n-c871d2567c" May 14 00:25:20.358999 kubelet[4273]: I0514 00:25:20.358990 4273 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-c871d2567c" May 14 00:25:20.359087 kubelet[4273]: I0514 00:25:20.359021 4273 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c871d2567c" May 14 00:25:20.361731 kubelet[4273]: W0514 00:25:20.361710 4273 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 00:25:20.361911 kubelet[4273]: W0514 00:25:20.361891 4273 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 00:25:20.361993 kubelet[4273]: W0514 00:25:20.361969 4273 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 00:25:20.387654 kubelet[4273]: I0514 00:25:20.387639 4273 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-c871d2567c" May 14 00:25:20.391849 kubelet[4273]: I0514 00:25:20.391826 4273 kubelet_node_status.go:125] "Node was previously registered" node="ci-4284.0.0-n-c871d2567c" May 14 00:25:20.391900 kubelet[4273]: I0514 00:25:20.391888 4273 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284.0.0-n-c871d2567c" May 14 00:25:20.451098 kubelet[4273]: I0514 00:25:20.451074 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84572541beac8e2bfbbb5eb8b72dc88c-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-c871d2567c\" (UID: \"84572541beac8e2bfbbb5eb8b72dc88c\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-c871d2567c" May 14 00:25:20.451151 kubelet[4273]: I0514 00:25:20.451100 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8988d9dcf9cfee7e1edff4189425c686-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-c871d2567c\" (UID: \"8988d9dcf9cfee7e1edff4189425c686\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c871d2567c" May 14 00:25:20.451151 kubelet[4273]: I0514 00:25:20.451118 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8988d9dcf9cfee7e1edff4189425c686-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-c871d2567c\" (UID: \"8988d9dcf9cfee7e1edff4189425c686\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c871d2567c" May 14 00:25:20.451151 kubelet[4273]: I0514 00:25:20.451133 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84572541beac8e2bfbbb5eb8b72dc88c-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-c871d2567c\" (UID: \"84572541beac8e2bfbbb5eb8b72dc88c\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-c871d2567c" May 14 00:25:20.451266 kubelet[4273]: I0514 00:25:20.451150 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84572541beac8e2bfbbb5eb8b72dc88c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-c871d2567c\" (UID: \"84572541beac8e2bfbbb5eb8b72dc88c\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-c871d2567c" May 14 00:25:20.451266 kubelet[4273]: I0514 00:25:20.451174 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8988d9dcf9cfee7e1edff4189425c686-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-c871d2567c\" (UID: \"8988d9dcf9cfee7e1edff4189425c686\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c871d2567c" May 14 00:25:20.451266 kubelet[4273]: I0514 00:25:20.451190 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8988d9dcf9cfee7e1edff4189425c686-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-c871d2567c\" (UID: \"8988d9dcf9cfee7e1edff4189425c686\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c871d2567c" May 14 00:25:20.451266 kubelet[4273]: I0514 00:25:20.451208 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8988d9dcf9cfee7e1edff4189425c686-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-c871d2567c\" (UID: \"8988d9dcf9cfee7e1edff4189425c686\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c871d2567c" May 14 00:25:20.451266 kubelet[4273]: I0514 00:25:20.451228 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d742d8381b07eae71f0a125905885ce-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-c871d2567c\" (UID: \"3d742d8381b07eae71f0a125905885ce\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-c871d2567c" May 14 00:25:21.246548 kubelet[4273]: I0514 00:25:21.246514 4273 apiserver.go:52] "Watching apiserver" May 14 00:25:21.250641 kubelet[4273]: I0514 00:25:21.250620 4273 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:25:21.278523 kubelet[4273]: I0514 00:25:21.278476 4273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-n-c871d2567c" podStartSLOduration=1.278463897 podStartE2EDuration="1.278463897s" podCreationTimestamp="2025-05-14 00:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:25:21.278414337 +0000 UTC m=+1.093803481" watchObservedRunningTime="2025-05-14 00:25:21.278463897 +0000 UTC m=+1.093853081" May 14 00:25:21.288952 kubelet[4273]: I0514 00:25:21.288912 4273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-n-c871d2567c" podStartSLOduration=1.288900737 podStartE2EDuration="1.288900737s" podCreationTimestamp="2025-05-14 00:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:25:21.283494257 +0000 UTC m=+1.098883401" watchObservedRunningTime="2025-05-14 00:25:21.288900737 +0000 UTC m=+1.104289921" May 14 00:25:21.294985 kubelet[4273]: I0514 00:25:21.294935 4273 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c871d2567c" podStartSLOduration=1.294917457 podStartE2EDuration="1.294917457s" podCreationTimestamp="2025-05-14 00:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:25:21.288976257 +0000 UTC m=+1.104365441" watchObservedRunningTime="2025-05-14 00:25:21.294917457 +0000 UTC m=+1.110306681" May 14 00:25:24.667117 sudo[3007]: pam_unix(sudo:session): session closed for user root May 14 00:25:24.732540 sshd[3006]: Connection closed by 139.178.68.195 port 51368 May 14 00:25:24.732900 sshd-session[3004]: pam_unix(sshd:session): session closed for user core May 14 00:25:24.735803 systemd[1]: sshd@6-147.75.51.18:22-139.178.68.195:51368.service: Deactivated successfully. May 14 00:25:24.738180 systemd[1]: session-9.scope: Deactivated successfully. May 14 00:25:24.738349 systemd[1]: session-9.scope: Consumed 6.586s CPU time, 242.1M memory peak. May 14 00:25:24.739401 systemd-logind[2724]: Session 9 logged out. Waiting for processes to exit. May 14 00:25:24.740061 systemd-logind[2724]: Removed session 9. May 14 00:25:25.717468 kubelet[4273]: I0514 00:25:25.717363 4273 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 00:25:25.717780 containerd[2740]: time="2025-05-14T00:25:25.717659577Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 00:25:25.717962 kubelet[4273]: I0514 00:25:25.717806 4273 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 00:25:26.713830 systemd[1]: Created slice kubepods-besteffort-podb5216a53_f76c_4098_a13d_5db5f2590419.slice - libcontainer container kubepods-besteffort-podb5216a53_f76c_4098_a13d_5db5f2590419.slice. 
May 14 00:25:26.788567 kubelet[4273]: I0514 00:25:26.788532 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b5216a53-f76c-4098-a13d-5db5f2590419-kube-proxy\") pod \"kube-proxy-l4sz8\" (UID: \"b5216a53-f76c-4098-a13d-5db5f2590419\") " pod="kube-system/kube-proxy-l4sz8" May 14 00:25:26.788567 kubelet[4273]: I0514 00:25:26.788567 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5216a53-f76c-4098-a13d-5db5f2590419-xtables-lock\") pod \"kube-proxy-l4sz8\" (UID: \"b5216a53-f76c-4098-a13d-5db5f2590419\") " pod="kube-system/kube-proxy-l4sz8" May 14 00:25:26.788954 kubelet[4273]: I0514 00:25:26.788584 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-669f5\" (UniqueName: \"kubernetes.io/projected/b5216a53-f76c-4098-a13d-5db5f2590419-kube-api-access-669f5\") pod \"kube-proxy-l4sz8\" (UID: \"b5216a53-f76c-4098-a13d-5db5f2590419\") " pod="kube-system/kube-proxy-l4sz8" May 14 00:25:26.788954 kubelet[4273]: I0514 00:25:26.788604 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5216a53-f76c-4098-a13d-5db5f2590419-lib-modules\") pod \"kube-proxy-l4sz8\" (UID: \"b5216a53-f76c-4098-a13d-5db5f2590419\") " pod="kube-system/kube-proxy-l4sz8" May 14 00:25:26.870496 systemd[1]: Created slice kubepods-besteffort-poda61ccb92_ae4a_4d06_b050_1ca8a0dd2e2a.slice - libcontainer container kubepods-besteffort-poda61ccb92_ae4a_4d06_b050_1ca8a0dd2e2a.slice. May 14 00:25:26.889308 kubelet[4273]: I0514 00:25:26.889271 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a61ccb92-ae4a-4d06-b050-1ca8a0dd2e2a-var-lib-calico\") pod \"tigera-operator-789496d6f5-sc5z5\" (UID: \"a61ccb92-ae4a-4d06-b050-1ca8a0dd2e2a\") " pod="tigera-operator/tigera-operator-789496d6f5-sc5z5" May 14 00:25:26.889393 kubelet[4273]: I0514 00:25:26.889363 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kqfl\" (UniqueName: \"kubernetes.io/projected/a61ccb92-ae4a-4d06-b050-1ca8a0dd2e2a-kube-api-access-6kqfl\") pod \"tigera-operator-789496d6f5-sc5z5\" (UID: \"a61ccb92-ae4a-4d06-b050-1ca8a0dd2e2a\") " pod="tigera-operator/tigera-operator-789496d6f5-sc5z5" May 14 00:25:27.032418 containerd[2740]: time="2025-05-14T00:25:27.032344457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l4sz8,Uid:b5216a53-f76c-4098-a13d-5db5f2590419,Namespace:kube-system,Attempt:0,}" May 14 00:25:27.041286 containerd[2740]: time="2025-05-14T00:25:27.041255097Z" level=info msg="connecting to shim 57500641735e7bd4f4c649d0b41957d0c17f22edef19da6a83d97ac4a3fc283c" address="unix:///run/containerd/s/6144497ee133e6628e3d7d71c01594896b6556f047b766b1f68d6fb8f4a9c267" namespace=k8s.io protocol=ttrpc version=3 May 14 00:25:27.067557 systemd[1]: Started cri-containerd-57500641735e7bd4f4c649d0b41957d0c17f22edef19da6a83d97ac4a3fc283c.scope - libcontainer container 57500641735e7bd4f4c649d0b41957d0c17f22edef19da6a83d97ac4a3fc283c. 
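
Editor's note: the UniqueName strings in the VerifyControllerAttachedVolume entries above follow one pattern for these volume types: <plugin name>/<pod UID>-<volume name>. A small sketch that rebuilds the same strings from their parts (uniqueVolumeName is an illustrative helper, not the reconciler's actual function):

    package main

    import "fmt"

    // uniqueVolumeName mirrors the UniqueName values logged by the reconciler:
    // plugin + "/" + podUID + "-" + volumeName.
    func uniqueVolumeName(plugin, podUID, volume string) string {
        return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
    }

    func main() {
        uid := "b5216a53-f76c-4098-a13d-5db5f2590419" // kube-proxy-l4sz8 pod UID from the log
        fmt.Println(uniqueVolumeName("kubernetes.io/configmap", uid, "kube-proxy"))
        fmt.Println(uniqueVolumeName("kubernetes.io/host-path", uid, "xtables-lock"))
        fmt.Println(uniqueVolumeName("kubernetes.io/projected", uid, "kube-api-access-669f5"))
    }
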
May 14 00:25:27.084381 containerd[2740]: time="2025-05-14T00:25:27.084342017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l4sz8,Uid:b5216a53-f76c-4098-a13d-5db5f2590419,Namespace:kube-system,Attempt:0,} returns sandbox id \"57500641735e7bd4f4c649d0b41957d0c17f22edef19da6a83d97ac4a3fc283c\"" May 14 00:25:27.086574 containerd[2740]: time="2025-05-14T00:25:27.086549857Z" level=info msg="CreateContainer within sandbox \"57500641735e7bd4f4c649d0b41957d0c17f22edef19da6a83d97ac4a3fc283c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 00:25:27.091785 containerd[2740]: time="2025-05-14T00:25:27.091758697Z" level=info msg="Container becc867d976838f66f2438f4d67a32715fa53c3cef44d8e0d99e96a76a021cc1: CDI devices from CRI Config.CDIDevices: []" May 14 00:25:27.095774 containerd[2740]: time="2025-05-14T00:25:27.095744257Z" level=info msg="CreateContainer within sandbox \"57500641735e7bd4f4c649d0b41957d0c17f22edef19da6a83d97ac4a3fc283c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"becc867d976838f66f2438f4d67a32715fa53c3cef44d8e0d99e96a76a021cc1\"" May 14 00:25:27.096167 containerd[2740]: time="2025-05-14T00:25:27.096147857Z" level=info msg="StartContainer for \"becc867d976838f66f2438f4d67a32715fa53c3cef44d8e0d99e96a76a021cc1\"" May 14 00:25:27.097432 containerd[2740]: time="2025-05-14T00:25:27.097410577Z" level=info msg="connecting to shim becc867d976838f66f2438f4d67a32715fa53c3cef44d8e0d99e96a76a021cc1" address="unix:///run/containerd/s/6144497ee133e6628e3d7d71c01594896b6556f047b766b1f68d6fb8f4a9c267" protocol=ttrpc version=3 May 14 00:25:27.118490 systemd[1]: Started cri-containerd-becc867d976838f66f2438f4d67a32715fa53c3cef44d8e0d99e96a76a021cc1.scope - libcontainer container becc867d976838f66f2438f4d67a32715fa53c3cef44d8e0d99e96a76a021cc1. May 14 00:25:27.146023 containerd[2740]: time="2025-05-14T00:25:27.145995177Z" level=info msg="StartContainer for \"becc867d976838f66f2438f4d67a32715fa53c3cef44d8e0d99e96a76a021cc1\" returns successfully" May 14 00:25:27.173521 containerd[2740]: time="2025-05-14T00:25:27.173496177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-sc5z5,Uid:a61ccb92-ae4a-4d06-b050-1ca8a0dd2e2a,Namespace:tigera-operator,Attempt:0,}" May 14 00:25:27.181969 containerd[2740]: time="2025-05-14T00:25:27.181937097Z" level=info msg="connecting to shim 317dbac3a8088db8d0baf83455256b06900b01aded32488ce7f9e34a3814ef4b" address="unix:///run/containerd/s/23dafc52df13e62b8d9019a74a551fc35658eae60811239cfd9d602b7c04cd61" namespace=k8s.io protocol=ttrpc version=3 May 14 00:25:27.207572 systemd[1]: Started cri-containerd-317dbac3a8088db8d0baf83455256b06900b01aded32488ce7f9e34a3814ef4b.scope - libcontainer container 317dbac3a8088db8d0baf83455256b06900b01aded32488ce7f9e34a3814ef4b. 
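
Editor's note: the "connecting to shim" entries above point at unix sockets under /run/containerd/s/, over which containerd speaks ttrpc to the per-sandbox runtime shim. A minimal sketch of reaching such a socket from Go, assuming only that the path exists on this node; the real client layers a ttrpc client on top, which is not shown:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Socket path copied from the kube-proxy-l4sz8 "connecting to shim" entry.
        // Dialing only shows a listener is present; the shim protocol itself is ttrpc.
        path := "/run/containerd/s/6144497ee133e6628e3d7d71c01594896b6556f047b766b1f68d6fb8f4a9c267"
        conn, err := net.DialTimeout("unix", path, 2*time.Second)
        if err != nil {
            fmt.Println("shim socket not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("connected to shim socket at", path)
    }
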
May 14 00:25:27.232558 containerd[2740]: time="2025-05-14T00:25:27.232523017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-sc5z5,Uid:a61ccb92-ae4a-4d06-b050-1ca8a0dd2e2a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"317dbac3a8088db8d0baf83455256b06900b01aded32488ce7f9e34a3814ef4b\"" May 14 00:25:27.233718 containerd[2740]: time="2025-05-14T00:25:27.233697577Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 00:25:27.281031 kubelet[4273]: I0514 00:25:27.280989 4273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l4sz8" podStartSLOduration=1.280971857 podStartE2EDuration="1.280971857s" podCreationTimestamp="2025-05-14 00:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:25:27.280925577 +0000 UTC m=+7.096314761" watchObservedRunningTime="2025-05-14 00:25:27.280971857 +0000 UTC m=+7.096361041" May 14 00:25:27.901084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057979745.mount: Deactivated successfully. May 14 00:25:27.996238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount733224279.mount: Deactivated successfully. May 14 00:25:28.630705 containerd[2740]: time="2025-05-14T00:25:28.630583817Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 14 00:25:28.630705 containerd[2740]: time="2025-05-14T00:25:28.630585177Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:28.631580 containerd[2740]: time="2025-05-14T00:25:28.631532817Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:28.633250 containerd[2740]: time="2025-05-14T00:25:28.633199177Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:28.633943 containerd[2740]: time="2025-05-14T00:25:28.633916257Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.40018916s" May 14 00:25:28.634098 containerd[2740]: time="2025-05-14T00:25:28.634021737Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 14 00:25:28.635544 containerd[2740]: time="2025-05-14T00:25:28.635505537Z" level=info msg="CreateContainer within sandbox \"317dbac3a8088db8d0baf83455256b06900b01aded32488ce7f9e34a3814ef4b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 00:25:28.638999 containerd[2740]: time="2025-05-14T00:25:28.638975217Z" level=info msg="Container 0172bd71f2248cb7ec8f4d01cf487438390ebb13fcb2251cb6e2dd1c942969a9: CDI devices from CRI Config.CDIDevices: []" May 14 00:25:28.641764 containerd[2740]: time="2025-05-14T00:25:28.641739737Z" level=info msg="CreateContainer within sandbox 
\"317dbac3a8088db8d0baf83455256b06900b01aded32488ce7f9e34a3814ef4b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0172bd71f2248cb7ec8f4d01cf487438390ebb13fcb2251cb6e2dd1c942969a9\"" May 14 00:25:28.642027 containerd[2740]: time="2025-05-14T00:25:28.642008857Z" level=info msg="StartContainer for \"0172bd71f2248cb7ec8f4d01cf487438390ebb13fcb2251cb6e2dd1c942969a9\"" May 14 00:25:28.642730 containerd[2740]: time="2025-05-14T00:25:28.642709737Z" level=info msg="connecting to shim 0172bd71f2248cb7ec8f4d01cf487438390ebb13fcb2251cb6e2dd1c942969a9" address="unix:///run/containerd/s/23dafc52df13e62b8d9019a74a551fc35658eae60811239cfd9d602b7c04cd61" protocol=ttrpc version=3 May 14 00:25:28.683499 systemd[1]: Started cri-containerd-0172bd71f2248cb7ec8f4d01cf487438390ebb13fcb2251cb6e2dd1c942969a9.scope - libcontainer container 0172bd71f2248cb7ec8f4d01cf487438390ebb13fcb2251cb6e2dd1c942969a9. May 14 00:25:28.703476 containerd[2740]: time="2025-05-14T00:25:28.703444857Z" level=info msg="StartContainer for \"0172bd71f2248cb7ec8f4d01cf487438390ebb13fcb2251cb6e2dd1c942969a9\" returns successfully" May 14 00:25:29.284927 kubelet[4273]: I0514 00:25:29.284878 4273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-sc5z5" podStartSLOduration=1.8836622570000001 podStartE2EDuration="3.284856497s" podCreationTimestamp="2025-05-14 00:25:26 +0000 UTC" firstStartedPulling="2025-05-14 00:25:27.233352377 +0000 UTC m=+7.048741561" lastFinishedPulling="2025-05-14 00:25:28.634546617 +0000 UTC m=+8.449935801" observedRunningTime="2025-05-14 00:25:29.284834417 +0000 UTC m=+9.100223641" watchObservedRunningTime="2025-05-14 00:25:29.284856497 +0000 UTC m=+9.100245641" May 14 00:25:30.691446 update_engine[2734]: I20250514 00:25:30.691388 2734 update_attempter.cc:509] Updating boot flags... May 14 00:25:30.731383 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (4893) May 14 00:25:30.760383 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (4893) May 14 00:25:32.345901 systemd[1]: Created slice kubepods-besteffort-poddf55a364_c469_465a_bb4e_e43be7d0326b.slice - libcontainer container kubepods-besteffort-poddf55a364_c469_465a_bb4e_e43be7d0326b.slice. May 14 00:25:32.363542 systemd[1]: Created slice kubepods-besteffort-pod6f081908_8d83_4f17_ae5e_8205debeff4b.slice - libcontainer container kubepods-besteffort-pod6f081908_8d83_4f17_ae5e_8205debeff4b.slice. 
May 14 00:25:32.426556 kubelet[4273]: I0514 00:25:32.426460 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6f081908-8d83-4f17-ae5e-8205debeff4b-cni-net-dir\") pod \"calico-node-jsztg\" (UID: \"6f081908-8d83-4f17-ae5e-8205debeff4b\") " pod="calico-system/calico-node-jsztg" May 14 00:25:32.426556 kubelet[4273]: I0514 00:25:32.426502 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6f081908-8d83-4f17-ae5e-8205debeff4b-flexvol-driver-host\") pod \"calico-node-jsztg\" (UID: \"6f081908-8d83-4f17-ae5e-8205debeff4b\") " pod="calico-system/calico-node-jsztg" May 14 00:25:32.426556 kubelet[4273]: I0514 00:25:32.426522 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df55a364-c469-465a-bb4e-e43be7d0326b-tigera-ca-bundle\") pod \"calico-typha-65656997bc-qll4h\" (UID: \"df55a364-c469-465a-bb4e-e43be7d0326b\") " pod="calico-system/calico-typha-65656997bc-qll4h" May 14 00:25:32.426556 kubelet[4273]: I0514 00:25:32.426539 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9tth\" (UniqueName: \"kubernetes.io/projected/df55a364-c469-465a-bb4e-e43be7d0326b-kube-api-access-p9tth\") pod \"calico-typha-65656997bc-qll4h\" (UID: \"df55a364-c469-465a-bb4e-e43be7d0326b\") " pod="calico-system/calico-typha-65656997bc-qll4h" May 14 00:25:32.426949 kubelet[4273]: I0514 00:25:32.426616 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6f081908-8d83-4f17-ae5e-8205debeff4b-cni-log-dir\") pod \"calico-node-jsztg\" (UID: \"6f081908-8d83-4f17-ae5e-8205debeff4b\") " pod="calico-system/calico-node-jsztg" May 14 00:25:32.426949 kubelet[4273]: I0514 00:25:32.426669 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f081908-8d83-4f17-ae5e-8205debeff4b-lib-modules\") pod \"calico-node-jsztg\" (UID: \"6f081908-8d83-4f17-ae5e-8205debeff4b\") " pod="calico-system/calico-node-jsztg" May 14 00:25:32.426949 kubelet[4273]: I0514 00:25:32.426743 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6f081908-8d83-4f17-ae5e-8205debeff4b-var-run-calico\") pod \"calico-node-jsztg\" (UID: \"6f081908-8d83-4f17-ae5e-8205debeff4b\") " pod="calico-system/calico-node-jsztg" May 14 00:25:32.426949 kubelet[4273]: I0514 00:25:32.426782 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6f081908-8d83-4f17-ae5e-8205debeff4b-var-lib-calico\") pod \"calico-node-jsztg\" (UID: \"6f081908-8d83-4f17-ae5e-8205debeff4b\") " pod="calico-system/calico-node-jsztg" May 14 00:25:32.426949 kubelet[4273]: I0514 00:25:32.426802 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f081908-8d83-4f17-ae5e-8205debeff4b-xtables-lock\") pod \"calico-node-jsztg\" (UID: \"6f081908-8d83-4f17-ae5e-8205debeff4b\") " pod="calico-system/calico-node-jsztg" May 14 
00:25:32.427057 kubelet[4273]: I0514 00:25:32.426818 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6f081908-8d83-4f17-ae5e-8205debeff4b-policysync\") pod \"calico-node-jsztg\" (UID: \"6f081908-8d83-4f17-ae5e-8205debeff4b\") " pod="calico-system/calico-node-jsztg" May 14 00:25:32.427057 kubelet[4273]: I0514 00:25:32.426834 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f081908-8d83-4f17-ae5e-8205debeff4b-tigera-ca-bundle\") pod \"calico-node-jsztg\" (UID: \"6f081908-8d83-4f17-ae5e-8205debeff4b\") " pod="calico-system/calico-node-jsztg" May 14 00:25:32.427057 kubelet[4273]: I0514 00:25:32.426849 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6f081908-8d83-4f17-ae5e-8205debeff4b-node-certs\") pod \"calico-node-jsztg\" (UID: \"6f081908-8d83-4f17-ae5e-8205debeff4b\") " pod="calico-system/calico-node-jsztg" May 14 00:25:32.427057 kubelet[4273]: I0514 00:25:32.426864 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6f081908-8d83-4f17-ae5e-8205debeff4b-cni-bin-dir\") pod \"calico-node-jsztg\" (UID: \"6f081908-8d83-4f17-ae5e-8205debeff4b\") " pod="calico-system/calico-node-jsztg" May 14 00:25:32.427057 kubelet[4273]: I0514 00:25:32.426878 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzxqw\" (UniqueName: \"kubernetes.io/projected/6f081908-8d83-4f17-ae5e-8205debeff4b-kube-api-access-qzxqw\") pod \"calico-node-jsztg\" (UID: \"6f081908-8d83-4f17-ae5e-8205debeff4b\") " pod="calico-system/calico-node-jsztg" May 14 00:25:32.427159 kubelet[4273]: I0514 00:25:32.426902 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/df55a364-c469-465a-bb4e-e43be7d0326b-typha-certs\") pod \"calico-typha-65656997bc-qll4h\" (UID: \"df55a364-c469-465a-bb4e-e43be7d0326b\") " pod="calico-system/calico-typha-65656997bc-qll4h" May 14 00:25:32.494542 kubelet[4273]: E0514 00:25:32.493269 4273 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mx9xb" podUID="c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a" May 14 00:25:32.527147 kubelet[4273]: I0514 00:25:32.527102 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a-registration-dir\") pod \"csi-node-driver-mx9xb\" (UID: \"c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a\") " pod="calico-system/csi-node-driver-mx9xb" May 14 00:25:32.527251 kubelet[4273]: I0514 00:25:32.527204 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a-kubelet-dir\") pod \"csi-node-driver-mx9xb\" (UID: \"c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a\") " pod="calico-system/csi-node-driver-mx9xb" May 14 00:25:32.527251 kubelet[4273]: I0514 00:25:32.527234 
4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a-socket-dir\") pod \"csi-node-driver-mx9xb\" (UID: \"c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a\") " pod="calico-system/csi-node-driver-mx9xb" May 14 00:25:32.527387 kubelet[4273]: I0514 00:25:32.527353 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prnpq\" (UniqueName: \"kubernetes.io/projected/c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a-kube-api-access-prnpq\") pod \"csi-node-driver-mx9xb\" (UID: \"c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a\") " pod="calico-system/csi-node-driver-mx9xb" May 14 00:25:32.527449 kubelet[4273]: I0514 00:25:32.527422 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a-varrun\") pod \"csi-node-driver-mx9xb\" (UID: \"c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a\") " pod="calico-system/csi-node-driver-mx9xb" May 14 00:25:32.527743 kubelet[4273]: E0514 00:25:32.527723 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.527767 kubelet[4273]: W0514 00:25:32.527745 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.527787 kubelet[4273]: E0514 00:25:32.527774 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.528084 kubelet[4273]: E0514 00:25:32.528076 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.528084 kubelet[4273]: W0514 00:25:32.528084 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.528174 kubelet[4273]: E0514 00:25:32.528095 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.528300 kubelet[4273]: E0514 00:25:32.528289 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.528300 kubelet[4273]: W0514 00:25:32.528296 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.528358 kubelet[4273]: E0514 00:25:32.528307 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:25:32.528588 kubelet[4273]: E0514 00:25:32.528575 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.528588 kubelet[4273]: W0514 00:25:32.528584 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.528660 kubelet[4273]: E0514 00:25:32.528597 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.528834 kubelet[4273]: E0514 00:25:32.528822 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.528834 kubelet[4273]: W0514 00:25:32.528831 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.528880 kubelet[4273]: E0514 00:25:32.528842 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.529037 kubelet[4273]: E0514 00:25:32.529026 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.529037 kubelet[4273]: W0514 00:25:32.529034 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.529086 kubelet[4273]: E0514 00:25:32.529057 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.529215 kubelet[4273]: E0514 00:25:32.529205 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.529215 kubelet[4273]: W0514 00:25:32.529212 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.529266 kubelet[4273]: E0514 00:25:32.529227 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.529428 kubelet[4273]: E0514 00:25:32.529417 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.529428 kubelet[4273]: W0514 00:25:32.529425 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.529472 kubelet[4273]: E0514 00:25:32.529440 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:25:32.529639 kubelet[4273]: E0514 00:25:32.529629 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.529639 kubelet[4273]: W0514 00:25:32.529636 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.529682 kubelet[4273]: E0514 00:25:32.529650 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.529851 kubelet[4273]: E0514 00:25:32.529841 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.529851 kubelet[4273]: W0514 00:25:32.529848 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.529894 kubelet[4273]: E0514 00:25:32.529872 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.530038 kubelet[4273]: E0514 00:25:32.530027 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.530038 kubelet[4273]: W0514 00:25:32.530035 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.530080 kubelet[4273]: E0514 00:25:32.530050 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.530182 kubelet[4273]: E0514 00:25:32.530174 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.530205 kubelet[4273]: W0514 00:25:32.530183 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.530205 kubelet[4273]: E0514 00:25:32.530198 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.530383 kubelet[4273]: E0514 00:25:32.530376 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.530407 kubelet[4273]: W0514 00:25:32.530383 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.530407 kubelet[4273]: E0514 00:25:32.530394 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:25:32.530534 kubelet[4273]: E0514 00:25:32.530524 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.530534 kubelet[4273]: W0514 00:25:32.530531 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.530580 kubelet[4273]: E0514 00:25:32.530541 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.530784 kubelet[4273]: E0514 00:25:32.530773 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.530784 kubelet[4273]: W0514 00:25:32.530781 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.530829 kubelet[4273]: E0514 00:25:32.530792 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.530991 kubelet[4273]: E0514 00:25:32.530980 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.530991 kubelet[4273]: W0514 00:25:32.530988 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.531029 kubelet[4273]: E0514 00:25:32.530995 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.531182 kubelet[4273]: E0514 00:25:32.531173 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.531182 kubelet[4273]: W0514 00:25:32.531180 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.531225 kubelet[4273]: E0514 00:25:32.531186 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.531362 kubelet[4273]: E0514 00:25:32.531351 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.531362 kubelet[4273]: W0514 00:25:32.531359 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.531410 kubelet[4273]: E0514 00:25:32.531367 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:25:32.535917 kubelet[4273]: E0514 00:25:32.535902 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.535917 kubelet[4273]: W0514 00:25:32.535915 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.535960 kubelet[4273]: E0514 00:25:32.535929 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.536092 kubelet[4273]: E0514 00:25:32.536081 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.536092 kubelet[4273]: W0514 00:25:32.536089 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.536132 kubelet[4273]: E0514 00:25:32.536097 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.628637 kubelet[4273]: E0514 00:25:32.628615 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.628637 kubelet[4273]: W0514 00:25:32.628629 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.628751 kubelet[4273]: E0514 00:25:32.628643 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.628869 kubelet[4273]: E0514 00:25:32.628853 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.628869 kubelet[4273]: W0514 00:25:32.628861 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.628929 kubelet[4273]: E0514 00:25:32.628872 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.629118 kubelet[4273]: E0514 00:25:32.629107 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.629118 kubelet[4273]: W0514 00:25:32.629115 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.629163 kubelet[4273]: E0514 00:25:32.629139 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:25:32.629343 kubelet[4273]: E0514 00:25:32.629333 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.629343 kubelet[4273]: W0514 00:25:32.629340 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.629402 kubelet[4273]: E0514 00:25:32.629350 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.629527 kubelet[4273]: E0514 00:25:32.629516 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.629527 kubelet[4273]: W0514 00:25:32.629524 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.629569 kubelet[4273]: E0514 00:25:32.629535 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.629763 kubelet[4273]: E0514 00:25:32.629753 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.629763 kubelet[4273]: W0514 00:25:32.629761 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.629805 kubelet[4273]: E0514 00:25:32.629772 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.629918 kubelet[4273]: E0514 00:25:32.629908 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.629918 kubelet[4273]: W0514 00:25:32.629915 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.629956 kubelet[4273]: E0514 00:25:32.629938 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.630094 kubelet[4273]: E0514 00:25:32.630084 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.630120 kubelet[4273]: W0514 00:25:32.630093 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.630120 kubelet[4273]: E0514 00:25:32.630112 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:25:32.630241 kubelet[4273]: E0514 00:25:32.630234 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.630262 kubelet[4273]: W0514 00:25:32.630241 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.630262 kubelet[4273]: E0514 00:25:32.630259 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.630390 kubelet[4273]: E0514 00:25:32.630383 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.630419 kubelet[4273]: W0514 00:25:32.630390 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.630439 kubelet[4273]: E0514 00:25:32.630416 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.630539 kubelet[4273]: E0514 00:25:32.630529 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.630539 kubelet[4273]: W0514 00:25:32.630538 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.630580 kubelet[4273]: E0514 00:25:32.630556 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.630679 kubelet[4273]: E0514 00:25:32.630671 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.630702 kubelet[4273]: W0514 00:25:32.630679 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.630702 kubelet[4273]: E0514 00:25:32.630689 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.630902 kubelet[4273]: E0514 00:25:32.630894 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.630924 kubelet[4273]: W0514 00:25:32.630902 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.630924 kubelet[4273]: E0514 00:25:32.630912 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:25:32.631105 kubelet[4273]: E0514 00:25:32.631095 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.631129 kubelet[4273]: W0514 00:25:32.631106 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.631129 kubelet[4273]: E0514 00:25:32.631120 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.631326 kubelet[4273]: E0514 00:25:32.631312 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.631347 kubelet[4273]: W0514 00:25:32.631323 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.631347 kubelet[4273]: E0514 00:25:32.631337 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.631528 kubelet[4273]: E0514 00:25:32.631513 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.631592 kubelet[4273]: W0514 00:25:32.631527 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.631592 kubelet[4273]: E0514 00:25:32.631543 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.631747 kubelet[4273]: E0514 00:25:32.631736 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.631747 kubelet[4273]: W0514 00:25:32.631744 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.631788 kubelet[4273]: E0514 00:25:32.631763 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.631884 kubelet[4273]: E0514 00:25:32.631874 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.631884 kubelet[4273]: W0514 00:25:32.631882 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.631928 kubelet[4273]: E0514 00:25:32.631897 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:25:32.632012 kubelet[4273]: E0514 00:25:32.632005 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.632032 kubelet[4273]: W0514 00:25:32.632015 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.632032 kubelet[4273]: E0514 00:25:32.632026 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.632261 kubelet[4273]: E0514 00:25:32.632254 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.632284 kubelet[4273]: W0514 00:25:32.632261 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.632284 kubelet[4273]: E0514 00:25:32.632272 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.632479 kubelet[4273]: E0514 00:25:32.632470 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.632499 kubelet[4273]: W0514 00:25:32.632479 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.632499 kubelet[4273]: E0514 00:25:32.632489 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.632686 kubelet[4273]: E0514 00:25:32.632677 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.632713 kubelet[4273]: W0514 00:25:32.632686 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.632713 kubelet[4273]: E0514 00:25:32.632697 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.632866 kubelet[4273]: E0514 00:25:32.632858 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.632886 kubelet[4273]: W0514 00:25:32.632866 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.632886 kubelet[4273]: E0514 00:25:32.632876 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:25:32.633072 kubelet[4273]: E0514 00:25:32.633064 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.633095 kubelet[4273]: W0514 00:25:32.633071 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.633095 kubelet[4273]: E0514 00:25:32.633079 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.633443 kubelet[4273]: E0514 00:25:32.633433 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.633467 kubelet[4273]: W0514 00:25:32.633443 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.633467 kubelet[4273]: E0514 00:25:32.633452 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.639463 kubelet[4273]: E0514 00:25:32.639450 4273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:25:32.639489 kubelet[4273]: W0514 00:25:32.639463 4273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:25:32.639489 kubelet[4273]: E0514 00:25:32.639475 4273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:25:32.647819 containerd[2740]: time="2025-05-14T00:25:32.647790069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65656997bc-qll4h,Uid:df55a364-c469-465a-bb4e-e43be7d0326b,Namespace:calico-system,Attempt:0,}" May 14 00:25:32.666341 containerd[2740]: time="2025-05-14T00:25:32.666308916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jsztg,Uid:6f081908-8d83-4f17-ae5e-8205debeff4b,Namespace:calico-system,Attempt:0,}" May 14 00:25:32.667941 containerd[2740]: time="2025-05-14T00:25:32.667919371Z" level=info msg="connecting to shim 97ebdf9c4714793d537dcbe9c8b054d790db105cb4e442a7f3b4a3aba7eacc52" address="unix:///run/containerd/s/c43022edf30d4b36b3ed4c2681d75adb0eec14811c114feb994f8a2a1c0df9e0" namespace=k8s.io protocol=ttrpc version=3 May 14 00:25:32.672917 containerd[2740]: time="2025-05-14T00:25:32.672895536Z" level=info msg="connecting to shim 6049011c1253d105c245ec7c78e7c102b2f675c3de83c40f7828eef420339e86" address="unix:///run/containerd/s/a8fae552cd85d4c3d4edf8a74df65ae43652d53f90d7ac182637393f6c0c9b6f" namespace=k8s.io protocol=ttrpc version=3 May 14 00:25:32.699497 systemd[1]: Started cri-containerd-97ebdf9c4714793d537dcbe9c8b054d790db105cb4e442a7f3b4a3aba7eacc52.scope - libcontainer container 97ebdf9c4714793d537dcbe9c8b054d790db105cb4e442a7f3b4a3aba7eacc52. 
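
Editor's note: the long run of FlexVolume probe failures above reduces to a plain encoding/json behavior: the nodeagent~uds driver executable is not on $PATH, so the driver call returns empty output, and unmarshalling an empty byte slice produces exactly the "unexpected end of JSON input" error that kubelet keeps logging. A minimal reproduction (the anonymous struct is a stand-in for whatever status structure the driver call expects, not kubelet's real type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Stand-in for the status kubelet expects back from a FlexVolume "init" call.
        var status struct {
            Status  string `json:"status"`
            Message string `json:"message"`
        }

        // The driver executable is missing, so its output is empty.
        output := []byte("")

        if err := json.Unmarshal(output, &status); err != nil {
            fmt.Println(err) // unexpected end of JSON input
        }
    }
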
May 14 00:25:32.701810 systemd[1]: Started cri-containerd-6049011c1253d105c245ec7c78e7c102b2f675c3de83c40f7828eef420339e86.scope - libcontainer container 6049011c1253d105c245ec7c78e7c102b2f675c3de83c40f7828eef420339e86. May 14 00:25:32.718412 containerd[2740]: time="2025-05-14T00:25:32.718287626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jsztg,Uid:6f081908-8d83-4f17-ae5e-8205debeff4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"6049011c1253d105c245ec7c78e7c102b2f675c3de83c40f7828eef420339e86\"" May 14 00:25:32.719948 containerd[2740]: time="2025-05-14T00:25:32.719924321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 00:25:32.724320 containerd[2740]: time="2025-05-14T00:25:32.724292080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65656997bc-qll4h,Uid:df55a364-c469-465a-bb4e-e43be7d0326b,Namespace:calico-system,Attempt:0,} returns sandbox id \"97ebdf9c4714793d537dcbe9c8b054d790db105cb4e442a7f3b4a3aba7eacc52\"" May 14 00:25:33.120325 containerd[2740]: time="2025-05-14T00:25:33.120282232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:33.120441 containerd[2740]: time="2025-05-14T00:25:33.120338832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 14 00:25:33.121040 containerd[2740]: time="2025-05-14T00:25:33.121016358Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:33.122480 containerd[2740]: time="2025-05-14T00:25:33.122456890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:33.123072 containerd[2740]: time="2025-05-14T00:25:33.123050215Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 403.095014ms" May 14 00:25:33.123117 containerd[2740]: time="2025-05-14T00:25:33.123072215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 14 00:25:33.123762 containerd[2740]: time="2025-05-14T00:25:33.123741101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 00:25:33.124533 containerd[2740]: time="2025-05-14T00:25:33.124510428Z" level=info msg="CreateContainer within sandbox \"6049011c1253d105c245ec7c78e7c102b2f675c3de83c40f7828eef420339e86\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 00:25:33.128871 containerd[2740]: time="2025-05-14T00:25:33.128842024Z" level=info msg="Container 81122568053cca9fc64f56213b634791d4db70e38a6d015d2dce4f293d4c5a9e: CDI devices from CRI Config.CDIDevices: []" May 14 00:25:33.132803 containerd[2740]: time="2025-05-14T00:25:33.132767498Z" level=info msg="CreateContainer within sandbox 
\"6049011c1253d105c245ec7c78e7c102b2f675c3de83c40f7828eef420339e86\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"81122568053cca9fc64f56213b634791d4db70e38a6d015d2dce4f293d4c5a9e\"" May 14 00:25:33.133085 containerd[2740]: time="2025-05-14T00:25:33.133062340Z" level=info msg="StartContainer for \"81122568053cca9fc64f56213b634791d4db70e38a6d015d2dce4f293d4c5a9e\"" May 14 00:25:33.134345 containerd[2740]: time="2025-05-14T00:25:33.134322911Z" level=info msg="connecting to shim 81122568053cca9fc64f56213b634791d4db70e38a6d015d2dce4f293d4c5a9e" address="unix:///run/containerd/s/a8fae552cd85d4c3d4edf8a74df65ae43652d53f90d7ac182637393f6c0c9b6f" protocol=ttrpc version=3 May 14 00:25:33.159484 systemd[1]: Started cri-containerd-81122568053cca9fc64f56213b634791d4db70e38a6d015d2dce4f293d4c5a9e.scope - libcontainer container 81122568053cca9fc64f56213b634791d4db70e38a6d015d2dce4f293d4c5a9e. May 14 00:25:33.186164 containerd[2740]: time="2025-05-14T00:25:33.186132230Z" level=info msg="StartContainer for \"81122568053cca9fc64f56213b634791d4db70e38a6d015d2dce4f293d4c5a9e\" returns successfully" May 14 00:25:33.197646 systemd[1]: cri-containerd-81122568053cca9fc64f56213b634791d4db70e38a6d015d2dce4f293d4c5a9e.scope: Deactivated successfully. May 14 00:25:33.199174 containerd[2740]: time="2025-05-14T00:25:33.199144260Z" level=info msg="received exit event container_id:\"81122568053cca9fc64f56213b634791d4db70e38a6d015d2dce4f293d4c5a9e\" id:\"81122568053cca9fc64f56213b634791d4db70e38a6d015d2dce4f293d4c5a9e\" pid:5100 exited_at:{seconds:1747182333 nanos:198881458}" May 14 00:25:33.199245 containerd[2740]: time="2025-05-14T00:25:33.199224341Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81122568053cca9fc64f56213b634791d4db70e38a6d015d2dce4f293d4c5a9e\" id:\"81122568053cca9fc64f56213b634791d4db70e38a6d015d2dce4f293d4c5a9e\" pid:5100 exited_at:{seconds:1747182333 nanos:198881458}" May 14 00:25:33.736818 containerd[2740]: time="2025-05-14T00:25:33.736777896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:33.737213 containerd[2740]: time="2025-05-14T00:25:33.736797976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 14 00:25:33.737481 containerd[2740]: time="2025-05-14T00:25:33.737462901Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:33.738941 containerd[2740]: time="2025-05-14T00:25:33.738919034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:33.739519 containerd[2740]: time="2025-05-14T00:25:33.739505119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 615.735858ms" May 14 00:25:33.739544 containerd[2740]: time="2025-05-14T00:25:33.739524759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference 
\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 14 00:25:33.740200 containerd[2740]: time="2025-05-14T00:25:33.740181725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 00:25:33.745043 containerd[2740]: time="2025-05-14T00:25:33.744813244Z" level=info msg="CreateContainer within sandbox \"97ebdf9c4714793d537dcbe9c8b054d790db105cb4e442a7f3b4a3aba7eacc52\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 00:25:33.748450 containerd[2740]: time="2025-05-14T00:25:33.748419754Z" level=info msg="Container acd5b3fa31b1ad8ac765648713f543479f6c8a9d802f10f063914f271bedeee7: CDI devices from CRI Config.CDIDevices: []" May 14 00:25:33.751854 containerd[2740]: time="2025-05-14T00:25:33.751827903Z" level=info msg="CreateContainer within sandbox \"97ebdf9c4714793d537dcbe9c8b054d790db105cb4e442a7f3b4a3aba7eacc52\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"acd5b3fa31b1ad8ac765648713f543479f6c8a9d802f10f063914f271bedeee7\"" May 14 00:25:33.752169 containerd[2740]: time="2025-05-14T00:25:33.752144226Z" level=info msg="StartContainer for \"acd5b3fa31b1ad8ac765648713f543479f6c8a9d802f10f063914f271bedeee7\"" May 14 00:25:33.753123 containerd[2740]: time="2025-05-14T00:25:33.753097474Z" level=info msg="connecting to shim acd5b3fa31b1ad8ac765648713f543479f6c8a9d802f10f063914f271bedeee7" address="unix:///run/containerd/s/c43022edf30d4b36b3ed4c2681d75adb0eec14811c114feb994f8a2a1c0df9e0" protocol=ttrpc version=3 May 14 00:25:33.781580 systemd[1]: Started cri-containerd-acd5b3fa31b1ad8ac765648713f543479f6c8a9d802f10f063914f271bedeee7.scope - libcontainer container acd5b3fa31b1ad8ac765648713f543479f6c8a9d802f10f063914f271bedeee7. May 14 00:25:33.810148 containerd[2740]: time="2025-05-14T00:25:33.810117397Z" level=info msg="StartContainer for \"acd5b3fa31b1ad8ac765648713f543479f6c8a9d802f10f063914f271bedeee7\" returns successfully" May 14 00:25:34.258784 kubelet[4273]: E0514 00:25:34.258746 4273 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mx9xb" podUID="c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a" May 14 00:25:34.295332 kubelet[4273]: I0514 00:25:34.295286 4273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-65656997bc-qll4h" podStartSLOduration=1.280227475 podStartE2EDuration="2.295272512s" podCreationTimestamp="2025-05-14 00:25:32 +0000 UTC" firstStartedPulling="2025-05-14 00:25:32.725021567 +0000 UTC m=+12.540410751" lastFinishedPulling="2025-05-14 00:25:33.740066604 +0000 UTC m=+13.555455788" observedRunningTime="2025-05-14 00:25:34.295100111 +0000 UTC m=+14.110489255" watchObservedRunningTime="2025-05-14 00:25:34.295272512 +0000 UTC m=+14.110661696" May 14 00:25:35.289130 kubelet[4273]: I0514 00:25:35.289103 4273 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:25:35.361734 containerd[2740]: time="2025-05-14T00:25:35.361693764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:35.361990 containerd[2740]: time="2025-05-14T00:25:35.361741644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 14 00:25:35.362307 containerd[2740]: 
time="2025-05-14T00:25:35.362287968Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:35.363854 containerd[2740]: time="2025-05-14T00:25:35.363837500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:35.364531 containerd[2740]: time="2025-05-14T00:25:35.364512585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 1.62430186s" May 14 00:25:35.364578 containerd[2740]: time="2025-05-14T00:25:35.364535825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 14 00:25:35.366016 containerd[2740]: time="2025-05-14T00:25:35.365994436Z" level=info msg="CreateContainer within sandbox \"6049011c1253d105c245ec7c78e7c102b2f675c3de83c40f7828eef420339e86\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 00:25:35.370561 containerd[2740]: time="2025-05-14T00:25:35.370535870Z" level=info msg="Container 63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2: CDI devices from CRI Config.CDIDevices: []" May 14 00:25:35.375051 containerd[2740]: time="2025-05-14T00:25:35.375028063Z" level=info msg="CreateContainer within sandbox \"6049011c1253d105c245ec7c78e7c102b2f675c3de83c40f7828eef420339e86\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2\"" May 14 00:25:35.375397 containerd[2740]: time="2025-05-14T00:25:35.375362986Z" level=info msg="StartContainer for \"63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2\"" May 14 00:25:35.376731 containerd[2740]: time="2025-05-14T00:25:35.376709076Z" level=info msg="connecting to shim 63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2" address="unix:///run/containerd/s/a8fae552cd85d4c3d4edf8a74df65ae43652d53f90d7ac182637393f6c0c9b6f" protocol=ttrpc version=3 May 14 00:25:35.395539 systemd[1]: Started cri-containerd-63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2.scope - libcontainer container 63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2. May 14 00:25:35.422908 containerd[2740]: time="2025-05-14T00:25:35.422877260Z" level=info msg="StartContainer for \"63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2\" returns successfully" May 14 00:25:35.797888 systemd[1]: cri-containerd-63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2.scope: Deactivated successfully. May 14 00:25:35.798210 systemd[1]: cri-containerd-63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2.scope: Consumed 848ms CPU time, 179.6M memory peak, 150.3M written to disk. 
May 14 00:25:35.798699 containerd[2740]: time="2025-05-14T00:25:35.798668018Z" level=info msg="received exit event container_id:\"63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2\" id:\"63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2\" pid:5229 exited_at:{seconds:1747182335 nanos:798463937}" May 14 00:25:35.798776 containerd[2740]: time="2025-05-14T00:25:35.798748419Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2\" id:\"63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2\" pid:5229 exited_at:{seconds:1747182335 nanos:798463937}" May 14 00:25:35.813992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2-rootfs.mount: Deactivated successfully. May 14 00:25:35.848208 kubelet[4273]: I0514 00:25:35.848185 4273 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 14 00:25:35.871721 systemd[1]: Created slice kubepods-burstable-pod85f34c01_961a_4637_a5ea_c120695df56f.slice - libcontainer container kubepods-burstable-pod85f34c01_961a_4637_a5ea_c120695df56f.slice. May 14 00:25:35.875577 systemd[1]: Created slice kubepods-burstable-pod7d50dd31_4298_4702_84aa_1d607e2c54f6.slice - libcontainer container kubepods-burstable-pod7d50dd31_4298_4702_84aa_1d607e2c54f6.slice. May 14 00:25:35.879581 systemd[1]: Created slice kubepods-besteffort-poda5ac81ee_e702_42e2_9641_780d92758acf.slice - libcontainer container kubepods-besteffort-poda5ac81ee_e702_42e2_9641_780d92758acf.slice. May 14 00:25:35.883364 systemd[1]: Created slice kubepods-besteffort-poda371c5ab_8a26_493c_ab8a_af3a3a0dee7b.slice - libcontainer container kubepods-besteffort-poda371c5ab_8a26_493c_ab8a_af3a3a0dee7b.slice. May 14 00:25:35.886917 systemd[1]: Created slice kubepods-besteffort-pod4e12767f_c168_4722_a3fd_46e1cef4da0d.slice - libcontainer container kubepods-besteffort-pod4e12767f_c168_4722_a3fd_46e1cef4da0d.slice. 
May 14 00:25:35.949127 kubelet[4273]: I0514 00:25:35.949093 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85f34c01-961a-4637-a5ea-c120695df56f-config-volume\") pod \"coredns-668d6bf9bc-s6kln\" (UID: \"85f34c01-961a-4637-a5ea-c120695df56f\") " pod="kube-system/coredns-668d6bf9bc-s6kln" May 14 00:25:35.949127 kubelet[4273]: I0514 00:25:35.949131 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5srv4\" (UniqueName: \"kubernetes.io/projected/a371c5ab-8a26-493c-ab8a-af3a3a0dee7b-kube-api-access-5srv4\") pod \"calico-apiserver-7f6599c95d-87vz5\" (UID: \"a371c5ab-8a26-493c-ab8a-af3a3a0dee7b\") " pod="calico-apiserver/calico-apiserver-7f6599c95d-87vz5" May 14 00:25:35.949293 kubelet[4273]: I0514 00:25:35.949151 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e12767f-c168-4722-a3fd-46e1cef4da0d-tigera-ca-bundle\") pod \"calico-kube-controllers-5dfc87b9df-mzz2h\" (UID: \"4e12767f-c168-4722-a3fd-46e1cef4da0d\") " pod="calico-system/calico-kube-controllers-5dfc87b9df-mzz2h" May 14 00:25:35.949293 kubelet[4273]: I0514 00:25:35.949171 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkvnf\" (UniqueName: \"kubernetes.io/projected/7d50dd31-4298-4702-84aa-1d607e2c54f6-kube-api-access-fkvnf\") pod \"coredns-668d6bf9bc-tplq2\" (UID: \"7d50dd31-4298-4702-84aa-1d607e2c54f6\") " pod="kube-system/coredns-668d6bf9bc-tplq2" May 14 00:25:35.949293 kubelet[4273]: I0514 00:25:35.949188 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a5ac81ee-e702-42e2-9641-780d92758acf-calico-apiserver-certs\") pod \"calico-apiserver-7f6599c95d-mpm9c\" (UID: \"a5ac81ee-e702-42e2-9641-780d92758acf\") " pod="calico-apiserver/calico-apiserver-7f6599c95d-mpm9c" May 14 00:25:35.949293 kubelet[4273]: I0514 00:25:35.949206 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a371c5ab-8a26-493c-ab8a-af3a3a0dee7b-calico-apiserver-certs\") pod \"calico-apiserver-7f6599c95d-87vz5\" (UID: \"a371c5ab-8a26-493c-ab8a-af3a3a0dee7b\") " pod="calico-apiserver/calico-apiserver-7f6599c95d-87vz5" May 14 00:25:35.949293 kubelet[4273]: I0514 00:25:35.949225 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmh2s\" (UniqueName: \"kubernetes.io/projected/4e12767f-c168-4722-a3fd-46e1cef4da0d-kube-api-access-lmh2s\") pod \"calico-kube-controllers-5dfc87b9df-mzz2h\" (UID: \"4e12767f-c168-4722-a3fd-46e1cef4da0d\") " pod="calico-system/calico-kube-controllers-5dfc87b9df-mzz2h" May 14 00:25:35.949476 kubelet[4273]: I0514 00:25:35.949247 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlcmd\" (UniqueName: \"kubernetes.io/projected/85f34c01-961a-4637-a5ea-c120695df56f-kube-api-access-qlcmd\") pod \"coredns-668d6bf9bc-s6kln\" (UID: \"85f34c01-961a-4637-a5ea-c120695df56f\") " pod="kube-system/coredns-668d6bf9bc-s6kln" May 14 00:25:35.949476 kubelet[4273]: I0514 00:25:35.949262 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d50dd31-4298-4702-84aa-1d607e2c54f6-config-volume\") pod \"coredns-668d6bf9bc-tplq2\" (UID: \"7d50dd31-4298-4702-84aa-1d607e2c54f6\") " pod="kube-system/coredns-668d6bf9bc-tplq2" May 14 00:25:35.949476 kubelet[4273]: I0514 00:25:35.949279 4273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z29g6\" (UniqueName: \"kubernetes.io/projected/a5ac81ee-e702-42e2-9641-780d92758acf-kube-api-access-z29g6\") pod \"calico-apiserver-7f6599c95d-mpm9c\" (UID: \"a5ac81ee-e702-42e2-9641-780d92758acf\") " pod="calico-apiserver/calico-apiserver-7f6599c95d-mpm9c" May 14 00:25:36.174767 containerd[2740]: time="2025-05-14T00:25:36.174727738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6kln,Uid:85f34c01-961a-4637-a5ea-c120695df56f,Namespace:kube-system,Attempt:0,}" May 14 00:25:36.178181 containerd[2740]: time="2025-05-14T00:25:36.178155122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tplq2,Uid:7d50dd31-4298-4702-84aa-1d607e2c54f6,Namespace:kube-system,Attempt:0,}" May 14 00:25:36.181662 containerd[2740]: time="2025-05-14T00:25:36.181637506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6599c95d-mpm9c,Uid:a5ac81ee-e702-42e2-9641-780d92758acf,Namespace:calico-apiserver,Attempt:0,}" May 14 00:25:36.186102 containerd[2740]: time="2025-05-14T00:25:36.186075137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6599c95d-87vz5,Uid:a371c5ab-8a26-493c-ab8a-af3a3a0dee7b,Namespace:calico-apiserver,Attempt:0,}" May 14 00:25:36.189612 containerd[2740]: time="2025-05-14T00:25:36.189584442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfc87b9df-mzz2h,Uid:4e12767f-c168-4722-a3fd-46e1cef4da0d,Namespace:calico-system,Attempt:0,}" May 14 00:25:36.239014 containerd[2740]: time="2025-05-14T00:25:36.238965506Z" level=error msg="Failed to destroy network for sandbox \"8a8458223b3aa602e745c3b1bb84e3828b0e91d4245ed409a5c0a84a5f954f73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.239273 containerd[2740]: time="2025-05-14T00:25:36.239237868Z" level=error msg="Failed to destroy network for sandbox \"29d89dc9ea11f7a38ed67f72f1bc28213b0019f595d8f92f1b741b337fb1ec05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.239399 containerd[2740]: time="2025-05-14T00:25:36.239366069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6kln,Uid:85f34c01-961a-4637-a5ea-c120695df56f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8458223b3aa602e745c3b1bb84e3828b0e91d4245ed409a5c0a84a5f954f73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.239621 containerd[2740]: time="2025-05-14T00:25:36.239592391Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6599c95d-87vz5,Uid:a371c5ab-8a26-493c-ab8a-af3a3a0dee7b,Namespace:calico-apiserver,Attempt:0,} failed, 
error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"29d89dc9ea11f7a38ed67f72f1bc28213b0019f595d8f92f1b741b337fb1ec05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.239681 containerd[2740]: time="2025-05-14T00:25:36.239617871Z" level=error msg="Failed to destroy network for sandbox \"a7d0cc3fc4653645f82af6f13ab340be77b0fa917d86dd518ac957cf01697ad1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.239710 kubelet[4273]: E0514 00:25:36.239612 4273 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8458223b3aa602e745c3b1bb84e3828b0e91d4245ed409a5c0a84a5f954f73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.239710 kubelet[4273]: E0514 00:25:36.239685 4273 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8458223b3aa602e745c3b1bb84e3828b0e91d4245ed409a5c0a84a5f954f73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s6kln" May 14 00:25:36.239710 kubelet[4273]: E0514 00:25:36.239705 4273 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8458223b3aa602e745c3b1bb84e3828b0e91d4245ed409a5c0a84a5f954f73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s6kln" May 14 00:25:36.239801 containerd[2740]: time="2025-05-14T00:25:36.239619311Z" level=error msg="Failed to destroy network for sandbox \"424a2848ff701484037bf1d3a7a78eaa24414f211d3e2f4809c8fd29d579ac89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.239825 kubelet[4273]: E0514 00:25:36.239710 4273 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29d89dc9ea11f7a38ed67f72f1bc28213b0019f595d8f92f1b741b337fb1ec05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.239825 kubelet[4273]: E0514 00:25:36.239747 4273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s6kln_kube-system(85f34c01-961a-4637-a5ea-c120695df56f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s6kln_kube-system(85f34c01-961a-4637-a5ea-c120695df56f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a8458223b3aa602e745c3b1bb84e3828b0e91d4245ed409a5c0a84a5f954f73\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s6kln" podUID="85f34c01-961a-4637-a5ea-c120695df56f" May 14 00:25:36.239825 kubelet[4273]: E0514 00:25:36.239756 4273 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29d89dc9ea11f7a38ed67f72f1bc28213b0019f595d8f92f1b741b337fb1ec05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f6599c95d-87vz5" May 14 00:25:36.239904 kubelet[4273]: E0514 00:25:36.239785 4273 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29d89dc9ea11f7a38ed67f72f1bc28213b0019f595d8f92f1b741b337fb1ec05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f6599c95d-87vz5" May 14 00:25:36.239904 kubelet[4273]: E0514 00:25:36.239811 4273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f6599c95d-87vz5_calico-apiserver(a371c5ab-8a26-493c-ab8a-af3a3a0dee7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f6599c95d-87vz5_calico-apiserver(a371c5ab-8a26-493c-ab8a-af3a3a0dee7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29d89dc9ea11f7a38ed67f72f1bc28213b0019f595d8f92f1b741b337fb1ec05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f6599c95d-87vz5" podUID="a371c5ab-8a26-493c-ab8a-af3a3a0dee7b" May 14 00:25:36.239973 containerd[2740]: time="2025-05-14T00:25:36.239916313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6599c95d-mpm9c,Uid:a5ac81ee-e702-42e2-9641-780d92758acf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d0cc3fc4653645f82af6f13ab340be77b0fa917d86dd518ac957cf01697ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.240056 kubelet[4273]: E0514 00:25:36.240033 4273 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d0cc3fc4653645f82af6f13ab340be77b0fa917d86dd518ac957cf01697ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.240094 kubelet[4273]: E0514 00:25:36.240069 4273 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d0cc3fc4653645f82af6f13ab340be77b0fa917d86dd518ac957cf01697ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f6599c95d-mpm9c" May 14 00:25:36.240094 kubelet[4273]: E0514 00:25:36.240085 4273 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d0cc3fc4653645f82af6f13ab340be77b0fa917d86dd518ac957cf01697ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f6599c95d-mpm9c" May 14 00:25:36.240141 kubelet[4273]: E0514 00:25:36.240122 4273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f6599c95d-mpm9c_calico-apiserver(a5ac81ee-e702-42e2-9641-780d92758acf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f6599c95d-mpm9c_calico-apiserver(a5ac81ee-e702-42e2-9641-780d92758acf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7d0cc3fc4653645f82af6f13ab340be77b0fa917d86dd518ac957cf01697ad1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f6599c95d-mpm9c" podUID="a5ac81ee-e702-42e2-9641-780d92758acf" May 14 00:25:36.240218 containerd[2740]: time="2025-05-14T00:25:36.240191675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tplq2,Uid:7d50dd31-4298-4702-84aa-1d607e2c54f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"424a2848ff701484037bf1d3a7a78eaa24414f211d3e2f4809c8fd29d579ac89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.240326 containerd[2740]: time="2025-05-14T00:25:36.240280556Z" level=error msg="Failed to destroy network for sandbox \"b787cc16de45ebafcd3c410662c6d234bfa297527055f9038da7d2424fb1b6cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.240351 kubelet[4273]: E0514 00:25:36.240310 4273 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"424a2848ff701484037bf1d3a7a78eaa24414f211d3e2f4809c8fd29d579ac89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.240430 kubelet[4273]: E0514 00:25:36.240352 4273 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"424a2848ff701484037bf1d3a7a78eaa24414f211d3e2f4809c8fd29d579ac89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tplq2" May 14 00:25:36.240430 kubelet[4273]: E0514 00:25:36.240367 4273 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"424a2848ff701484037bf1d3a7a78eaa24414f211d3e2f4809c8fd29d579ac89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tplq2" May 14 00:25:36.240430 kubelet[4273]: E0514 00:25:36.240415 4273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tplq2_kube-system(7d50dd31-4298-4702-84aa-1d607e2c54f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tplq2_kube-system(7d50dd31-4298-4702-84aa-1d607e2c54f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"424a2848ff701484037bf1d3a7a78eaa24414f211d3e2f4809c8fd29d579ac89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tplq2" podUID="7d50dd31-4298-4702-84aa-1d607e2c54f6" May 14 00:25:36.240667 containerd[2740]: time="2025-05-14T00:25:36.240641678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfc87b9df-mzz2h,Uid:4e12767f-c168-4722-a3fd-46e1cef4da0d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b787cc16de45ebafcd3c410662c6d234bfa297527055f9038da7d2424fb1b6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.241181 kubelet[4273]: E0514 00:25:36.240769 4273 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b787cc16de45ebafcd3c410662c6d234bfa297527055f9038da7d2424fb1b6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.241181 kubelet[4273]: E0514 00:25:36.240794 4273 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b787cc16de45ebafcd3c410662c6d234bfa297527055f9038da7d2424fb1b6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfc87b9df-mzz2h" May 14 00:25:36.241181 kubelet[4273]: E0514 00:25:36.240807 4273 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b787cc16de45ebafcd3c410662c6d234bfa297527055f9038da7d2424fb1b6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfc87b9df-mzz2h" May 14 00:25:36.241268 kubelet[4273]: E0514 00:25:36.240834 4273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dfc87b9df-mzz2h_calico-system(4e12767f-c168-4722-a3fd-46e1cef4da0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-5dfc87b9df-mzz2h_calico-system(4e12767f-c168-4722-a3fd-46e1cef4da0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b787cc16de45ebafcd3c410662c6d234bfa297527055f9038da7d2424fb1b6cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dfc87b9df-mzz2h" podUID="4e12767f-c168-4722-a3fd-46e1cef4da0d" May 14 00:25:36.262982 systemd[1]: Created slice kubepods-besteffort-podc3ec6e0e_bc4d_41b2_9bcc_12ee175ce28a.slice - libcontainer container kubepods-besteffort-podc3ec6e0e_bc4d_41b2_9bcc_12ee175ce28a.slice. May 14 00:25:36.264677 containerd[2740]: time="2025-05-14T00:25:36.264652286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mx9xb,Uid:c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a,Namespace:calico-system,Attempt:0,}" May 14 00:25:36.293904 containerd[2740]: time="2025-05-14T00:25:36.293867650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 00:25:36.311304 containerd[2740]: time="2025-05-14T00:25:36.311259451Z" level=error msg="Failed to destroy network for sandbox \"de335c6b276d64a49e61155ef8c786e800758679f7f5a131e9965c190a824bc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.311640 containerd[2740]: time="2025-05-14T00:25:36.311610934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mx9xb,Uid:c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de335c6b276d64a49e61155ef8c786e800758679f7f5a131e9965c190a824bc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.311799 kubelet[4273]: E0514 00:25:36.311767 4273 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de335c6b276d64a49e61155ef8c786e800758679f7f5a131e9965c190a824bc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:25:36.312049 kubelet[4273]: E0514 00:25:36.311819 4273 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de335c6b276d64a49e61155ef8c786e800758679f7f5a131e9965c190a824bc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mx9xb" May 14 00:25:36.312049 kubelet[4273]: E0514 00:25:36.311838 4273 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de335c6b276d64a49e61155ef8c786e800758679f7f5a131e9965c190a824bc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mx9xb" May 14 00:25:36.312049 kubelet[4273]: 
E0514 00:25:36.311874 4273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mx9xb_calico-system(c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mx9xb_calico-system(c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de335c6b276d64a49e61155ef8c786e800758679f7f5a131e9965c190a824bc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mx9xb" podUID="c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a" May 14 00:25:38.946913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3855851189.mount: Deactivated successfully. May 14 00:25:38.969927 containerd[2740]: time="2025-05-14T00:25:38.969852997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 14 00:25:38.970130 containerd[2740]: time="2025-05-14T00:25:38.969903917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:38.970628 containerd[2740]: time="2025-05-14T00:25:38.970603442Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:38.971943 containerd[2740]: time="2025-05-14T00:25:38.971920210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:38.972490 containerd[2740]: time="2025-05-14T00:25:38.972468453Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 2.678565923s" May 14 00:25:38.972516 containerd[2740]: time="2025-05-14T00:25:38.972497133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 14 00:25:38.977740 containerd[2740]: time="2025-05-14T00:25:38.977717685Z" level=info msg="CreateContainer within sandbox \"6049011c1253d105c245ec7c78e7c102b2f675c3de83c40f7828eef420339e86\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 00:25:38.982672 containerd[2740]: time="2025-05-14T00:25:38.982645276Z" level=info msg="Container c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd: CDI devices from CRI Config.CDIDevices: []" May 14 00:25:38.988337 containerd[2740]: time="2025-05-14T00:25:38.988310590Z" level=info msg="CreateContainer within sandbox \"6049011c1253d105c245ec7c78e7c102b2f675c3de83c40f7828eef420339e86\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\"" May 14 00:25:38.988685 containerd[2740]: time="2025-05-14T00:25:38.988664033Z" level=info msg="StartContainer for \"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\"" May 14 00:25:38.989998 containerd[2740]: 
time="2025-05-14T00:25:38.989971001Z" level=info msg="connecting to shim c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd" address="unix:///run/containerd/s/a8fae552cd85d4c3d4edf8a74df65ae43652d53f90d7ac182637393f6c0c9b6f" protocol=ttrpc version=3 May 14 00:25:39.010537 systemd[1]: Started cri-containerd-c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd.scope - libcontainer container c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd. May 14 00:25:39.039863 containerd[2740]: time="2025-05-14T00:25:39.039838732Z" level=info msg="StartContainer for \"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" returns successfully" May 14 00:25:39.146719 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 00:25:39.146796 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 14 00:25:39.310999 kubelet[4273]: I0514 00:25:39.310900 4273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jsztg" podStartSLOduration=1.057546833 podStartE2EDuration="7.310884771s" podCreationTimestamp="2025-05-14 00:25:32 +0000 UTC" firstStartedPulling="2025-05-14 00:25:32.719682839 +0000 UTC m=+12.535072023" lastFinishedPulling="2025-05-14 00:25:38.973020777 +0000 UTC m=+18.788409961" observedRunningTime="2025-05-14 00:25:39.310503849 +0000 UTC m=+19.125893033" watchObservedRunningTime="2025-05-14 00:25:39.310884771 +0000 UTC m=+19.126273955" May 14 00:25:47.258637 containerd[2740]: time="2025-05-14T00:25:47.258585873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfc87b9df-mzz2h,Uid:4e12767f-c168-4722-a3fd-46e1cef4da0d,Namespace:calico-system,Attempt:0,}" May 14 00:25:47.259041 containerd[2740]: time="2025-05-14T00:25:47.258663514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mx9xb,Uid:c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a,Namespace:calico-system,Attempt:0,}" May 14 00:25:47.362466 systemd-networkd[2653]: cali657d4951ddc: Link UP May 14 00:25:47.362643 systemd-networkd[2653]: cali657d4951ddc: Gained carrier May 14 00:25:47.369281 containerd[2740]: 2025-05-14 00:25:47.276 [INFO][6123] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 00:25:47.369281 containerd[2740]: 2025-05-14 00:25:47.291 [INFO][6123] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--c871d2567c-k8s-calico--kube--controllers--5dfc87b9df--mzz2h-eth0 calico-kube-controllers-5dfc87b9df- calico-system 4e12767f-c168-4722-a3fd-46e1cef4da0d 670 0 2025-05-14 00:25:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5dfc87b9df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284.0.0-n-c871d2567c calico-kube-controllers-5dfc87b9df-mzz2h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali657d4951ddc [] []}} ContainerID="e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" Namespace="calico-system" Pod="calico-kube-controllers-5dfc87b9df-mzz2h" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--kube--controllers--5dfc87b9df--mzz2h-" May 14 00:25:47.369281 containerd[2740]: 2025-05-14 00:25:47.291 [INFO][6123] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" Namespace="calico-system" Pod="calico-kube-controllers-5dfc87b9df-mzz2h" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--kube--controllers--5dfc87b9df--mzz2h-eth0" May 14 00:25:47.369281 containerd[2740]: 2025-05-14 00:25:47.328 [INFO][6179] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" HandleID="k8s-pod-network.e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" Workload="ci--4284.0.0--n--c871d2567c-k8s-calico--kube--controllers--5dfc87b9df--mzz2h-eth0" May 14 00:25:47.369445 containerd[2740]: 2025-05-14 00:25:47.338 [INFO][6179] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" HandleID="k8s-pod-network.e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" Workload="ci--4284.0.0--n--c871d2567c-k8s-calico--kube--controllers--5dfc87b9df--mzz2h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000372cc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-n-c871d2567c", "pod":"calico-kube-controllers-5dfc87b9df-mzz2h", "timestamp":"2025-05-14 00:25:47.328173472 +0000 UTC"}, Hostname:"ci-4284.0.0-n-c871d2567c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:25:47.369445 containerd[2740]: 2025-05-14 00:25:47.338 [INFO][6179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:25:47.369445 containerd[2740]: 2025-05-14 00:25:47.338 [INFO][6179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:25:47.369445 containerd[2740]: 2025-05-14 00:25:47.338 [INFO][6179] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-c871d2567c' May 14 00:25:47.369445 containerd[2740]: 2025-05-14 00:25:47.339 [INFO][6179] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.369445 containerd[2740]: 2025-05-14 00:25:47.342 [INFO][6179] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.369445 containerd[2740]: 2025-05-14 00:25:47.345 [INFO][6179] ipam/ipam.go 489: Trying affinity for 192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.369445 containerd[2740]: 2025-05-14 00:25:47.347 [INFO][6179] ipam/ipam.go 155: Attempting to load block cidr=192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.369445 containerd[2740]: 2025-05-14 00:25:47.348 [INFO][6179] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.369629 containerd[2740]: 2025-05-14 00:25:47.348 [INFO][6179] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.49.192/26 handle="k8s-pod-network.e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.369629 containerd[2740]: 2025-05-14 00:25:47.349 [INFO][6179] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960 May 14 00:25:47.369629 containerd[2740]: 2025-05-14 00:25:47.352 [INFO][6179] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.49.192/26 handle="k8s-pod-network.e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.369629 containerd[2740]: 2025-05-14 00:25:47.355 [INFO][6179] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.49.193/26] block=192.168.49.192/26 handle="k8s-pod-network.e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.369629 containerd[2740]: 2025-05-14 00:25:47.355 [INFO][6179] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.49.193/26] handle="k8s-pod-network.e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.369629 containerd[2740]: 2025-05-14 00:25:47.355 [INFO][6179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 00:25:47.369629 containerd[2740]: 2025-05-14 00:25:47.355 [INFO][6179] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.193/26] IPv6=[] ContainerID="e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" HandleID="k8s-pod-network.e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" Workload="ci--4284.0.0--n--c871d2567c-k8s-calico--kube--controllers--5dfc87b9df--mzz2h-eth0" May 14 00:25:47.369757 containerd[2740]: 2025-05-14 00:25:47.357 [INFO][6123] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" Namespace="calico-system" Pod="calico-kube-controllers-5dfc87b9df-mzz2h" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--kube--controllers--5dfc87b9df--mzz2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c871d2567c-k8s-calico--kube--controllers--5dfc87b9df--mzz2h-eth0", GenerateName:"calico-kube-controllers-5dfc87b9df-", Namespace:"calico-system", SelfLink:"", UID:"4e12767f-c168-4722-a3fd-46e1cef4da0d", ResourceVersion:"670", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dfc87b9df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c871d2567c", ContainerID:"", Pod:"calico-kube-controllers-5dfc87b9df-mzz2h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.49.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali657d4951ddc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:25:47.369811 containerd[2740]: 2025-05-14 00:25:47.357 [INFO][6123] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.49.193/32] ContainerID="e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" Namespace="calico-system" Pod="calico-kube-controllers-5dfc87b9df-mzz2h" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--kube--controllers--5dfc87b9df--mzz2h-eth0" May 14 00:25:47.369811 containerd[2740]: 2025-05-14 00:25:47.357 [INFO][6123] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali657d4951ddc ContainerID="e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" Namespace="calico-system" Pod="calico-kube-controllers-5dfc87b9df-mzz2h" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--kube--controllers--5dfc87b9df--mzz2h-eth0" May 14 00:25:47.369811 containerd[2740]: 2025-05-14 00:25:47.362 [INFO][6123] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" Namespace="calico-system" Pod="calico-kube-controllers-5dfc87b9df-mzz2h" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--kube--controllers--5dfc87b9df--mzz2h-eth0" May 14 00:25:47.369871 
containerd[2740]: 2025-05-14 00:25:47.362 [INFO][6123] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" Namespace="calico-system" Pod="calico-kube-controllers-5dfc87b9df-mzz2h" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--kube--controllers--5dfc87b9df--mzz2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c871d2567c-k8s-calico--kube--controllers--5dfc87b9df--mzz2h-eth0", GenerateName:"calico-kube-controllers-5dfc87b9df-", Namespace:"calico-system", SelfLink:"", UID:"4e12767f-c168-4722-a3fd-46e1cef4da0d", ResourceVersion:"670", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dfc87b9df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c871d2567c", ContainerID:"e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960", Pod:"calico-kube-controllers-5dfc87b9df-mzz2h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.49.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali657d4951ddc", MAC:"b2:61:24:4d:2f:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:25:47.369918 containerd[2740]: 2025-05-14 00:25:47.368 [INFO][6123] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" Namespace="calico-system" Pod="calico-kube-controllers-5dfc87b9df-mzz2h" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--kube--controllers--5dfc87b9df--mzz2h-eth0" May 14 00:25:47.382286 containerd[2740]: time="2025-05-14T00:25:47.382249658Z" level=info msg="connecting to shim e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960" address="unix:///run/containerd/s/acbcb7956b13f57c2fcc6b3f4563a6f8708d6198cacc65f3880fa1b3bddca982" namespace=k8s.io protocol=ttrpc version=3 May 14 00:25:47.410552 systemd[1]: Started cri-containerd-e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960.scope - libcontainer container e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960. 
May 14 00:25:47.435306 containerd[2740]: time="2025-05-14T00:25:47.435277440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfc87b9df-mzz2h,Uid:4e12767f-c168-4722-a3fd-46e1cef4da0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960\"" May 14 00:25:47.436390 containerd[2740]: time="2025-05-14T00:25:47.436362284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 00:25:47.459032 systemd-networkd[2653]: cali18aa9a334f6: Link UP May 14 00:25:47.459198 systemd-networkd[2653]: cali18aa9a334f6: Gained carrier May 14 00:25:47.466108 containerd[2740]: 2025-05-14 00:25:47.276 [INFO][6125] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 00:25:47.466108 containerd[2740]: 2025-05-14 00:25:47.291 [INFO][6125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--c871d2567c-k8s-csi--node--driver--mx9xb-eth0 csi-node-driver- calico-system c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a 578 0 2025-05-14 00:25:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4284.0.0-n-c871d2567c csi-node-driver-mx9xb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali18aa9a334f6 [] []}} ContainerID="b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" Namespace="calico-system" Pod="csi-node-driver-mx9xb" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-csi--node--driver--mx9xb-" May 14 00:25:47.466108 containerd[2740]: 2025-05-14 00:25:47.291 [INFO][6125] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" Namespace="calico-system" Pod="csi-node-driver-mx9xb" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-csi--node--driver--mx9xb-eth0" May 14 00:25:47.466108 containerd[2740]: 2025-05-14 00:25:47.328 [INFO][6177] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" HandleID="k8s-pod-network.b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" Workload="ci--4284.0.0--n--c871d2567c-k8s-csi--node--driver--mx9xb-eth0" May 14 00:25:47.466227 containerd[2740]: 2025-05-14 00:25:47.338 [INFO][6177] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" HandleID="k8s-pod-network.b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" Workload="ci--4284.0.0--n--c871d2567c-k8s-csi--node--driver--mx9xb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004e0ca0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-n-c871d2567c", "pod":"csi-node-driver-mx9xb", "timestamp":"2025-05-14 00:25:47.328170912 +0000 UTC"}, Hostname:"ci-4284.0.0-n-c871d2567c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:25:47.466227 containerd[2740]: 2025-05-14 00:25:47.338 [INFO][6177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 14 00:25:47.466227 containerd[2740]: 2025-05-14 00:25:47.355 [INFO][6177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 00:25:47.466227 containerd[2740]: 2025-05-14 00:25:47.355 [INFO][6177] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-c871d2567c' May 14 00:25:47.466227 containerd[2740]: 2025-05-14 00:25:47.441 [INFO][6177] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.466227 containerd[2740]: 2025-05-14 00:25:47.444 [INFO][6177] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.466227 containerd[2740]: 2025-05-14 00:25:47.447 [INFO][6177] ipam/ipam.go 489: Trying affinity for 192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.466227 containerd[2740]: 2025-05-14 00:25:47.448 [INFO][6177] ipam/ipam.go 155: Attempting to load block cidr=192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.466227 containerd[2740]: 2025-05-14 00:25:47.449 [INFO][6177] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.466409 containerd[2740]: 2025-05-14 00:25:47.449 [INFO][6177] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.49.192/26 handle="k8s-pod-network.b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.466409 containerd[2740]: 2025-05-14 00:25:47.450 [INFO][6177] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7 May 14 00:25:47.466409 containerd[2740]: 2025-05-14 00:25:47.453 [INFO][6177] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.49.192/26 handle="k8s-pod-network.b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.466409 containerd[2740]: 2025-05-14 00:25:47.456 [INFO][6177] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.49.194/26] block=192.168.49.192/26 handle="k8s-pod-network.b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.466409 containerd[2740]: 2025-05-14 00:25:47.456 [INFO][6177] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.49.194/26] handle="k8s-pod-network.b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:47.466409 containerd[2740]: 2025-05-14 00:25:47.456 [INFO][6177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
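Annotation: the IPAM entries above walk a single assignment: take the host-wide lock, confirm this node's affinity for the block 192.168.49.192/26, load the block, and claim the next free address (192.168.49.194 here, after .193 went to the kube-controllers pod). The sketch below only illustrates the block arithmetic with the standard library; the CIDR and addresses come from the log, the sequential scan is an illustration rather than Calico's actual allocator, and the assumption that .192 was already in use before this request is mine (the log does not show its assignment).

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.49.192/26") // 64 addresses: .192 through .255

	// Addresses already taken before this request: .193 is visible on the
	// kube-controllers endpoint above; .192 is assumed to be held elsewhere.
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.49.192"): true,
		netip.MustParseAddr("192.168.49.193"): true,
	}

	// Walk the block and report the first unused address, mimicking the
	// "Attempting to assign 1 addresses from block" step in the log.
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			fmt.Println("next free address in", block, "is", a) // 192.168.49.194
			return
		}
	}
	fmt.Println("block", block, "is exhausted")
}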
May 14 00:25:47.466409 containerd[2740]: 2025-05-14 00:25:47.456 [INFO][6177] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.194/26] IPv6=[] ContainerID="b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" HandleID="k8s-pod-network.b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" Workload="ci--4284.0.0--n--c871d2567c-k8s-csi--node--driver--mx9xb-eth0" May 14 00:25:47.466535 containerd[2740]: 2025-05-14 00:25:47.457 [INFO][6125] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" Namespace="calico-system" Pod="csi-node-driver-mx9xb" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-csi--node--driver--mx9xb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c871d2567c-k8s-csi--node--driver--mx9xb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a", ResourceVersion:"578", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c871d2567c", ContainerID:"", Pod:"csi-node-driver-mx9xb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.49.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18aa9a334f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:25:47.466582 containerd[2740]: 2025-05-14 00:25:47.458 [INFO][6125] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.49.194/32] ContainerID="b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" Namespace="calico-system" Pod="csi-node-driver-mx9xb" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-csi--node--driver--mx9xb-eth0" May 14 00:25:47.466582 containerd[2740]: 2025-05-14 00:25:47.458 [INFO][6125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18aa9a334f6 ContainerID="b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" Namespace="calico-system" Pod="csi-node-driver-mx9xb" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-csi--node--driver--mx9xb-eth0" May 14 00:25:47.466582 containerd[2740]: 2025-05-14 00:25:47.459 [INFO][6125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" Namespace="calico-system" Pod="csi-node-driver-mx9xb" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-csi--node--driver--mx9xb-eth0" May 14 00:25:47.466646 containerd[2740]: 2025-05-14 00:25:47.459 [INFO][6125] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" Namespace="calico-system" Pod="csi-node-driver-mx9xb" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-csi--node--driver--mx9xb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c871d2567c-k8s-csi--node--driver--mx9xb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a", ResourceVersion:"578", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c871d2567c", ContainerID:"b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7", Pod:"csi-node-driver-mx9xb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.49.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18aa9a334f6", MAC:"0a:20:2a:3c:1d:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:25:47.466690 containerd[2740]: 2025-05-14 00:25:47.464 [INFO][6125] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" Namespace="calico-system" Pod="csi-node-driver-mx9xb" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-csi--node--driver--mx9xb-eth0" May 14 00:25:47.477284 containerd[2740]: time="2025-05-14T00:25:47.477257784Z" level=info msg="connecting to shim b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7" address="unix:///run/containerd/s/8894d2616483c3a0037f1ab5298886cbbf7df27ab6d0c15be00c7358d2dabd16" namespace=k8s.io protocol=ttrpc version=3 May 14 00:25:47.506555 systemd[1]: Started cri-containerd-b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7.scope - libcontainer container b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7. 
May 14 00:25:47.524023 containerd[2740]: time="2025-05-14T00:25:47.523964264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mx9xb,Uid:c3ec6e0e-bc4d-41b2-9bcc-12ee175ce28a,Namespace:calico-system,Attempt:0,} returns sandbox id \"b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7\"" May 14 00:25:48.146965 containerd[2740]: time="2025-05-14T00:25:48.146924772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:48.147280 containerd[2740]: time="2025-05-14T00:25:48.147236933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 14 00:25:48.147663 containerd[2740]: time="2025-05-14T00:25:48.147612054Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:48.149209 containerd[2740]: time="2025-05-14T00:25:48.149162179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:48.149835 containerd[2740]: time="2025-05-14T00:25:48.149800461Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 713.400537ms" May 14 00:25:48.149904 containerd[2740]: time="2025-05-14T00:25:48.149840021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 14 00:25:48.150573 containerd[2740]: time="2025-05-14T00:25:48.150552943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 00:25:48.154978 containerd[2740]: time="2025-05-14T00:25:48.154956037Z" level=info msg="CreateContainer within sandbox \"e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 00:25:48.158570 containerd[2740]: time="2025-05-14T00:25:48.158543089Z" level=info msg="Container 9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5: CDI devices from CRI Config.CDIDevices: []" May 14 00:25:48.161932 containerd[2740]: time="2025-05-14T00:25:48.161907660Z" level=info msg="CreateContainer within sandbox \"e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\"" May 14 00:25:48.162226 containerd[2740]: time="2025-05-14T00:25:48.162206021Z" level=info msg="StartContainer for \"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\"" May 14 00:25:48.163138 containerd[2740]: time="2025-05-14T00:25:48.163118104Z" level=info msg="connecting to shim 9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5" address="unix:///run/containerd/s/acbcb7956b13f57c2fcc6b3f4563a6f8708d6198cacc65f3880fa1b3bddca982" protocol=ttrpc version=3 May 14 00:25:48.180498 systemd[1]: Started 
cri-containerd-9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5.scope - libcontainer container 9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5. May 14 00:25:48.208808 containerd[2740]: time="2025-05-14T00:25:48.208746891Z" level=info msg="StartContainer for \"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" returns successfully" May 14 00:25:48.259060 containerd[2740]: time="2025-05-14T00:25:48.259031412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6599c95d-mpm9c,Uid:a5ac81ee-e702-42e2-9641-780d92758acf,Namespace:calico-apiserver,Attempt:0,}" May 14 00:25:48.323763 kubelet[4273]: I0514 00:25:48.323710 4273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5dfc87b9df-mzz2h" podStartSLOduration=15.60946472 podStartE2EDuration="16.32369398s" podCreationTimestamp="2025-05-14 00:25:32 +0000 UTC" firstStartedPulling="2025-05-14 00:25:47.436148123 +0000 UTC m=+27.251537307" lastFinishedPulling="2025-05-14 00:25:48.150377383 +0000 UTC m=+27.965766567" observedRunningTime="2025-05-14 00:25:48.323190139 +0000 UTC m=+28.138579323" watchObservedRunningTime="2025-05-14 00:25:48.32369398 +0000 UTC m=+28.139083124" May 14 00:25:48.354898 systemd-networkd[2653]: cali9d1460b5604: Link UP May 14 00:25:48.355068 systemd-networkd[2653]: cali9d1460b5604: Gained carrier May 14 00:25:48.361474 containerd[2740]: 2025-05-14 00:25:48.280 [INFO][6426] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 00:25:48.361474 containerd[2740]: 2025-05-14 00:25:48.292 [INFO][6426] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--mpm9c-eth0 calico-apiserver-7f6599c95d- calico-apiserver a5ac81ee-e702-42e2-9641-780d92758acf 669 0 2025-05-14 00:25:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f6599c95d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-c871d2567c calico-apiserver-7f6599c95d-mpm9c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9d1460b5604 [] []}} ContainerID="b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" Namespace="calico-apiserver" Pod="calico-apiserver-7f6599c95d-mpm9c" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--mpm9c-" May 14 00:25:48.361474 containerd[2740]: 2025-05-14 00:25:48.292 [INFO][6426] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" Namespace="calico-apiserver" Pod="calico-apiserver-7f6599c95d-mpm9c" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--mpm9c-eth0" May 14 00:25:48.361474 containerd[2740]: 2025-05-14 00:25:48.314 [INFO][6455] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" HandleID="k8s-pod-network.b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" Workload="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--mpm9c-eth0" May 14 00:25:48.361664 containerd[2740]: 2025-05-14 00:25:48.326 [INFO][6455] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" HandleID="k8s-pod-network.b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" Workload="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--mpm9c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000372d30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-c871d2567c", "pod":"calico-apiserver-7f6599c95d-mpm9c", "timestamp":"2025-05-14 00:25:48.314723032 +0000 UTC"}, Hostname:"ci-4284.0.0-n-c871d2567c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:25:48.361664 containerd[2740]: 2025-05-14 00:25:48.326 [INFO][6455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:25:48.361664 containerd[2740]: 2025-05-14 00:25:48.326 [INFO][6455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 00:25:48.361664 containerd[2740]: 2025-05-14 00:25:48.326 [INFO][6455] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-c871d2567c' May 14 00:25:48.361664 containerd[2740]: 2025-05-14 00:25:48.327 [INFO][6455] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:48.361664 containerd[2740]: 2025-05-14 00:25:48.330 [INFO][6455] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-c871d2567c" May 14 00:25:48.361664 containerd[2740]: 2025-05-14 00:25:48.333 [INFO][6455] ipam/ipam.go 489: Trying affinity for 192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:48.361664 containerd[2740]: 2025-05-14 00:25:48.334 [INFO][6455] ipam/ipam.go 155: Attempting to load block cidr=192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:48.361664 containerd[2740]: 2025-05-14 00:25:48.340 [INFO][6455] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:48.361865 containerd[2740]: 2025-05-14 00:25:48.340 [INFO][6455] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.49.192/26 handle="k8s-pod-network.b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:48.361865 containerd[2740]: 2025-05-14 00:25:48.344 [INFO][6455] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d May 14 00:25:48.361865 containerd[2740]: 2025-05-14 00:25:48.348 [INFO][6455] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.49.192/26 handle="k8s-pod-network.b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:48.361865 containerd[2740]: 2025-05-14 00:25:48.352 [INFO][6455] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.49.195/26] block=192.168.49.192/26 handle="k8s-pod-network.b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:48.361865 containerd[2740]: 2025-05-14 00:25:48.352 [INFO][6455] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.49.195/26] handle="k8s-pod-network.b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:48.361865 containerd[2740]: 2025-05-14 
00:25:48.352 [INFO][6455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:25:48.361865 containerd[2740]: 2025-05-14 00:25:48.352 [INFO][6455] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.195/26] IPv6=[] ContainerID="b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" HandleID="k8s-pod-network.b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" Workload="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--mpm9c-eth0" May 14 00:25:48.361992 containerd[2740]: 2025-05-14 00:25:48.353 [INFO][6426] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" Namespace="calico-apiserver" Pod="calico-apiserver-7f6599c95d-mpm9c" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--mpm9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--mpm9c-eth0", GenerateName:"calico-apiserver-7f6599c95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5ac81ee-e702-42e2-9641-780d92758acf", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6599c95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c871d2567c", ContainerID:"", Pod:"calico-apiserver-7f6599c95d-mpm9c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d1460b5604", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:25:48.362040 containerd[2740]: 2025-05-14 00:25:48.353 [INFO][6426] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.49.195/32] ContainerID="b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" Namespace="calico-apiserver" Pod="calico-apiserver-7f6599c95d-mpm9c" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--mpm9c-eth0" May 14 00:25:48.362040 containerd[2740]: 2025-05-14 00:25:48.353 [INFO][6426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d1460b5604 ContainerID="b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" Namespace="calico-apiserver" Pod="calico-apiserver-7f6599c95d-mpm9c" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--mpm9c-eth0" May 14 00:25:48.362040 containerd[2740]: 2025-05-14 00:25:48.355 [INFO][6426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" Namespace="calico-apiserver" Pod="calico-apiserver-7f6599c95d-mpm9c" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--mpm9c-eth0" May 14 00:25:48.362098 
containerd[2740]: 2025-05-14 00:25:48.355 [INFO][6426] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" Namespace="calico-apiserver" Pod="calico-apiserver-7f6599c95d-mpm9c" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--mpm9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--mpm9c-eth0", GenerateName:"calico-apiserver-7f6599c95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5ac81ee-e702-42e2-9641-780d92758acf", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6599c95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c871d2567c", ContainerID:"b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d", Pod:"calico-apiserver-7f6599c95d-mpm9c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d1460b5604", MAC:"7e:a8:bc:78:37:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:25:48.362143 containerd[2740]: 2025-05-14 00:25:48.360 [INFO][6426] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" Namespace="calico-apiserver" Pod="calico-apiserver-7f6599c95d-mpm9c" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--mpm9c-eth0" May 14 00:25:48.372281 containerd[2740]: time="2025-05-14T00:25:48.372254417Z" level=info msg="connecting to shim b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d" address="unix:///run/containerd/s/ec21a6681e11cc60471b7ffe9cadd3c957ea23364fc783cbff1ce7870261453b" namespace=k8s.io protocol=ttrpc version=3 May 14 00:25:48.402550 systemd[1]: Started cri-containerd-b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d.scope - libcontainer container b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d. 
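Annotation: every CNI ADD in this section finishes with the same "Calico CNI IPAM assigned addresses" record, so the per-endpoint IPs can be pulled straight out of the journal text. A small sketch that scans such lines with a regular expression and prints workload endpoint to assigned IP; the line format is copied from the entries above, and feeding the program journal text on standard input is an assumption about how the log would be consumed.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches records like:
	//   ... Calico CNI IPAM assigned addresses IPv4=[192.168.49.195/26] IPv6=[] ... Workload="ci--...-eth0"
	re := regexp.MustCompile(`IPAM assigned addresses IPv4=\[([0-9./]+)\].*Workload="([^"]+)"`)

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here are very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%s -> %s\n", m[2], m[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}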
May 14 00:25:48.427469 containerd[2740]: time="2025-05-14T00:25:48.427438514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6599c95d-mpm9c,Uid:a5ac81ee-e702-42e2-9641-780d92758acf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d\"" May 14 00:25:48.520985 containerd[2740]: time="2025-05-14T00:25:48.520948175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:48.521065 containerd[2740]: time="2025-05-14T00:25:48.521000735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 14 00:25:48.521654 containerd[2740]: time="2025-05-14T00:25:48.521633777Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:48.523195 containerd[2740]: time="2025-05-14T00:25:48.523174142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:48.523742 containerd[2740]: time="2025-05-14T00:25:48.523722184Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 373.139321ms" May 14 00:25:48.523777 containerd[2740]: time="2025-05-14T00:25:48.523749664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 14 00:25:48.524598 containerd[2740]: time="2025-05-14T00:25:48.524574587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 00:25:48.525364 containerd[2740]: time="2025-05-14T00:25:48.525345349Z" level=info msg="CreateContainer within sandbox \"b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 00:25:48.530400 containerd[2740]: time="2025-05-14T00:25:48.530367246Z" level=info msg="Container 7760e653d3e0c7a69088b039a7deb148272832a70cf6fae9f9d41b1e0ea2309a: CDI devices from CRI Config.CDIDevices: []" May 14 00:25:48.534659 containerd[2740]: time="2025-05-14T00:25:48.534631499Z" level=info msg="CreateContainer within sandbox \"b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7760e653d3e0c7a69088b039a7deb148272832a70cf6fae9f9d41b1e0ea2309a\"" May 14 00:25:48.534971 containerd[2740]: time="2025-05-14T00:25:48.534945740Z" level=info msg="StartContainer for \"7760e653d3e0c7a69088b039a7deb148272832a70cf6fae9f9d41b1e0ea2309a\"" May 14 00:25:48.536287 containerd[2740]: time="2025-05-14T00:25:48.536258305Z" level=info msg="connecting to shim 7760e653d3e0c7a69088b039a7deb148272832a70cf6fae9f9d41b1e0ea2309a" address="unix:///run/containerd/s/8894d2616483c3a0037f1ab5298886cbbf7df27ab6d0c15be00c7358d2dabd16" protocol=ttrpc version=3 May 14 00:25:48.561547 systemd[1]: Started cri-containerd-7760e653d3e0c7a69088b039a7deb148272832a70cf6fae9f9d41b1e0ea2309a.scope - 
libcontainer container 7760e653d3e0c7a69088b039a7deb148272832a70cf6fae9f9d41b1e0ea2309a. May 14 00:25:48.600741 containerd[2740]: time="2025-05-14T00:25:48.600708272Z" level=info msg="StartContainer for \"7760e653d3e0c7a69088b039a7deb148272832a70cf6fae9f9d41b1e0ea2309a\" returns successfully" May 14 00:25:48.768501 systemd-networkd[2653]: cali657d4951ddc: Gained IPv6LL May 14 00:25:48.832457 systemd-networkd[2653]: cali18aa9a334f6: Gained IPv6LL May 14 00:25:49.187498 containerd[2740]: time="2025-05-14T00:25:49.187456803Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"37bd30ee08464e9a28523c2cea99ef9a9046a1b1035b68c1dc9bb6bf67499728\" pid:6620 exited_at:{seconds:1747182349 nanos:185419397}" May 14 00:25:49.220787 containerd[2740]: time="2025-05-14T00:25:49.220753063Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"e28130be1d06dbe78ed1c884b28bb54f601e0c6c4eaa7cb2d7159682d9c1f029\" pid:6643 exited_at:{seconds:1747182349 nanos:220592663}" May 14 00:25:49.258915 containerd[2740]: time="2025-05-14T00:25:49.258885218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6599c95d-87vz5,Uid:a371c5ab-8a26-493c-ab8a-af3a3a0dee7b,Namespace:calico-apiserver,Attempt:0,}" May 14 00:25:49.338390 systemd-networkd[2653]: califb602fa1194: Link UP May 14 00:25:49.338725 systemd-networkd[2653]: califb602fa1194: Gained carrier May 14 00:25:49.349772 containerd[2740]: 2025-05-14 00:25:49.276 [INFO][6658] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 00:25:49.349772 containerd[2740]: 2025-05-14 00:25:49.287 [INFO][6658] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--87vz5-eth0 calico-apiserver-7f6599c95d- calico-apiserver a371c5ab-8a26-493c-ab8a-af3a3a0dee7b 672 0 2025-05-14 00:25:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f6599c95d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-c871d2567c calico-apiserver-7f6599c95d-87vz5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califb602fa1194 [] []}} ContainerID="5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" Namespace="calico-apiserver" Pod="calico-apiserver-7f6599c95d-87vz5" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--87vz5-" May 14 00:25:49.349772 containerd[2740]: 2025-05-14 00:25:49.287 [INFO][6658] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" Namespace="calico-apiserver" Pod="calico-apiserver-7f6599c95d-87vz5" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--87vz5-eth0" May 14 00:25:49.349772 containerd[2740]: 2025-05-14 00:25:49.309 [INFO][6682] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" HandleID="k8s-pod-network.5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" Workload="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--87vz5-eth0" May 14 00:25:49.350184 containerd[2740]: 
2025-05-14 00:25:49.319 [INFO][6682] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" HandleID="k8s-pod-network.5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" Workload="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--87vz5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006303d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-c871d2567c", "pod":"calico-apiserver-7f6599c95d-87vz5", "timestamp":"2025-05-14 00:25:49.309789732 +0000 UTC"}, Hostname:"ci-4284.0.0-n-c871d2567c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:25:49.350184 containerd[2740]: 2025-05-14 00:25:49.319 [INFO][6682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:25:49.350184 containerd[2740]: 2025-05-14 00:25:49.319 [INFO][6682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 00:25:49.350184 containerd[2740]: 2025-05-14 00:25:49.319 [INFO][6682] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-c871d2567c' May 14 00:25:49.350184 containerd[2740]: 2025-05-14 00:25:49.320 [INFO][6682] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:49.350184 containerd[2740]: 2025-05-14 00:25:49.322 [INFO][6682] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-c871d2567c" May 14 00:25:49.350184 containerd[2740]: 2025-05-14 00:25:49.325 [INFO][6682] ipam/ipam.go 489: Trying affinity for 192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:49.350184 containerd[2740]: 2025-05-14 00:25:49.327 [INFO][6682] ipam/ipam.go 155: Attempting to load block cidr=192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:49.350184 containerd[2740]: 2025-05-14 00:25:49.328 [INFO][6682] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:49.350435 containerd[2740]: 2025-05-14 00:25:49.328 [INFO][6682] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.49.192/26 handle="k8s-pod-network.5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:49.350435 containerd[2740]: 2025-05-14 00:25:49.329 [INFO][6682] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36 May 14 00:25:49.350435 containerd[2740]: 2025-05-14 00:25:49.331 [INFO][6682] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.49.192/26 handle="k8s-pod-network.5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:49.350435 containerd[2740]: 2025-05-14 00:25:49.335 [INFO][6682] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.49.196/26] block=192.168.49.192/26 handle="k8s-pod-network.5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:49.350435 containerd[2740]: 2025-05-14 00:25:49.335 [INFO][6682] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.49.196/26] handle="k8s-pod-network.5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" 
host="ci-4284.0.0-n-c871d2567c" May 14 00:25:49.350435 containerd[2740]: 2025-05-14 00:25:49.335 [INFO][6682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:25:49.350435 containerd[2740]: 2025-05-14 00:25:49.335 [INFO][6682] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.196/26] IPv6=[] ContainerID="5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" HandleID="k8s-pod-network.5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" Workload="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--87vz5-eth0" May 14 00:25:49.350561 containerd[2740]: 2025-05-14 00:25:49.337 [INFO][6658] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" Namespace="calico-apiserver" Pod="calico-apiserver-7f6599c95d-87vz5" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--87vz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--87vz5-eth0", GenerateName:"calico-apiserver-7f6599c95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a371c5ab-8a26-493c-ab8a-af3a3a0dee7b", ResourceVersion:"672", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6599c95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c871d2567c", ContainerID:"", Pod:"calico-apiserver-7f6599c95d-87vz5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb602fa1194", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:25:49.350610 containerd[2740]: 2025-05-14 00:25:49.337 [INFO][6658] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.49.196/32] ContainerID="5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" Namespace="calico-apiserver" Pod="calico-apiserver-7f6599c95d-87vz5" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--87vz5-eth0" May 14 00:25:49.350610 containerd[2740]: 2025-05-14 00:25:49.337 [INFO][6658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb602fa1194 ContainerID="5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" Namespace="calico-apiserver" Pod="calico-apiserver-7f6599c95d-87vz5" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--87vz5-eth0" May 14 00:25:49.350610 containerd[2740]: 2025-05-14 00:25:49.338 [INFO][6658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" Namespace="calico-apiserver" Pod="calico-apiserver-7f6599c95d-87vz5" 
WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--87vz5-eth0" May 14 00:25:49.350668 containerd[2740]: 2025-05-14 00:25:49.338 [INFO][6658] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" Namespace="calico-apiserver" Pod="calico-apiserver-7f6599c95d-87vz5" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--87vz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--87vz5-eth0", GenerateName:"calico-apiserver-7f6599c95d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a371c5ab-8a26-493c-ab8a-af3a3a0dee7b", ResourceVersion:"672", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 25, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6599c95d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c871d2567c", ContainerID:"5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36", Pod:"calico-apiserver-7f6599c95d-87vz5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb602fa1194", MAC:"ba:6a:5c:3d:e7:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:25:49.350715 containerd[2740]: 2025-05-14 00:25:49.348 [INFO][6658] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" Namespace="calico-apiserver" Pod="calico-apiserver-7f6599c95d-87vz5" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-calico--apiserver--7f6599c95d--87vz5-eth0" May 14 00:25:49.360996 containerd[2740]: time="2025-05-14T00:25:49.360964446Z" level=info msg="connecting to shim 5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36" address="unix:///run/containerd/s/611cb3dc43f684036d403768f94bc394c50d76d4ca7b69785ac02f6d8c0bd73a" namespace=k8s.io protocol=ttrpc version=3 May 14 00:25:49.383487 systemd[1]: Started cri-containerd-5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36.scope - libcontainer container 5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36. 
May 14 00:25:49.415747 containerd[2740]: time="2025-05-14T00:25:49.415718731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6599c95d-87vz5,Uid:a371c5ab-8a26-493c-ab8a-af3a3a0dee7b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36\"" May 14 00:25:49.508452 containerd[2740]: time="2025-05-14T00:25:49.508368011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:49.508516 containerd[2740]: time="2025-05-14T00:25:49.508380331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 14 00:25:49.509103 containerd[2740]: time="2025-05-14T00:25:49.509080293Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:49.510685 containerd[2740]: time="2025-05-14T00:25:49.510665618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:49.511295 containerd[2740]: time="2025-05-14T00:25:49.511275820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 986.668193ms" May 14 00:25:49.511425 containerd[2740]: time="2025-05-14T00:25:49.511300460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 14 00:25:49.512020 containerd[2740]: time="2025-05-14T00:25:49.511999062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 00:25:49.512830 containerd[2740]: time="2025-05-14T00:25:49.512806184Z" level=info msg="CreateContainer within sandbox \"b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 00:25:49.516255 containerd[2740]: time="2025-05-14T00:25:49.516226635Z" level=info msg="Container 7ec883c7bbd72b080e4e321b849d92cc1a28a25fa01249a96a70d920ad01ca88: CDI devices from CRI Config.CDIDevices: []" May 14 00:25:49.519664 containerd[2740]: time="2025-05-14T00:25:49.519637005Z" level=info msg="CreateContainer within sandbox \"b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7ec883c7bbd72b080e4e321b849d92cc1a28a25fa01249a96a70d920ad01ca88\"" May 14 00:25:49.519979 containerd[2740]: time="2025-05-14T00:25:49.519960126Z" level=info msg="StartContainer for \"7ec883c7bbd72b080e4e321b849d92cc1a28a25fa01249a96a70d920ad01ca88\"" May 14 00:25:49.520917 containerd[2740]: time="2025-05-14T00:25:49.520896569Z" level=info msg="connecting to shim 7ec883c7bbd72b080e4e321b849d92cc1a28a25fa01249a96a70d920ad01ca88" address="unix:///run/containerd/s/ec21a6681e11cc60471b7ffe9cadd3c957ea23364fc783cbff1ce7870261453b" protocol=ttrpc version=3 May 14 00:25:49.539482 systemd[1]: Started 
cri-containerd-7ec883c7bbd72b080e4e321b849d92cc1a28a25fa01249a96a70d920ad01ca88.scope - libcontainer container 7ec883c7bbd72b080e4e321b849d92cc1a28a25fa01249a96a70d920ad01ca88. May 14 00:25:49.567439 containerd[2740]: time="2025-05-14T00:25:49.567414509Z" level=info msg="StartContainer for \"7ec883c7bbd72b080e4e321b849d92cc1a28a25fa01249a96a70d920ad01ca88\" returns successfully" May 14 00:25:49.974745 containerd[2740]: time="2025-05-14T00:25:49.974697538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:49.974880 containerd[2740]: time="2025-05-14T00:25:49.974759018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 14 00:25:49.975434 containerd[2740]: time="2025-05-14T00:25:49.975408860Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:49.980544 containerd[2740]: time="2025-05-14T00:25:49.980511796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:49.981209 containerd[2740]: time="2025-05-14T00:25:49.981182558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 469.149335ms" May 14 00:25:49.981231 containerd[2740]: time="2025-05-14T00:25:49.981216158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 14 00:25:49.981970 containerd[2740]: time="2025-05-14T00:25:49.981948440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 00:25:49.982843 containerd[2740]: time="2025-05-14T00:25:49.982819722Z" level=info msg="CreateContainer within sandbox \"b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 00:25:49.987093 containerd[2740]: time="2025-05-14T00:25:49.987065415Z" level=info msg="Container ecb215db3a0233319d29781969b96d5be546c181889b3b178d1b32ebd825e344: CDI devices from CRI Config.CDIDevices: []" May 14 00:25:49.991512 containerd[2740]: time="2025-05-14T00:25:49.991478949Z" level=info msg="CreateContainer within sandbox \"b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ecb215db3a0233319d29781969b96d5be546c181889b3b178d1b32ebd825e344\"" May 14 00:25:49.991797 containerd[2740]: time="2025-05-14T00:25:49.991770629Z" level=info msg="StartContainer for \"ecb215db3a0233319d29781969b96d5be546c181889b3b178d1b32ebd825e344\"" May 14 00:25:49.993062 containerd[2740]: time="2025-05-14T00:25:49.993038033Z" level=info msg="connecting to shim ecb215db3a0233319d29781969b96d5be546c181889b3b178d1b32ebd825e344" 
address="unix:///run/containerd/s/8894d2616483c3a0037f1ab5298886cbbf7df27ab6d0c15be00c7358d2dabd16" protocol=ttrpc version=3 May 14 00:25:50.014535 systemd[1]: Started cri-containerd-ecb215db3a0233319d29781969b96d5be546c181889b3b178d1b32ebd825e344.scope - libcontainer container ecb215db3a0233319d29781969b96d5be546c181889b3b178d1b32ebd825e344. May 14 00:25:50.041998 containerd[2740]: time="2025-05-14T00:25:50.041969373Z" level=info msg="StartContainer for \"ecb215db3a0233319d29781969b96d5be546c181889b3b178d1b32ebd825e344\" returns successfully" May 14 00:25:50.064950 containerd[2740]: time="2025-05-14T00:25:50.064891758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 14 00:25:50.065010 containerd[2740]: time="2025-05-14T00:25:50.064905838Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:25:50.067161 containerd[2740]: time="2025-05-14T00:25:50.067130004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 85.149124ms" May 14 00:25:50.067206 containerd[2740]: time="2025-05-14T00:25:50.067162124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 14 00:25:50.068939 containerd[2740]: time="2025-05-14T00:25:50.068917209Z" level=info msg="CreateContainer within sandbox \"5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 00:25:50.072317 containerd[2740]: time="2025-05-14T00:25:50.072288539Z" level=info msg="Container 4f7ba79604349154910afcc4d174b565ae482af195f21e0ab5a2062c586f9dc0: CDI devices from CRI Config.CDIDevices: []" May 14 00:25:50.075601 containerd[2740]: time="2025-05-14T00:25:50.075576548Z" level=info msg="CreateContainer within sandbox \"5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4f7ba79604349154910afcc4d174b565ae482af195f21e0ab5a2062c586f9dc0\"" May 14 00:25:50.075932 containerd[2740]: time="2025-05-14T00:25:50.075900989Z" level=info msg="StartContainer for \"4f7ba79604349154910afcc4d174b565ae482af195f21e0ab5a2062c586f9dc0\"" May 14 00:25:50.076900 containerd[2740]: time="2025-05-14T00:25:50.076874992Z" level=info msg="connecting to shim 4f7ba79604349154910afcc4d174b565ae482af195f21e0ab5a2062c586f9dc0" address="unix:///run/containerd/s/611cb3dc43f684036d403768f94bc394c50d76d4ca7b69785ac02f6d8c0bd73a" protocol=ttrpc version=3 May 14 00:25:50.097548 systemd[1]: Started cri-containerd-4f7ba79604349154910afcc4d174b565ae482af195f21e0ab5a2062c586f9dc0.scope - libcontainer container 4f7ba79604349154910afcc4d174b565ae482af195f21e0ab5a2062c586f9dc0. 
May 14 00:25:50.125471 containerd[2740]: time="2025-05-14T00:25:50.125446649Z" level=info msg="StartContainer for \"4f7ba79604349154910afcc4d174b565ae482af195f21e0ab5a2062c586f9dc0\" returns successfully" May 14 00:25:50.259803 containerd[2740]: time="2025-05-14T00:25:50.259712629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tplq2,Uid:7d50dd31-4298-4702-84aa-1d607e2c54f6,Namespace:kube-system,Attempt:0,}" May 14 00:25:50.299594 kubelet[4273]: I0514 00:25:50.299572 4273 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 14 00:25:50.299826 kubelet[4273]: I0514 00:25:50.299598 4273 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 14 00:25:50.305624 systemd-networkd[2653]: cali9d1460b5604: Gained IPv6LL May 14 00:25:50.333152 kubelet[4273]: I0514 00:25:50.333105 4273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mx9xb" podStartSLOduration=15.876066145 podStartE2EDuration="18.333090597s" podCreationTimestamp="2025-05-14 00:25:32 +0000 UTC" firstStartedPulling="2025-05-14 00:25:47.524776027 +0000 UTC m=+27.340165211" lastFinishedPulling="2025-05-14 00:25:49.981800479 +0000 UTC m=+29.797189663" observedRunningTime="2025-05-14 00:25:50.332717516 +0000 UTC m=+30.148106740" watchObservedRunningTime="2025-05-14 00:25:50.333090597 +0000 UTC m=+30.148479781" May 14 00:25:50.339670 kubelet[4273]: I0514 00:25:50.339625 4273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f6599c95d-mpm9c" podStartSLOduration=17.25600299 podStartE2EDuration="18.339610335s" podCreationTimestamp="2025-05-14 00:25:32 +0000 UTC" firstStartedPulling="2025-05-14 00:25:48.428304597 +0000 UTC m=+28.243693781" lastFinishedPulling="2025-05-14 00:25:49.511911942 +0000 UTC m=+29.327301126" observedRunningTime="2025-05-14 00:25:50.339307054 +0000 UTC m=+30.154696278" watchObservedRunningTime="2025-05-14 00:25:50.339610335 +0000 UTC m=+30.154999519" May 14 00:25:50.346531 kubelet[4273]: I0514 00:25:50.346489 4273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f6599c95d-87vz5" podStartSLOduration=17.695367362 podStartE2EDuration="18.346475234s" podCreationTimestamp="2025-05-14 00:25:32 +0000 UTC" firstStartedPulling="2025-05-14 00:25:49.416531054 +0000 UTC m=+29.231920238" lastFinishedPulling="2025-05-14 00:25:50.067638926 +0000 UTC m=+29.883028110" observedRunningTime="2025-05-14 00:25:50.346192714 +0000 UTC m=+30.161581898" watchObservedRunningTime="2025-05-14 00:25:50.346475234 +0000 UTC m=+30.161864418" May 14 00:25:50.351049 systemd-networkd[2653]: calib33f1060f99: Link UP May 14 00:25:50.351205 systemd-networkd[2653]: calib33f1060f99: Gained carrier May 14 00:25:50.358419 containerd[2740]: 2025-05-14 00:25:50.277 [INFO][6955] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 00:25:50.358419 containerd[2740]: 2025-05-14 00:25:50.288 [INFO][6955] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--tplq2-eth0 coredns-668d6bf9bc- kube-system 7d50dd31-4298-4702-84aa-1d607e2c54f6 671 0 2025-05-14 00:25:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-n-c871d2567c coredns-668d6bf9bc-tplq2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib33f1060f99 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" Namespace="kube-system" Pod="coredns-668d6bf9bc-tplq2" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--tplq2-" May 14 00:25:50.358419 containerd[2740]: 2025-05-14 00:25:50.288 [INFO][6955] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" Namespace="kube-system" Pod="coredns-668d6bf9bc-tplq2" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--tplq2-eth0" May 14 00:25:50.358419 containerd[2740]: 2025-05-14 00:25:50.311 [INFO][6979] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" HandleID="k8s-pod-network.f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" Workload="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--tplq2-eth0" May 14 00:25:50.358750 containerd[2740]: 2025-05-14 00:25:50.320 [INFO][6979] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" HandleID="k8s-pod-network.f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" Workload="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--tplq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003caa90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-n-c871d2567c", "pod":"coredns-668d6bf9bc-tplq2", "timestamp":"2025-05-14 00:25:50.311343975 +0000 UTC"}, Hostname:"ci-4284.0.0-n-c871d2567c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:25:50.358750 containerd[2740]: 2025-05-14 00:25:50.320 [INFO][6979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:25:50.358750 containerd[2740]: 2025-05-14 00:25:50.320 [INFO][6979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:25:50.358750 containerd[2740]: 2025-05-14 00:25:50.320 [INFO][6979] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-c871d2567c' May 14 00:25:50.358750 containerd[2740]: 2025-05-14 00:25:50.321 [INFO][6979] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:50.358750 containerd[2740]: 2025-05-14 00:25:50.324 [INFO][6979] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-c871d2567c" May 14 00:25:50.358750 containerd[2740]: 2025-05-14 00:25:50.333 [INFO][6979] ipam/ipam.go 489: Trying affinity for 192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:50.358750 containerd[2740]: 2025-05-14 00:25:50.337 [INFO][6979] ipam/ipam.go 155: Attempting to load block cidr=192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:50.358750 containerd[2740]: 2025-05-14 00:25:50.339 [INFO][6979] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:50.358969 containerd[2740]: 2025-05-14 00:25:50.339 [INFO][6979] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.49.192/26 handle="k8s-pod-network.f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:50.358969 containerd[2740]: 2025-05-14 00:25:50.340 [INFO][6979] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167 May 14 00:25:50.358969 containerd[2740]: 2025-05-14 00:25:50.343 [INFO][6979] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.49.192/26 handle="k8s-pod-network.f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:50.358969 containerd[2740]: 2025-05-14 00:25:50.347 [INFO][6979] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.49.197/26] block=192.168.49.192/26 handle="k8s-pod-network.f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:50.358969 containerd[2740]: 2025-05-14 00:25:50.347 [INFO][6979] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.49.197/26] handle="k8s-pod-network.f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:50.358969 containerd[2740]: 2025-05-14 00:25:50.347 [INFO][6979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
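The ipam entries above walk through one complete assignment: the plugin takes the host-wide IPAM lock, confirms the block affinity for 192.168.49.192/26, claims the next free address in that block (192.168.49.197), and releases the lock. The Go sketch below is a minimal, self-contained illustration of that claim step only; it does not use the real libcalico-go client, and the in-memory block representation and the set of already-used addresses are assumptions made purely to reproduce the result seen in the log.

package main

import (
	"fmt"
	"net"
	"sync"
)

// block models one IPAM block (e.g. 192.168.49.192/26) with a simple
// "already allocated" set. The real Calico allocator keeps this state in the
// datastore and serializes access with the host-wide lock seen in the log;
// the in-memory mutex here only stands in for that serialization.
type block struct {
	mu        sync.Mutex
	cidr      *net.IPNet
	allocated map[string]bool
}

// assignNext claims the next free IPv4 address in the block, mirroring the
// "Attempting to assign 1 addresses from block" step in the log.
func (b *block) assignNext() (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()

	ip := b.cidr.IP.Mask(b.cidr.Mask).To4()
	for ; b.cidr.Contains(ip); ip = nextIP(ip) {
		if !b.allocated[ip.String()] {
			b.allocated[ip.String()] = true
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s is full", b.cidr)
}

// nextIP returns the address one higher than ip, carrying across octets.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.49.192/26")
	// Assume the first five addresses of the block are already in use,
	// so the next free one matches the assignment in the log.
	b := &block{cidr: cidr, allocated: map[string]bool{
		"192.168.49.192": true, "192.168.49.193": true, "192.168.49.194": true,
		"192.168.49.195": true, "192.168.49.196": true,
	}}
	ip, err := b.assignNext()
	fmt.Println(ip, err) // 192.168.49.197 <nil>, the address given to coredns-668d6bf9bc-tplq2
}

With the first five addresses of the block marked as used, the sketch hands out 192.168.49.197, the same address the plugin assigns to coredns-668d6bf9bc-tplq2 above.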
May 14 00:25:50.358969 containerd[2740]: 2025-05-14 00:25:50.347 [INFO][6979] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.197/26] IPv6=[] ContainerID="f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" HandleID="k8s-pod-network.f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" Workload="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--tplq2-eth0" May 14 00:25:50.359096 containerd[2740]: 2025-05-14 00:25:50.348 [INFO][6955] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" Namespace="kube-system" Pod="coredns-668d6bf9bc-tplq2" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--tplq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--tplq2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7d50dd31-4298-4702-84aa-1d607e2c54f6", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 25, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c871d2567c", ContainerID:"", Pod:"coredns-668d6bf9bc-tplq2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib33f1060f99", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:25:50.359096 containerd[2740]: 2025-05-14 00:25:50.349 [INFO][6955] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.49.197/32] ContainerID="f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" Namespace="kube-system" Pod="coredns-668d6bf9bc-tplq2" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--tplq2-eth0" May 14 00:25:50.359096 containerd[2740]: 2025-05-14 00:25:50.349 [INFO][6955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib33f1060f99 ContainerID="f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" Namespace="kube-system" Pod="coredns-668d6bf9bc-tplq2" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--tplq2-eth0" May 14 00:25:50.359096 containerd[2740]: 2025-05-14 00:25:50.351 [INFO][6955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" Namespace="kube-system" Pod="coredns-668d6bf9bc-tplq2" 
WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--tplq2-eth0" May 14 00:25:50.359096 containerd[2740]: 2025-05-14 00:25:50.351 [INFO][6955] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" Namespace="kube-system" Pod="coredns-668d6bf9bc-tplq2" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--tplq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--tplq2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7d50dd31-4298-4702-84aa-1d607e2c54f6", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 25, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c871d2567c", ContainerID:"f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167", Pod:"coredns-668d6bf9bc-tplq2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib33f1060f99", MAC:"c2:bc:ef:bd:12:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:25:50.359096 containerd[2740]: 2025-05-14 00:25:50.356 [INFO][6955] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" Namespace="kube-system" Pod="coredns-668d6bf9bc-tplq2" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--tplq2-eth0" May 14 00:25:50.370613 containerd[2740]: time="2025-05-14T00:25:50.370581383Z" level=info msg="connecting to shim f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167" address="unix:///run/containerd/s/128311ce629b66050e5ee8c3c55f710bc4c66a989ca5fa4f469a71c38cba348b" namespace=k8s.io protocol=ttrpc version=3 May 14 00:25:50.396551 systemd[1]: Started cri-containerd-f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167.scope - libcontainer container f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167. 
May 14 00:25:50.421516 containerd[2740]: time="2025-05-14T00:25:50.421488687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tplq2,Uid:7d50dd31-4298-4702-84aa-1d607e2c54f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167\"" May 14 00:25:50.423376 containerd[2740]: time="2025-05-14T00:25:50.423345772Z" level=info msg="CreateContainer within sandbox \"f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:25:50.427782 containerd[2740]: time="2025-05-14T00:25:50.427751584Z" level=info msg="Container d037a48a867ea1bb2a3f7579f508606e55af601cc1598b304a8333b5f563959d: CDI devices from CRI Config.CDIDevices: []" May 14 00:25:50.431044 containerd[2740]: time="2025-05-14T00:25:50.430924913Z" level=info msg="CreateContainer within sandbox \"f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d037a48a867ea1bb2a3f7579f508606e55af601cc1598b304a8333b5f563959d\"" May 14 00:25:50.432078 containerd[2740]: time="2025-05-14T00:25:50.432053677Z" level=info msg="StartContainer for \"d037a48a867ea1bb2a3f7579f508606e55af601cc1598b304a8333b5f563959d\"" May 14 00:25:50.432438 systemd-networkd[2653]: califb602fa1194: Gained IPv6LL May 14 00:25:50.433531 containerd[2740]: time="2025-05-14T00:25:50.433506281Z" level=info msg="connecting to shim d037a48a867ea1bb2a3f7579f508606e55af601cc1598b304a8333b5f563959d" address="unix:///run/containerd/s/128311ce629b66050e5ee8c3c55f710bc4c66a989ca5fa4f469a71c38cba348b" protocol=ttrpc version=3 May 14 00:25:50.453478 systemd[1]: Started cri-containerd-d037a48a867ea1bb2a3f7579f508606e55af601cc1598b304a8333b5f563959d.scope - libcontainer container d037a48a867ea1bb2a3f7579f508606e55af601cc1598b304a8333b5f563959d. 
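The kubelet pod_startup_latency_tracker entries earlier in this log report two figures per pod, podStartE2EDuration and podStartSLOduration. From the timestamps in the csi-node-driver-mx9xb entry, the SLO figure appears to be the end-to-end duration minus the image-pull window: 18.333090597s minus the 2.457024452s between firstStartedPulling and lastFinishedPulling gives exactly the logged 15.876066145s, and for pods that pulled nothing (such as the coredns pods later in the log) the two figures are equal. The Go check below reproduces that arithmetic with timestamps copied from the log; the interpretation is an inference from these numbers, not a description of the kubelet's implementation.

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the csi-node-driver-mx9xb entry in the log.
	created := mustParse("2025-05-14 00:25:32 +0000 UTC")
	firstPull := mustParse("2025-05-14 00:25:47.524776027 +0000 UTC")
	lastPull := mustParse("2025-05-14 00:25:49.981800479 +0000 UTC")
	running := mustParse("2025-05-14 00:25:50.333090597 +0000 UTC")

	e2e := running.Sub(created)          // 18.333090597s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 15.876066145s, the logged podStartSLOduration
	fmt.Println(e2e, slo)
}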
May 14 00:25:50.473569 containerd[2740]: time="2025-05-14T00:25:50.473540034Z" level=info msg="StartContainer for \"d037a48a867ea1bb2a3f7579f508606e55af601cc1598b304a8333b5f563959d\" returns successfully" May 14 00:25:51.126335 kubelet[4273]: I0514 00:25:51.126294 4273 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:25:51.182464 containerd[2740]: time="2025-05-14T00:25:51.182428807Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"6d6bb882e71812262941c513dc612fa8ac1eebd815c7f2203d307a97fd5c9e48\" pid:7188 exit_status:1 exited_at:{seconds:1747182351 nanos:182192406}" May 14 00:25:51.241332 containerd[2740]: time="2025-05-14T00:25:51.241290763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"cb64b852285a216e6f0039043f2c8a38fb8994c0beee2d5d5ea4af9dba2f48a1\" pid:7223 exit_status:1 exited_at:{seconds:1747182351 nanos:241055922}" May 14 00:25:51.258942 containerd[2740]: time="2025-05-14T00:25:51.258914170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6kln,Uid:85f34c01-961a-4637-a5ea-c120695df56f,Namespace:kube-system,Attempt:0,}" May 14 00:25:51.329798 kubelet[4273]: I0514 00:25:51.329769 4273 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:25:51.329798 kubelet[4273]: I0514 00:25:51.329794 4273 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:25:51.337535 kubelet[4273]: I0514 00:25:51.337491 4273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tplq2" podStartSLOduration=25.337476378 podStartE2EDuration="25.337476378s" podCreationTimestamp="2025-05-14 00:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:25:51.337190497 +0000 UTC m=+31.152579681" watchObservedRunningTime="2025-05-14 00:25:51.337476378 +0000 UTC m=+31.152865562" May 14 00:25:51.341057 systemd-networkd[2653]: cali84fe76799b2: Link UP May 14 00:25:51.341188 systemd-networkd[2653]: cali84fe76799b2: Gained carrier May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.276 [INFO][7250] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.287 [INFO][7250] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--s6kln-eth0 coredns-668d6bf9bc- kube-system 85f34c01-961a-4637-a5ea-c120695df56f 667 0 2025-05-14 00:25:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-n-c871d2567c coredns-668d6bf9bc-s6kln eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali84fe76799b2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6kln" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--s6kln-" May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.287 [INFO][7250] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6kln" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--s6kln-eth0" May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.310 [INFO][7278] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" HandleID="k8s-pod-network.ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" Workload="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--s6kln-eth0" May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.320 [INFO][7278] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" HandleID="k8s-pod-network.ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" Workload="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--s6kln-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400081edd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-n-c871d2567c", "pod":"coredns-668d6bf9bc-s6kln", "timestamp":"2025-05-14 00:25:51.310610307 +0000 UTC"}, Hostname:"ci-4284.0.0-n-c871d2567c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.320 [INFO][7278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.320 [INFO][7278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.320 [INFO][7278] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-c871d2567c' May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.321 [INFO][7278] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.324 [INFO][7278] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-c871d2567c" May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.327 [INFO][7278] ipam/ipam.go 489: Trying affinity for 192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.328 [INFO][7278] ipam/ipam.go 155: Attempting to load block cidr=192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.330 [INFO][7278] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.49.192/26 host="ci-4284.0.0-n-c871d2567c" May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.330 [INFO][7278] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.49.192/26 handle="k8s-pod-network.ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.331 [INFO][7278] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.333 [INFO][7278] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.49.192/26 
handle="k8s-pod-network.ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.338 [INFO][7278] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.49.198/26] block=192.168.49.192/26 handle="k8s-pod-network.ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.338 [INFO][7278] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.49.198/26] handle="k8s-pod-network.ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" host="ci-4284.0.0-n-c871d2567c" May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.338 [INFO][7278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:25:51.347998 containerd[2740]: 2025-05-14 00:25:51.338 [INFO][7278] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.198/26] IPv6=[] ContainerID="ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" HandleID="k8s-pod-network.ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" Workload="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--s6kln-eth0" May 14 00:25:51.348467 containerd[2740]: 2025-05-14 00:25:51.339 [INFO][7250] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6kln" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--s6kln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--s6kln-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"85f34c01-961a-4637-a5ea-c120695df56f", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 25, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c871d2567c", ContainerID:"", Pod:"coredns-668d6bf9bc-s6kln", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84fe76799b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:25:51.348467 containerd[2740]: 2025-05-14 00:25:51.340 [INFO][7250] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.49.198/32] ContainerID="ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-s6kln" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--s6kln-eth0" May 14 00:25:51.348467 containerd[2740]: 2025-05-14 00:25:51.340 [INFO][7250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84fe76799b2 ContainerID="ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6kln" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--s6kln-eth0" May 14 00:25:51.348467 containerd[2740]: 2025-05-14 00:25:51.341 [INFO][7250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6kln" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--s6kln-eth0" May 14 00:25:51.348467 containerd[2740]: 2025-05-14 00:25:51.341 [INFO][7250] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6kln" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--s6kln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--s6kln-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"85f34c01-961a-4637-a5ea-c120695df56f", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 25, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c871d2567c", ContainerID:"ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee", Pod:"coredns-668d6bf9bc-s6kln", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84fe76799b2", MAC:"86:3c:74:55:f2:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:25:51.348467 containerd[2740]: 2025-05-14 00:25:51.346 [INFO][7250] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" Namespace="kube-system" Pod="coredns-668d6bf9bc-s6kln" WorkloadEndpoint="ci--4284.0.0--n--c871d2567c-k8s-coredns--668d6bf9bc--s6kln-eth0" May 14 00:25:51.360043 containerd[2740]: time="2025-05-14T00:25:51.360016518Z" level=info msg="connecting 
to shim ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee" address="unix:///run/containerd/s/a7f256b77776dbd55f85f0dac304a99cdc6f75887220d8cfdecc4a12b1fe5386" namespace=k8s.io protocol=ttrpc version=3 May 14 00:25:51.390481 systemd[1]: Started cri-containerd-ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee.scope - libcontainer container ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee. May 14 00:25:51.415200 containerd[2740]: time="2025-05-14T00:25:51.415171744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s6kln,Uid:85f34c01-961a-4637-a5ea-c120695df56f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee\"" May 14 00:25:51.416958 containerd[2740]: time="2025-05-14T00:25:51.416933989Z" level=info msg="CreateContainer within sandbox \"ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:25:51.421260 containerd[2740]: time="2025-05-14T00:25:51.421233760Z" level=info msg="Container e59ea666fd4bb99cf7c20e0c200f631cbb6aa61d2bcc3e548f5adf82b1620d07: CDI devices from CRI Config.CDIDevices: []" May 14 00:25:51.423865 containerd[2740]: time="2025-05-14T00:25:51.423838847Z" level=info msg="CreateContainer within sandbox \"ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e59ea666fd4bb99cf7c20e0c200f631cbb6aa61d2bcc3e548f5adf82b1620d07\"" May 14 00:25:51.424166 containerd[2740]: time="2025-05-14T00:25:51.424144568Z" level=info msg="StartContainer for \"e59ea666fd4bb99cf7c20e0c200f631cbb6aa61d2bcc3e548f5adf82b1620d07\"" May 14 00:25:51.424873 containerd[2740]: time="2025-05-14T00:25:51.424850970Z" level=info msg="connecting to shim e59ea666fd4bb99cf7c20e0c200f631cbb6aa61d2bcc3e548f5adf82b1620d07" address="unix:///run/containerd/s/a7f256b77776dbd55f85f0dac304a99cdc6f75887220d8cfdecc4a12b1fe5386" protocol=ttrpc version=3 May 14 00:25:51.447541 systemd[1]: Started cri-containerd-e59ea666fd4bb99cf7c20e0c200f631cbb6aa61d2bcc3e548f5adf82b1620d07.scope - libcontainer container e59ea666fd4bb99cf7c20e0c200f631cbb6aa61d2bcc3e548f5adf82b1620d07. 
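The TaskExit events in this log (for example the two for container c683fce9... just above, and the many that follow) record the exit time twice: once as the RFC 3339 time= field at the front of the message and once as an exited_at pair of epoch seconds and nanoseconds. Converting the epoch pair back to UTC, as in the short check below, shows the two agree; the handler's own time= stamp is simply logged a fraction of a millisecond later.

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at values copied from the 00:25:51 TaskExit entry for
	// container c683fce9... above.
	exited := time.Unix(1747182351, 182192406).UTC()
	fmt.Println(exited) // 2025-05-14 00:25:51.182192406 +0000 UTC
}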
May 14 00:25:51.466940 containerd[2740]: time="2025-05-14T00:25:51.466914561Z" level=info msg="StartContainer for \"e59ea666fd4bb99cf7c20e0c200f631cbb6aa61d2bcc3e548f5adf82b1620d07\" returns successfully" May 14 00:25:52.032509 systemd-networkd[2653]: calib33f1060f99: Gained IPv6LL May 14 00:25:52.340208 kubelet[4273]: I0514 00:25:52.340113 4273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s6kln" podStartSLOduration=26.34009782 podStartE2EDuration="26.34009782s" podCreationTimestamp="2025-05-14 00:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:25:52.339662019 +0000 UTC m=+32.155051203" watchObservedRunningTime="2025-05-14 00:25:52.34009782 +0000 UTC m=+32.155487004" May 14 00:25:53.376490 systemd-networkd[2653]: cali84fe76799b2: Gained IPv6LL May 14 00:25:53.500500 kubelet[4273]: I0514 00:25:53.500385 4273 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:25:53.708399 kernel: bpftool[7521]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 14 00:25:53.865165 systemd-networkd[2653]: vxlan.calico: Link UP May 14 00:25:53.865169 systemd-networkd[2653]: vxlan.calico: Gained carrier May 14 00:25:55.808487 systemd-networkd[2653]: vxlan.calico: Gained IPv6LL May 14 00:26:16.987474 containerd[2740]: time="2025-05-14T00:26:16.987398505Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"a6c34148e242953e9cab9b6e1bd0ce8f33750da059682a257ded13f62e1c9b0d\" pid:7877 exited_at:{seconds:1747182376 nanos:987180425}" May 14 00:26:19.220857 containerd[2740]: time="2025-05-14T00:26:19.220822128Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"3a06e6d409e3addf812a3c14b01a3707d5a82b9179a0482303c1688c9006e50f\" pid:7898 exited_at:{seconds:1747182379 nanos:220668888}" May 14 00:26:21.233414 containerd[2740]: time="2025-05-14T00:26:21.233365004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"f54c841d85b99634e1d9a1e7e3815d534e31fa206cb4d9f462378cc76cc526f5\" pid:7922 exited_at:{seconds:1747182381 nanos:233122764}" May 14 00:26:25.200540 kubelet[4273]: I0514 00:26:25.200441 4273 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:26:26.135073 kubelet[4273]: I0514 00:26:26.135032 4273 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:26:49.222035 containerd[2740]: time="2025-05-14T00:26:49.221990894Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"5a63393b1fe49996537b3a754a42e10075c1e7721566351dcdefabe9c9a4b7aa\" pid:7984 exited_at:{seconds:1747182409 nanos:221781448}" May 14 00:26:51.238441 containerd[2740]: time="2025-05-14T00:26:51.238393757Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"6cf94bce9287a8139c65b45261656a0521bcd90b47a4056f3f8975abea438e8e\" pid:8005 exited_at:{seconds:1747182411 nanos:238183191}" May 14 00:27:16.986746 containerd[2740]: time="2025-05-14T00:27:16.986693147Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"2628fc15a4a96253393fdaaab028c18edde567b6c4697d4234dca28d8900c37e\" pid:8051 exited_at:{seconds:1747182436 nanos:986528264}" May 14 00:27:19.229891 containerd[2740]: time="2025-05-14T00:27:19.229833650Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"ad51a13cdaf06df6b4e4c88afea256d797cbf6b14d1c5c9a55d5a90f8ace2191\" pid:8072 exited_at:{seconds:1747182439 nanos:229667248}" May 14 00:27:21.233036 containerd[2740]: time="2025-05-14T00:27:21.232984901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"5faa569616879d1935228ad275587a9ae142725df66748b289fdd4c8068e946e\" pid:8096 exited_at:{seconds:1747182441 nanos:232765658}" May 14 00:27:49.222999 containerd[2740]: time="2025-05-14T00:27:49.222916039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"219bc4f24122dfa69f4f0c3524a1ac0be1ffbdf2731680e3efe71fc08cfdcf57\" pid:8162 exited_at:{seconds:1747182469 nanos:222705957}" May 14 00:27:51.238785 containerd[2740]: time="2025-05-14T00:27:51.238709159Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"dbd77a4b3380f3befeb7af63352d5a2dafcb97f364a87e9f25f4bc3b6ea94ef5\" pid:8185 exited_at:{seconds:1747182471 nanos:238454757}" May 14 00:28:16.991926 containerd[2740]: time="2025-05-14T00:28:16.991887398Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"e4e8181382653d571c3f890563d68f6f2841ae87702a8ce0194fba5326369361\" pid:8238 exited_at:{seconds:1747182496 nanos:991715757}" May 14 00:28:19.226820 containerd[2740]: time="2025-05-14T00:28:19.226785705Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"1fe62147052af0d7fef3381e1a8c20a18d6fe26bf4f4afdbda958105cccd633e\" pid:8261 exited_at:{seconds:1747182499 nanos:226608984}" May 14 00:28:21.241795 containerd[2740]: time="2025-05-14T00:28:21.241751298Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"09b72385d4de51b131a60e02b6607830c660d93cfbc066572c6bed50bf753c3f\" pid:8284 exited_at:{seconds:1747182501 nanos:241575937}" May 14 00:28:28.560389 systemd[1]: Started sshd@7-147.75.51.18:22-109.94.172.237:49872.service - OpenSSH per-connection server daemon (109.94.172.237:49872). May 14 00:28:29.573226 sshd[8305]: Invalid user steam from 109.94.172.237 port 49872 May 14 00:28:29.761183 sshd[8305]: Received disconnect from 109.94.172.237 port 49872:11: Bye Bye [preauth] May 14 00:28:29.761183 sshd[8305]: Disconnected from invalid user steam 109.94.172.237 port 49872 [preauth] May 14 00:28:29.763053 systemd[1]: sshd@7-147.75.51.18:22-109.94.172.237:49872.service: Deactivated successfully. 
May 14 00:28:49.219711 containerd[2740]: time="2025-05-14T00:28:49.219672056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"b02a761b070a78d66fdfbb5d704944db7695ea39b63e837603f2f9bee9188a67\" pid:8332 exited_at:{seconds:1747182529 nanos:219478695}" May 14 00:28:51.239642 containerd[2740]: time="2025-05-14T00:28:51.239595113Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"b25d5bec239079892fab72b4e794dcc6543f6d140ad86b946bc1a777796169ed\" pid:8354 exited_at:{seconds:1747182531 nanos:239337192}" May 14 00:29:16.999121 containerd[2740]: time="2025-05-14T00:29:16.999030742Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"f5fcade0c8337e5e748e8bc39096fea3de021e8ec80421630bfcca574a1223db\" pid:8411 exited_at:{seconds:1747182556 nanos:998817181}" May 14 00:29:19.223114 containerd[2740]: time="2025-05-14T00:29:19.223073475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"7f37a44a746e1962521d4327a503d2c0bdf66e0a4090816d3a630a8dac7b1119\" pid:8433 exited_at:{seconds:1747182559 nanos:222867274}" May 14 00:29:21.234433 containerd[2740]: time="2025-05-14T00:29:21.234394641Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"4d1acfacb3de9435c571cbfcb0d4c5ebf8e2521c434e56b7868db34704f2ca59\" pid:8456 exited_at:{seconds:1747182561 nanos:234124919}" May 14 00:29:49.220865 containerd[2740]: time="2025-05-14T00:29:49.220822874Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"16c0be5f6ffd9dc20be4bef3e5937958b531ec6e5ca53bad29a6370c102e56b9\" pid:8500 exited_at:{seconds:1747182589 nanos:220628874}" May 14 00:29:51.236703 containerd[2740]: time="2025-05-14T00:29:51.236669652Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"56a08b98092f59751b81757c9e089169383c95cbc154d08f55c9f787e5f116ea\" pid:8521 exited_at:{seconds:1747182591 nanos:236433811}" May 14 00:30:16.645401 containerd[2740]: time="2025-05-14T00:30:16.645318221Z" level=warning msg="container event discarded" container=7b2752355cf24b75287e8721065e568fc509ebf9b332cae1dc8cdf57eb548408 type=CONTAINER_CREATED_EVENT May 14 00:30:16.645401 containerd[2740]: time="2025-05-14T00:30:16.645389941Z" level=warning msg="container event discarded" container=7b2752355cf24b75287e8721065e568fc509ebf9b332cae1dc8cdf57eb548408 type=CONTAINER_STARTED_EVENT May 14 00:30:16.660598 containerd[2740]: time="2025-05-14T00:30:16.660557811Z" level=warning msg="container event discarded" container=b41cd57c28b6771a41a1acd4f5cbccd60de91f2f849eb309e45f0419cc4f0d89 type=CONTAINER_CREATED_EVENT May 14 00:30:16.660598 containerd[2740]: time="2025-05-14T00:30:16.660588771Z" level=warning msg="container event discarded" container=b41cd57c28b6771a41a1acd4f5cbccd60de91f2f849eb309e45f0419cc4f0d89 type=CONTAINER_STARTED_EVENT May 14 00:30:16.660702 containerd[2740]: time="2025-05-14T00:30:16.660607891Z" level=warning msg="container event discarded" container=ebd0d78105aa1e2105bcb9397de4933dccf358577faca4723e87bcce487c1ef2 type=CONTAINER_CREATED_EVENT May 14 
00:30:16.660702 containerd[2740]: time="2025-05-14T00:30:16.660616091Z" level=warning msg="container event discarded" container=ebd0d78105aa1e2105bcb9397de4933dccf358577faca4723e87bcce487c1ef2 type=CONTAINER_STARTED_EVENT May 14 00:30:16.660702 containerd[2740]: time="2025-05-14T00:30:16.660622491Z" level=warning msg="container event discarded" container=f440b6ca1fb4cc5899b60672b46de6ea1905cd45c7c4857cc1c23a82c31386eb type=CONTAINER_CREATED_EVENT May 14 00:30:16.682798 containerd[2740]: time="2025-05-14T00:30:16.682776914Z" level=warning msg="container event discarded" container=aeaaca5edeb08597d8db0a9360b8fe3f128c491f898c4f7c09b1bdc99e68ce37 type=CONTAINER_CREATED_EVENT May 14 00:30:16.682798 containerd[2740]: time="2025-05-14T00:30:16.682791034Z" level=warning msg="container event discarded" container=13ad08b691eac4afc3749f4426692a716128e72673984155abec60960429aed2 type=CONTAINER_CREATED_EVENT May 14 00:30:16.719212 containerd[2740]: time="2025-05-14T00:30:16.719180961Z" level=warning msg="container event discarded" container=f440b6ca1fb4cc5899b60672b46de6ea1905cd45c7c4857cc1c23a82c31386eb type=CONTAINER_STARTED_EVENT May 14 00:30:16.719212 containerd[2740]: time="2025-05-14T00:30:16.719201721Z" level=warning msg="container event discarded" container=aeaaca5edeb08597d8db0a9360b8fe3f128c491f898c4f7c09b1bdc99e68ce37 type=CONTAINER_STARTED_EVENT May 14 00:30:16.719346 containerd[2740]: time="2025-05-14T00:30:16.719216681Z" level=warning msg="container event discarded" container=13ad08b691eac4afc3749f4426692a716128e72673984155abec60960429aed2 type=CONTAINER_STARTED_EVENT May 14 00:30:16.990896 containerd[2740]: time="2025-05-14T00:30:16.990828973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"d2cfeb2f28c63245f25daf3e0710e545e72870401b1457131defc70863b58c4d\" pid:8568 exited_at:{seconds:1747182616 nanos:990603212}" May 14 00:30:19.228792 containerd[2740]: time="2025-05-14T00:30:19.228758201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"7d7ffb53c114e48a4628be5fb41982764d59064ecf821917541b7312fc8d338a\" pid:8591 exited_at:{seconds:1747182619 nanos:228560440}" May 14 00:30:21.240785 containerd[2740]: time="2025-05-14T00:30:21.240746263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"c38fbd567860de172abb738dd5654664c266c8a164aa0829efbe427d6a3279c8\" pid:8615 exited_at:{seconds:1747182621 nanos:240559782}" May 14 00:30:27.095095 containerd[2740]: time="2025-05-14T00:30:27.095010035Z" level=warning msg="container event discarded" container=57500641735e7bd4f4c649d0b41957d0c17f22edef19da6a83d97ac4a3fc283c type=CONTAINER_CREATED_EVENT May 14 00:30:27.095095 containerd[2740]: time="2025-05-14T00:30:27.095069195Z" level=warning msg="container event discarded" container=57500641735e7bd4f4c649d0b41957d0c17f22edef19da6a83d97ac4a3fc283c type=CONTAINER_STARTED_EVENT May 14 00:30:27.106277 containerd[2740]: time="2025-05-14T00:30:27.106232326Z" level=warning msg="container event discarded" container=becc867d976838f66f2438f4d67a32715fa53c3cef44d8e0d99e96a76a021cc1 type=CONTAINER_CREATED_EVENT May 14 00:30:27.155574 containerd[2740]: time="2025-05-14T00:30:27.155536713Z" level=warning msg="container event discarded" container=becc867d976838f66f2438f4d67a32715fa53c3cef44d8e0d99e96a76a021cc1 type=CONTAINER_STARTED_EVENT May 14 
00:30:27.242868 containerd[2740]: time="2025-05-14T00:30:27.242842554Z" level=warning msg="container event discarded" container=317dbac3a8088db8d0baf83455256b06900b01aded32488ce7f9e34a3814ef4b type=CONTAINER_CREATED_EVENT May 14 00:30:27.242868 containerd[2740]: time="2025-05-14T00:30:27.242864434Z" level=warning msg="container event discarded" container=317dbac3a8088db8d0baf83455256b06900b01aded32488ce7f9e34a3814ef4b type=CONTAINER_STARTED_EVENT May 14 00:30:28.651494 containerd[2740]: time="2025-05-14T00:30:28.651453710Z" level=warning msg="container event discarded" container=0172bd71f2248cb7ec8f4d01cf487438390ebb13fcb2251cb6e2dd1c942969a9 type=CONTAINER_CREATED_EVENT May 14 00:30:28.713703 containerd[2740]: time="2025-05-14T00:30:28.713644916Z" level=warning msg="container event discarded" container=0172bd71f2248cb7ec8f4d01cf487438390ebb13fcb2251cb6e2dd1c942969a9 type=CONTAINER_STARTED_EVENT May 14 00:30:32.728884 containerd[2740]: time="2025-05-14T00:30:32.728809327Z" level=warning msg="container event discarded" container=6049011c1253d105c245ec7c78e7c102b2f675c3de83c40f7828eef420339e86 type=CONTAINER_CREATED_EVENT May 14 00:30:32.728884 containerd[2740]: time="2025-05-14T00:30:32.728847887Z" level=warning msg="container event discarded" container=6049011c1253d105c245ec7c78e7c102b2f675c3de83c40f7828eef420339e86 type=CONTAINER_STARTED_EVENT May 14 00:30:32.728884 containerd[2740]: time="2025-05-14T00:30:32.728856167Z" level=warning msg="container event discarded" container=97ebdf9c4714793d537dcbe9c8b054d790db105cb4e442a7f3b4a3aba7eacc52 type=CONTAINER_CREATED_EVENT May 14 00:30:32.728884 containerd[2740]: time="2025-05-14T00:30:32.728863287Z" level=warning msg="container event discarded" container=97ebdf9c4714793d537dcbe9c8b054d790db105cb4e442a7f3b4a3aba7eacc52 type=CONTAINER_STARTED_EVENT May 14 00:30:33.142587 containerd[2740]: time="2025-05-14T00:30:33.142556148Z" level=warning msg="container event discarded" container=81122568053cca9fc64f56213b634791d4db70e38a6d015d2dce4f293d4c5a9e type=CONTAINER_CREATED_EVENT May 14 00:30:33.195887 containerd[2740]: time="2025-05-14T00:30:33.195850072Z" level=warning msg="container event discarded" container=81122568053cca9fc64f56213b634791d4db70e38a6d015d2dce4f293d4c5a9e type=CONTAINER_STARTED_EVENT May 14 00:30:33.285124 containerd[2740]: time="2025-05-14T00:30:33.285080282Z" level=warning msg="container event discarded" container=81122568053cca9fc64f56213b634791d4db70e38a6d015d2dce4f293d4c5a9e type=CONTAINER_STOPPED_EVENT May 14 00:30:33.762675 containerd[2740]: time="2025-05-14T00:30:33.762621676Z" level=warning msg="container event discarded" container=acd5b3fa31b1ad8ac765648713f543479f6c8a9d802f10f063914f271bedeee7 type=CONTAINER_CREATED_EVENT May 14 00:30:33.819796 containerd[2740]: time="2025-05-14T00:30:33.819761498Z" level=warning msg="container event discarded" container=acd5b3fa31b1ad8ac765648713f543479f6c8a9d802f10f063914f271bedeee7 type=CONTAINER_STARTED_EVENT May 14 00:30:35.384729 containerd[2740]: time="2025-05-14T00:30:35.384613325Z" level=warning msg="container event discarded" container=63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2 type=CONTAINER_CREATED_EVENT May 14 00:30:35.432961 containerd[2740]: time="2025-05-14T00:30:35.432913427Z" level=warning msg="container event discarded" container=63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2 type=CONTAINER_STARTED_EVENT May 14 00:30:36.053305 containerd[2740]: time="2025-05-14T00:30:36.053231116Z" level=warning msg="container event discarded" 
container=63a6cf7faf5505700db125d6638ecfcaf02450cc229c5688a440671b1c3dd3c2 type=CONTAINER_STOPPED_EVENT May 14 00:30:38.999184 containerd[2740]: time="2025-05-14T00:30:38.999095320Z" level=warning msg="container event discarded" container=c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd type=CONTAINER_CREATED_EVENT May 14 00:30:39.049432 containerd[2740]: time="2025-05-14T00:30:39.049351231Z" level=warning msg="container event discarded" container=c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd type=CONTAINER_STARTED_EVENT May 14 00:30:47.446197 containerd[2740]: time="2025-05-14T00:30:47.446084157Z" level=warning msg="container event discarded" container=e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960 type=CONTAINER_CREATED_EVENT May 14 00:30:47.446197 containerd[2740]: time="2025-05-14T00:30:47.446126117Z" level=warning msg="container event discarded" container=e2d5bcb5dc74c1b7b42cc2d03fd154f8d9fb9fc99a51bea9474779f5e3eb8960 type=CONTAINER_STARTED_EVENT May 14 00:30:47.534335 containerd[2740]: time="2025-05-14T00:30:47.534295082Z" level=warning msg="container event discarded" container=b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7 type=CONTAINER_CREATED_EVENT May 14 00:30:47.534335 containerd[2740]: time="2025-05-14T00:30:47.534326962Z" level=warning msg="container event discarded" container=b16240b5264a61810c099eae120ca39f5dacda714a0b2afbb02af6e48163c7f7 type=CONTAINER_STARTED_EVENT May 14 00:30:48.171488 containerd[2740]: time="2025-05-14T00:30:48.171457364Z" level=warning msg="container event discarded" container=9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5 type=CONTAINER_CREATED_EVENT May 14 00:30:48.218663 containerd[2740]: time="2025-05-14T00:30:48.218627900Z" level=warning msg="container event discarded" container=9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5 type=CONTAINER_STARTED_EVENT May 14 00:30:48.438072 containerd[2740]: time="2025-05-14T00:30:48.438009186Z" level=warning msg="container event discarded" container=b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d type=CONTAINER_CREATED_EVENT May 14 00:30:48.438072 containerd[2740]: time="2025-05-14T00:30:48.438043067Z" level=warning msg="container event discarded" container=b7284e1c3d24097829722c91125f92f37604b676b9ddd2fc4635e3cf209c469d type=CONTAINER_STARTED_EVENT May 14 00:30:48.544688 containerd[2740]: time="2025-05-14T00:30:48.544649435Z" level=warning msg="container event discarded" container=7760e653d3e0c7a69088b039a7deb148272832a70cf6fae9f9d41b1e0ea2309a type=CONTAINER_CREATED_EVENT May 14 00:30:48.610718 containerd[2740]: time="2025-05-14T00:30:48.610680458Z" level=warning msg="container event discarded" container=7760e653d3e0c7a69088b039a7deb148272832a70cf6fae9f9d41b1e0ea2309a type=CONTAINER_STARTED_EVENT May 14 00:30:49.223918 containerd[2740]: time="2025-05-14T00:30:49.223886950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"5e830e9182577eae4b4f262b2790f33897bc56c40a270869b657f177b1935d05\" pid:8669 exited_at:{seconds:1747182649 nanos:223702509}" May 14 00:30:49.426121 containerd[2740]: time="2025-05-14T00:30:49.426087037Z" level=warning msg="container event discarded" container=5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36 type=CONTAINER_CREATED_EVENT May 14 00:30:49.426121 containerd[2740]: time="2025-05-14T00:30:49.426112518Z" level=warning msg="container event discarded" 
container=5051f9d397fcd41fda2e654a1e8f98fa0f20d80bae5a25ea9e120974d0eeae36 type=CONTAINER_STARTED_EVENT May 14 00:30:49.529691 containerd[2740]: time="2025-05-14T00:30:49.529618352Z" level=warning msg="container event discarded" container=7ec883c7bbd72b080e4e321b849d92cc1a28a25fa01249a96a70d920ad01ca88 type=CONTAINER_CREATED_EVENT May 14 00:30:49.576818 containerd[2740]: time="2025-05-14T00:30:49.576790769Z" level=warning msg="container event discarded" container=7ec883c7bbd72b080e4e321b849d92cc1a28a25fa01249a96a70d920ad01ca88 type=CONTAINER_STARTED_EVENT May 14 00:30:50.000789 containerd[2740]: time="2025-05-14T00:30:50.000745273Z" level=warning msg="container event discarded" container=ecb215db3a0233319d29781969b96d5be546c181889b3b178d1b32ebd825e344 type=CONTAINER_CREATED_EVENT May 14 00:30:50.052071 containerd[2740]: time="2025-05-14T00:30:50.052034788Z" level=warning msg="container event discarded" container=ecb215db3a0233319d29781969b96d5be546c181889b3b178d1b32ebd825e344 type=CONTAINER_STARTED_EVENT May 14 00:30:50.085239 containerd[2740]: time="2025-05-14T00:30:50.085209420Z" level=warning msg="container event discarded" container=4f7ba79604349154910afcc4d174b565ae482af195f21e0ab5a2062c586f9dc0 type=CONTAINER_CREATED_EVENT May 14 00:30:50.135410 containerd[2740]: time="2025-05-14T00:30:50.135385170Z" level=warning msg="container event discarded" container=4f7ba79604349154910afcc4d174b565ae482af195f21e0ab5a2062c586f9dc0 type=CONTAINER_STARTED_EVENT May 14 00:30:50.431678 containerd[2740]: time="2025-05-14T00:30:50.431643928Z" level=warning msg="container event discarded" container=f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167 type=CONTAINER_CREATED_EVENT May 14 00:30:50.431678 containerd[2740]: time="2025-05-14T00:30:50.431671649Z" level=warning msg="container event discarded" container=f98d73482f6c082546d3ec5e75610917ab3b3919c9fcb5e30edf473464eb6167 type=CONTAINER_STARTED_EVENT May 14 00:30:50.431772 containerd[2740]: time="2025-05-14T00:30:50.431680609Z" level=warning msg="container event discarded" container=d037a48a867ea1bb2a3f7579f508606e55af601cc1598b304a8333b5f563959d type=CONTAINER_CREATED_EVENT May 14 00:30:50.483790 containerd[2740]: time="2025-05-14T00:30:50.483748727Z" level=warning msg="container event discarded" container=d037a48a867ea1bb2a3f7579f508606e55af601cc1598b304a8333b5f563959d type=CONTAINER_STARTED_EVENT May 14 00:30:51.238064 containerd[2740]: time="2025-05-14T00:30:51.238028826Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"95aea48fe32fcc276834c4b2a6c00315620b1c186d592c8ada7f27ec75ff1ec0\" pid:8692 exited_at:{seconds:1747182651 nanos:237875305}" May 14 00:30:51.425302 containerd[2740]: time="2025-05-14T00:30:51.425255164Z" level=warning msg="container event discarded" container=ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee type=CONTAINER_CREATED_EVENT May 14 00:30:51.425302 containerd[2740]: time="2025-05-14T00:30:51.425291204Z" level=warning msg="container event discarded" container=ca99b7277e91b5a6ee03053b1f0311e8e5f864492fb58d73550716fd40f5dcee type=CONTAINER_STARTED_EVENT May 14 00:30:51.425302 containerd[2740]: time="2025-05-14T00:30:51.425305884Z" level=warning msg="container event discarded" container=e59ea666fd4bb99cf7c20e0c200f631cbb6aa61d2bcc3e548f5adf82b1620d07 type=CONTAINER_CREATED_EVENT May 14 00:30:51.476554 containerd[2740]: time="2025-05-14T00:30:51.476516919Z" level=warning msg="container event discarded" 
container=e59ea666fd4bb99cf7c20e0c200f631cbb6aa61d2bcc3e548f5adf82b1620d07 type=CONTAINER_STARTED_EVENT May 14 00:31:16.992966 containerd[2740]: time="2025-05-14T00:31:16.992925259Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"37b950ea429f586e6fd5063236e1b35896533f074db2f48753560c459dd7f452\" pid:8743 exited_at:{seconds:1747182676 nanos:992731698}" May 14 00:31:19.222987 containerd[2740]: time="2025-05-14T00:31:19.222947549Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"c82af0091a0493595dc6f960cdc10d0f2c9027b5906d7e088bbfdda823ce1baa\" pid:8765 exited_at:{seconds:1747182679 nanos:222753308}" May 14 00:31:21.238913 containerd[2740]: time="2025-05-14T00:31:21.238876498Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"503dfbc4da245ef2cd86e7bac62bb5aed648cc130a8ffd959393134d525029a6\" pid:8791 exited_at:{seconds:1747182681 nanos:238638057}" May 14 00:31:28.733404 update_engine[2734]: I20250514 00:31:28.733011 2734 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 14 00:31:28.733404 update_engine[2734]: I20250514 00:31:28.733072 2734 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 14 00:31:28.733404 update_engine[2734]: I20250514 00:31:28.733297 2734 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 14 00:31:28.733808 update_engine[2734]: I20250514 00:31:28.733654 2734 omaha_request_params.cc:62] Current group set to alpha May 14 00:31:28.733808 update_engine[2734]: I20250514 00:31:28.733737 2734 update_attempter.cc:499] Already updated boot flags. Skipping. May 14 00:31:28.733808 update_engine[2734]: I20250514 00:31:28.733747 2734 update_attempter.cc:643] Scheduling an action processor start. May 14 00:31:28.733808 update_engine[2734]: I20250514 00:31:28.733759 2734 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 14 00:31:28.733808 update_engine[2734]: I20250514 00:31:28.733786 2734 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 14 00:31:28.733914 update_engine[2734]: I20250514 00:31:28.733829 2734 omaha_request_action.cc:271] Posting an Omaha request to disabled May 14 00:31:28.733914 update_engine[2734]: I20250514 00:31:28.733837 2734 omaha_request_action.cc:272] Request: May 14 00:31:28.733914 update_engine[2734]: May 14 00:31:28.733914 update_engine[2734]: May 14 00:31:28.733914 update_engine[2734]: May 14 00:31:28.733914 update_engine[2734]: May 14 00:31:28.733914 update_engine[2734]: May 14 00:31:28.733914 update_engine[2734]: May 14 00:31:28.733914 update_engine[2734]: May 14 00:31:28.733914 update_engine[2734]: May 14 00:31:28.733914 update_engine[2734]: I20250514 00:31:28.733842 2734 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 00:31:28.734103 locksmithd[2766]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 14 00:31:28.734816 update_engine[2734]: I20250514 00:31:28.734790 2734 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 00:31:28.735217 update_engine[2734]: I20250514 00:31:28.735191 2734 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 14 00:31:28.735869 update_engine[2734]: E20250514 00:31:28.735844 2734 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 00:31:28.735987 update_engine[2734]: I20250514 00:31:28.735970 2734 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 14 00:31:38.695900 update_engine[2734]: I20250514 00:31:38.695410 2734 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 00:31:38.695900 update_engine[2734]: I20250514 00:31:38.695698 2734 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 00:31:38.695900 update_engine[2734]: I20250514 00:31:38.695897 2734 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 00:31:38.696407 update_engine[2734]: E20250514 00:31:38.696363 2734 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 00:31:38.696549 update_engine[2734]: I20250514 00:31:38.696529 2734 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 14 00:31:48.699750 update_engine[2734]: I20250514 00:31:48.699397 2734 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 00:31:48.699750 update_engine[2734]: I20250514 00:31:48.699662 2734 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 00:31:48.700147 update_engine[2734]: I20250514 00:31:48.699874 2734 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 00:31:48.700309 update_engine[2734]: E20250514 00:31:48.700286 2734 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 00:31:48.700485 update_engine[2734]: I20250514 00:31:48.700464 2734 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 14 00:31:49.224976 containerd[2740]: time="2025-05-14T00:31:49.224932619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"72aefb60630a6265059b7af6f93b81bed7656b7189c45f57f2459a7e2d13279b\" pid:8830 exited_at:{seconds:1747182709 nanos:224717378}" May 14 00:31:51.246038 containerd[2740]: time="2025-05-14T00:31:51.245996666Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"403478ce59f79acdbf3f6f97e7bf01f2460ef75762248d7d24b200474b8c43c5\" pid:8855 exited_at:{seconds:1747182711 nanos:245789585}" May 14 00:31:58.695895 update_engine[2734]: I20250514 00:31:58.695412 2734 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 00:31:58.695895 update_engine[2734]: I20250514 00:31:58.695648 2734 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 00:31:58.695895 update_engine[2734]: I20250514 00:31:58.695855 2734 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 00:31:58.696844 update_engine[2734]: E20250514 00:31:58.696465 2734 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 00:31:58.696844 update_engine[2734]: I20250514 00:31:58.696521 2734 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 14 00:31:58.696844 update_engine[2734]: I20250514 00:31:58.696529 2734 omaha_request_action.cc:617] Omaha request response: May 14 00:31:58.696844 update_engine[2734]: E20250514 00:31:58.696599 2734 omaha_request_action.cc:636] Omaha request network transfer failed. 
May 14 00:31:58.696844 update_engine[2734]: I20250514 00:31:58.696614 2734 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 14 00:31:58.696844 update_engine[2734]: I20250514 00:31:58.696619 2734 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 00:31:58.696844 update_engine[2734]: I20250514 00:31:58.696622 2734 update_attempter.cc:306] Processing Done.
May 14 00:31:58.696844 update_engine[2734]: E20250514 00:31:58.696635 2734 update_attempter.cc:619] Update failed.
May 14 00:31:58.696844 update_engine[2734]: I20250514 00:31:58.696640 2734 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 14 00:31:58.696844 update_engine[2734]: I20250514 00:31:58.696645 2734 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 14 00:31:58.696844 update_engine[2734]: I20250514 00:31:58.696650 2734 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 14 00:31:58.696844 update_engine[2734]: I20250514 00:31:58.696705 2734 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 14 00:31:58.696844 update_engine[2734]: I20250514 00:31:58.696725 2734 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 14 00:31:58.696844 update_engine[2734]: I20250514 00:31:58.696730 2734 omaha_request_action.cc:272] Request:
May 14 00:31:58.696844 update_engine[2734]:
May 14 00:31:58.696844 update_engine[2734]:
May 14 00:31:58.697193 update_engine[2734]:
May 14 00:31:58.697193 update_engine[2734]:
May 14 00:31:58.697193 update_engine[2734]:
May 14 00:31:58.697193 update_engine[2734]:
May 14 00:31:58.697193 update_engine[2734]: I20250514 00:31:58.696735 2734 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 00:31:58.697193 update_engine[2734]: I20250514 00:31:58.696847 2734 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 00:31:58.697193 update_engine[2734]: I20250514 00:31:58.697014 2734 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 00:31:58.697313 locksmithd[2766]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 14 00:31:58.697552 update_engine[2734]: E20250514 00:31:58.697249 2734 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 00:31:58.697552 update_engine[2734]: I20250514 00:31:58.697286 2734 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 14 00:31:58.697552 update_engine[2734]: I20250514 00:31:58.697294 2734 omaha_request_action.cc:617] Omaha request response:
May 14 00:31:58.697552 update_engine[2734]: I20250514 00:31:58.697299 2734 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 00:31:58.697552 update_engine[2734]: I20250514 00:31:58.697304 2734 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 00:31:58.697552 update_engine[2734]: I20250514 00:31:58.697307 2734 update_attempter.cc:306] Processing Done.
May 14 00:31:58.697552 update_engine[2734]: I20250514 00:31:58.697312 2734 update_attempter.cc:310] Error event sent.
May 14 00:31:58.697552 update_engine[2734]: I20250514 00:31:58.697319 2734 update_check_scheduler.cc:74] Next update check in 45m58s
May 14 00:31:58.697702 locksmithd[2766]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 14 00:32:16.987856 containerd[2740]: time="2025-05-14T00:32:16.987765670Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"08c289cc7008f1e30878fb2868f4beca71dd84201a6b72263c3819bedd17e85a\" pid:8902 exited_at:{seconds:1747182736 nanos:987575150}"
May 14 00:32:19.223921 containerd[2740]: time="2025-05-14T00:32:19.223876659Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"fad1328630122a2974942f0e472e15b508ef5113abdf879f4f045eb1365fa676\" pid:8925 exited_at:{seconds:1747182739 nanos:223728859}"
May 14 00:32:21.241728 containerd[2740]: time="2025-05-14T00:32:21.241683370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"fdffb8020e08713a47864edc1a7bd3150f73a0c3afbef49770b094c7e415dc9b\" pid:8965 exited_at:{seconds:1747182741 nanos:241503009}"
May 14 00:32:49.227893 containerd[2740]: time="2025-05-14T00:32:49.227843302Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"761b5e778184476f49551d3a601a3c1b1f287bb3f3a38a7c87a1644a5f3e3c93\" pid:8997 exited_at:{seconds:1747182769 nanos:227646341}"
May 14 00:32:51.239231 containerd[2740]: time="2025-05-14T00:32:51.239186542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"bc9abeef2bf85c8a66244813a60e4c474bc8f2ba8f41a0680a8ec00af4e5b8cd\" pid:9019 exited_at:{seconds:1747182771 nanos:238975741}"
May 14 00:33:09.202348 systemd[1]: Started sshd@8-147.75.51.18:22-109.94.172.237:54854.service - OpenSSH per-connection server daemon (109.94.172.237:54854).
May 14 00:33:10.205865 sshd[9045]: Invalid user hjm from 109.94.172.237 port 54854
May 14 00:33:10.392947 sshd[9045]: Received disconnect from 109.94.172.237 port 54854:11: Bye Bye [preauth]
May 14 00:33:10.392947 sshd[9045]: Disconnected from invalid user hjm 109.94.172.237 port 54854 [preauth]
May 14 00:33:10.394855 systemd[1]: sshd@8-147.75.51.18:22-109.94.172.237:54854.service: Deactivated successfully.
May 14 00:33:16.991867 containerd[2740]: time="2025-05-14T00:33:16.991818533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"3e622895a3251dffa8eaeccc2c078c88829e902331f036bb2de2e4d122eee8b4\" pid:9064 exited_at:{seconds:1747182796 nanos:991623454}"
May 14 00:33:19.224072 containerd[2740]: time="2025-05-14T00:33:19.224024116Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"fef162ab547eb937be8345f3e346c394ebddf52e4881206a7dea17bb610c6912\" pid:9085 exited_at:{seconds:1747182799 nanos:223847278}"
May 14 00:33:21.237851 containerd[2740]: time="2025-05-14T00:33:21.237806354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"c943f9af5c86e864faf7c333ed0fad4c18b73598e0c66c0c4baf117b5d7caf43\" pid:9109 exited_at:{seconds:1747182801 nanos:237562756}"
May 14 00:33:36.060339 systemd[1]: Started sshd@9-147.75.51.18:22-139.178.68.195:36334.service - OpenSSH per-connection server daemon (139.178.68.195:36334).
May 14 00:33:36.491950 sshd[9134]: Accepted publickey for core from 139.178.68.195 port 36334 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:33:36.493055 sshd-session[9134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:33:36.496492 systemd-logind[2724]: New session 10 of user core.
May 14 00:33:36.504524 systemd[1]: Started session-10.scope - Session 10 of User core.
May 14 00:33:36.858766 sshd[9136]: Connection closed by 139.178.68.195 port 36334
May 14 00:33:36.859153 sshd-session[9134]: pam_unix(sshd:session): session closed for user core
May 14 00:33:36.862118 systemd[1]: sshd@9-147.75.51.18:22-139.178.68.195:36334.service: Deactivated successfully.
May 14 00:33:36.863873 systemd[1]: session-10.scope: Deactivated successfully.
May 14 00:33:36.864437 systemd-logind[2724]: Session 10 logged out. Waiting for processes to exit.
May 14 00:33:36.864979 systemd-logind[2724]: Removed session 10.
May 14 00:33:41.934453 systemd[1]: Started sshd@10-147.75.51.18:22-139.178.68.195:36336.service - OpenSSH per-connection server daemon (139.178.68.195:36336).
May 14 00:33:42.358403 sshd[9178]: Accepted publickey for core from 139.178.68.195 port 36336 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:33:42.359534 sshd-session[9178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:33:42.362849 systemd-logind[2724]: New session 11 of user core.
May 14 00:33:42.372474 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 00:33:42.713410 sshd[9180]: Connection closed by 139.178.68.195 port 36336
May 14 00:33:42.713731 sshd-session[9178]: pam_unix(sshd:session): session closed for user core
May 14 00:33:42.716761 systemd[1]: sshd@10-147.75.51.18:22-139.178.68.195:36336.service: Deactivated successfully.
May 14 00:33:42.718482 systemd[1]: session-11.scope: Deactivated successfully.
May 14 00:33:42.719003 systemd-logind[2724]: Session 11 logged out. Waiting for processes to exit.
May 14 00:33:42.719585 systemd-logind[2724]: Removed session 11.
May 14 00:33:42.797346 systemd[1]: Started sshd@11-147.75.51.18:22-139.178.68.195:36338.service - OpenSSH per-connection server daemon (139.178.68.195:36338).
May 14 00:33:43.227912 sshd[9215]: Accepted publickey for core from 139.178.68.195 port 36338 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:33:43.229219 sshd-session[9215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:33:43.232359 systemd-logind[2724]: New session 12 of user core.
May 14 00:33:43.245472 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 00:33:43.621195 sshd[9217]: Connection closed by 139.178.68.195 port 36338
May 14 00:33:43.621569 sshd-session[9215]: pam_unix(sshd:session): session closed for user core
May 14 00:33:43.624552 systemd[1]: sshd@11-147.75.51.18:22-139.178.68.195:36338.service: Deactivated successfully.
May 14 00:33:43.626927 systemd[1]: session-12.scope: Deactivated successfully.
May 14 00:33:43.627499 systemd-logind[2724]: Session 12 logged out. Waiting for processes to exit.
May 14 00:33:43.628053 systemd-logind[2724]: Removed session 12.
May 14 00:33:43.695309 systemd[1]: Started sshd@12-147.75.51.18:22-139.178.68.195:54204.service - OpenSSH per-connection server daemon (139.178.68.195:54204).
May 14 00:33:44.135314 sshd[9255]: Accepted publickey for core from 139.178.68.195 port 54204 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:33:44.136383 sshd-session[9255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:33:44.139499 systemd-logind[2724]: New session 13 of user core.
May 14 00:33:44.154474 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 00:33:44.489512 sshd[9262]: Connection closed by 139.178.68.195 port 54204
May 14 00:33:44.489838 sshd-session[9255]: pam_unix(sshd:session): session closed for user core
May 14 00:33:44.492679 systemd[1]: sshd@12-147.75.51.18:22-139.178.68.195:54204.service: Deactivated successfully.
May 14 00:33:44.494499 systemd[1]: session-13.scope: Deactivated successfully.
May 14 00:33:44.495106 systemd-logind[2724]: Session 13 logged out. Waiting for processes to exit.
May 14 00:33:44.495695 systemd-logind[2724]: Removed session 13.
May 14 00:33:49.220002 containerd[2740]: time="2025-05-14T00:33:49.219965024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"49a2a2d25010e3519d60c83b543b5867c04ddcf399acda2d360689f56a77e7d7\" pid:9314 exited_at:{seconds:1747182829 nanos:219770465}"
May 14 00:33:49.565298 systemd[1]: Started sshd@13-147.75.51.18:22-139.178.68.195:54218.service - OpenSSH per-connection server daemon (139.178.68.195:54218).
May 14 00:33:50.002509 sshd[9326]: Accepted publickey for core from 139.178.68.195 port 54218 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:33:50.003594 sshd-session[9326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:33:50.006700 systemd-logind[2724]: New session 14 of user core.
May 14 00:33:50.018469 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 00:33:50.362480 sshd[9328]: Connection closed by 139.178.68.195 port 54218
May 14 00:33:50.362888 sshd-session[9326]: pam_unix(sshd:session): session closed for user core
May 14 00:33:50.365777 systemd[1]: sshd@13-147.75.51.18:22-139.178.68.195:54218.service: Deactivated successfully.
May 14 00:33:50.367491 systemd[1]: session-14.scope: Deactivated successfully.
May 14 00:33:50.368042 systemd-logind[2724]: Session 14 logged out. Waiting for processes to exit.
May 14 00:33:50.368603 systemd-logind[2724]: Removed session 14.
May 14 00:33:51.236110 containerd[2740]: time="2025-05-14T00:33:51.236073745Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"9e3e0523e4ed59743bce80d809139ba85a5ab198851d09e97f36247abb19dea1\" pid:9376 exited_at:{seconds:1747182831 nanos:235714627}"
May 14 00:33:55.434465 systemd[1]: Started sshd@14-147.75.51.18:22-139.178.68.195:59426.service - OpenSSH per-connection server daemon (139.178.68.195:59426).
May 14 00:33:55.859599 sshd[9409]: Accepted publickey for core from 139.178.68.195 port 59426 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:33:55.860643 sshd-session[9409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:33:55.863895 systemd-logind[2724]: New session 15 of user core.
May 14 00:33:55.875476 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 00:33:56.211708 sshd[9411]: Connection closed by 139.178.68.195 port 59426
May 14 00:33:56.211985 sshd-session[9409]: pam_unix(sshd:session): session closed for user core
May 14 00:33:56.214914 systemd[1]: sshd@14-147.75.51.18:22-139.178.68.195:59426.service: Deactivated successfully.
May 14 00:33:56.216621 systemd[1]: session-15.scope: Deactivated successfully.
May 14 00:33:56.217141 systemd-logind[2724]: Session 15 logged out. Waiting for processes to exit.
May 14 00:33:56.217726 systemd-logind[2724]: Removed session 15.
May 14 00:34:01.284341 systemd[1]: Started sshd@15-147.75.51.18:22-139.178.68.195:59428.service - OpenSSH per-connection server daemon (139.178.68.195:59428).
May 14 00:34:01.698618 sshd[9446]: Accepted publickey for core from 139.178.68.195 port 59428 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:34:01.699654 sshd-session[9446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:34:01.702660 systemd-logind[2724]: New session 16 of user core.
May 14 00:34:01.718501 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 00:34:02.048972 sshd[9448]: Connection closed by 139.178.68.195 port 59428
May 14 00:34:02.049278 sshd-session[9446]: pam_unix(sshd:session): session closed for user core
May 14 00:34:02.052153 systemd[1]: sshd@15-147.75.51.18:22-139.178.68.195:59428.service: Deactivated successfully.
May 14 00:34:02.053823 systemd[1]: session-16.scope: Deactivated successfully.
May 14 00:34:02.054367 systemd-logind[2724]: Session 16 logged out. Waiting for processes to exit.
May 14 00:34:02.054914 systemd-logind[2724]: Removed session 16.
May 14 00:34:02.120286 systemd[1]: Started sshd@16-147.75.51.18:22-139.178.68.195:59434.service - OpenSSH per-connection server daemon (139.178.68.195:59434).
May 14 00:34:02.537815 sshd[9482]: Accepted publickey for core from 139.178.68.195 port 59434 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:34:02.538866 sshd-session[9482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:34:02.541817 systemd-logind[2724]: New session 17 of user core.
May 14 00:34:02.557473 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 00:34:02.914499 sshd[9484]: Connection closed by 139.178.68.195 port 59434
May 14 00:34:02.914834 sshd-session[9482]: pam_unix(sshd:session): session closed for user core
May 14 00:34:02.917784 systemd[1]: sshd@16-147.75.51.18:22-139.178.68.195:59434.service: Deactivated successfully.
May 14 00:34:02.920066 systemd[1]: session-17.scope: Deactivated successfully.
May 14 00:34:02.920603 systemd-logind[2724]: Session 17 logged out. Waiting for processes to exit.
May 14 00:34:02.921149 systemd-logind[2724]: Removed session 17.
May 14 00:34:02.993289 systemd[1]: Started sshd@17-147.75.51.18:22-139.178.68.195:59446.service - OpenSSH per-connection server daemon (139.178.68.195:59446).
May 14 00:34:03.417047 sshd[9517]: Accepted publickey for core from 139.178.68.195 port 59446 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:34:03.418109 sshd-session[9517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:34:03.421226 systemd-logind[2724]: New session 18 of user core.
May 14 00:34:03.440471 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 00:34:04.104305 sshd[9519]: Connection closed by 139.178.68.195 port 59446
May 14 00:34:04.104703 sshd-session[9517]: pam_unix(sshd:session): session closed for user core
May 14 00:34:04.107532 systemd[1]: sshd@17-147.75.51.18:22-139.178.68.195:59446.service: Deactivated successfully.
May 14 00:34:04.110018 systemd[1]: session-18.scope: Deactivated successfully.
May 14 00:34:04.110669 systemd-logind[2724]: Session 18 logged out. Waiting for processes to exit.
May 14 00:34:04.111257 systemd-logind[2724]: Removed session 18.
May 14 00:34:04.178355 systemd[1]: Started sshd@18-147.75.51.18:22-139.178.68.195:48482.service - OpenSSH per-connection server daemon (139.178.68.195:48482).
May 14 00:34:04.620596 sshd[9582]: Accepted publickey for core from 139.178.68.195 port 48482 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:34:04.621783 sshd-session[9582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:34:04.624997 systemd-logind[2724]: New session 19 of user core.
May 14 00:34:04.640484 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 00:34:05.068319 sshd[9584]: Connection closed by 139.178.68.195 port 48482
May 14 00:34:05.068624 sshd-session[9582]: pam_unix(sshd:session): session closed for user core
May 14 00:34:05.071451 systemd[1]: sshd@18-147.75.51.18:22-139.178.68.195:48482.service: Deactivated successfully.
May 14 00:34:05.073761 systemd[1]: session-19.scope: Deactivated successfully.
May 14 00:34:05.074301 systemd-logind[2724]: Session 19 logged out. Waiting for processes to exit.
May 14 00:34:05.074863 systemd-logind[2724]: Removed session 19.
May 14 00:34:05.143339 systemd[1]: Started sshd@19-147.75.51.18:22-139.178.68.195:48490.service - OpenSSH per-connection server daemon (139.178.68.195:48490).
May 14 00:34:05.588247 sshd[9634]: Accepted publickey for core from 139.178.68.195 port 48490 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:34:05.589422 sshd-session[9634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:34:05.592472 systemd-logind[2724]: New session 20 of user core.
May 14 00:34:05.603471 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 00:34:05.948469 sshd[9637]: Connection closed by 139.178.68.195 port 48490
May 14 00:34:05.948853 sshd-session[9634]: pam_unix(sshd:session): session closed for user core
May 14 00:34:05.951817 systemd[1]: sshd@19-147.75.51.18:22-139.178.68.195:48490.service: Deactivated successfully.
May 14 00:34:05.953493 systemd[1]: session-20.scope: Deactivated successfully.
May 14 00:34:05.954009 systemd-logind[2724]: Session 20 logged out. Waiting for processes to exit.
May 14 00:34:05.954582 systemd-logind[2724]: Removed session 20.
May 14 00:34:11.024289 systemd[1]: Started sshd@20-147.75.51.18:22-139.178.68.195:48504.service - OpenSSH per-connection server daemon (139.178.68.195:48504).
May 14 00:34:11.456598 sshd[9680]: Accepted publickey for core from 139.178.68.195 port 48504 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:34:11.457717 sshd-session[9680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:34:11.461182 systemd-logind[2724]: New session 21 of user core.
May 14 00:34:11.470473 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 00:34:11.815267 sshd[9682]: Connection closed by 139.178.68.195 port 48504
May 14 00:34:11.815586 sshd-session[9680]: pam_unix(sshd:session): session closed for user core
May 14 00:34:11.818482 systemd[1]: sshd@20-147.75.51.18:22-139.178.68.195:48504.service: Deactivated successfully.
May 14 00:34:11.820781 systemd[1]: session-21.scope: Deactivated successfully.
May 14 00:34:11.821311 systemd-logind[2724]: Session 21 logged out. Waiting for processes to exit.
May 14 00:34:11.821971 systemd-logind[2724]: Removed session 21.
May 14 00:34:16.887303 systemd[1]: Started sshd@21-147.75.51.18:22-139.178.68.195:40586.service - OpenSSH per-connection server daemon (139.178.68.195:40586).
May 14 00:34:16.992001 containerd[2740]: time="2025-05-14T00:34:16.991967167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"95f6ddbe16266e20e3a939c90aea41bd68105a7c7d9ef59e7951b46b593e5381\" pid:9737 exited_at:{seconds:1747182856 nanos:991773368}"
May 14 00:34:17.310023 sshd[9724]: Accepted publickey for core from 139.178.68.195 port 40586 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:34:17.311124 sshd-session[9724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:34:17.314493 systemd-logind[2724]: New session 22 of user core.
May 14 00:34:17.326472 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 00:34:17.661410 sshd[9748]: Connection closed by 139.178.68.195 port 40586
May 14 00:34:17.661761 sshd-session[9724]: pam_unix(sshd:session): session closed for user core
May 14 00:34:17.664685 systemd[1]: sshd@21-147.75.51.18:22-139.178.68.195:40586.service: Deactivated successfully.
May 14 00:34:17.666987 systemd[1]: session-22.scope: Deactivated successfully.
May 14 00:34:17.667569 systemd-logind[2724]: Session 22 logged out. Waiting for processes to exit.
May 14 00:34:17.668156 systemd-logind[2724]: Removed session 22.
May 14 00:34:19.217728 containerd[2740]: time="2025-05-14T00:34:19.217695994Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ac0fccefeb9a5df0829599233be652e561f9f5a22ff2418946cde5c0657fbc5\" id:\"c4c2c9f123c1b6ea56932a7f230d8d77a340afccd11d6bf8fc675bd45c6d3819\" pid:9792 exited_at:{seconds:1747182859 nanos:217531914}"
May 14 00:34:21.236074 containerd[2740]: time="2025-05-14T00:34:21.236020154Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c683fce9041df3a997ada6cec725cb3e54770debeddbc11182af3f9afeb9cffd\" id:\"20c134c7ed674894218d6e5d5abf74c0c7b86b298c5b7d4a24f7ff42f120deff\" pid:9815 exited_at:{seconds:1747182861 nanos:235817354}"
May 14 00:34:22.736367 systemd[1]: Started sshd@22-147.75.51.18:22-139.178.68.195:40598.service - OpenSSH per-connection server daemon (139.178.68.195:40598).
May 14 00:34:23.174102 sshd[9833]: Accepted publickey for core from 139.178.68.195 port 40598 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:34:23.175386 sshd-session[9833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:34:23.178687 systemd-logind[2724]: New session 23 of user core.
May 14 00:34:23.191531 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 00:34:23.526318 sshd[9836]: Connection closed by 139.178.68.195 port 40598
May 14 00:34:23.526607 sshd-session[9833]: pam_unix(sshd:session): session closed for user core
May 14 00:34:23.529511 systemd[1]: sshd@22-147.75.51.18:22-139.178.68.195:40598.service: Deactivated successfully.
May 14 00:34:23.531165 systemd[1]: session-23.scope: Deactivated successfully.
May 14 00:34:23.531815 systemd-logind[2724]: Session 23 logged out. Waiting for processes to exit.
May 14 00:34:23.532352 systemd-logind[2724]: Removed session 23.