May 8 00:01:55.160884 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1]
May 8 00:01:55.160906 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 7 22:21:35 -00 2025
May 8 00:01:55.160914 kernel: KASLR enabled
May 8 00:01:55.160919 kernel: efi: EFI v2.7 by American Megatrends
May 8 00:01:55.160925 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea468818 RNG=0xebf10018 MEMRESERVE=0xe4627e18
May 8 00:01:55.160930 kernel: random: crng init done
May 8 00:01:55.160937 kernel: secureboot: Secure boot disabled
May 8 00:01:55.160943 kernel: esrt: Reserving ESRT space from 0x00000000ea468818 to 0x00000000ea468878.
May 8 00:01:55.160950 kernel: ACPI: Early table checksum verification disabled
May 8 00:01:55.160956 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere)
May 8 00:01:55.160962 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013)
May 8 00:01:55.160967 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509)
May 8 00:01:55.160973 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717)
May 8 00:01:55.160979 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509)
May 8 00:01:55.160987 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509)
May 8 00:01:55.160993 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509)
May 8 00:01:55.160999 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013)
May 8 00:01:55.161005 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F)
May 8 00:01:55.161012 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013)
May 8 00:01:55.161018 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013)
May 8 00:01:55.161024 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013)
May 8 00:01:55.161030 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013)
May 8 00:01:55.161036 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013)
May 8 00:01:55.161042 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013)
May 8 00:01:55.161049 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013)
May 8 00:01:55.161055 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013)
May 8 00:01:55.161062 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013)
May 8 00:01:55.161068 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013)
May 8 00:01:55.161074 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200
May 8 00:01:55.161080 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff]
May 8 00:01:55.161086 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff]
May 8 00:01:55.161092 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff]
May 8 00:01:55.161098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff]
May 8 00:01:55.161104 kernel: NUMA: NODE_DATA [mem 0x83fdffcb800-0x83fdffd0fff]
May 8 00:01:55.161110 kernel: Zone ranges:
May 8 00:01:55.161117 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff]
May 8 00:01:55.161123 kernel: DMA32 empty
May 8 00:01:55.161129 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff]
May 8 00:01:55.161135 kernel: Movable zone start for each node
May 8 00:01:55.161142 kernel: Early memory node ranges
May 8 00:01:55.161151 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff]
May 8 00:01:55.161157 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff]
May 8 00:01:55.161165 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff]
May 8 00:01:55.161171 kernel: node 0: [mem 0x0000000094000000-0x00000000eba2dfff]
May 8 00:01:55.161178 kernel: node 0: [mem 0x00000000eba2e000-0x00000000ebeaffff]
May 8 00:01:55.161184 kernel: node 0: [mem 0x00000000ebeb0000-0x00000000ebeb9fff]
May 8 00:01:55.161191 kernel: node 0: [mem 0x00000000ebeba000-0x00000000ebeccfff]
May 8 00:01:55.161197 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff]
May 8 00:01:55.161204 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff]
May 8 00:01:55.161210 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff]
May 8 00:01:55.161216 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff]
May 8 00:01:55.161223 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee53ffff]
May 8 00:01:55.161230 kernel: node 0: [mem 0x00000000ee540000-0x00000000f765ffff]
May 8 00:01:55.161237 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff]
May 8 00:01:55.161243 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff]
May 8 00:01:55.161249 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff]
May 8 00:01:55.161256 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff]
May 8 00:01:55.161262 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff]
May 8 00:01:55.161269 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff]
May 8 00:01:55.161275 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff]
May 8 00:01:55.161282 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff]
May 8 00:01:55.161288 kernel: On node 0, zone DMA: 768 pages in unavailable ranges
May 8 00:01:55.161295 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges
May 8 00:01:55.161303 kernel: psci: probing for conduit method from ACPI.
May 8 00:01:55.161309 kernel: psci: PSCIv1.1 detected in firmware.
May 8 00:01:55.161316 kernel: psci: Using standard PSCI v0.2 function IDs
May 8 00:01:55.161322 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 8 00:01:55.161328 kernel: psci: SMC Calling Convention v1.2
May 8 00:01:55.161335 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 8 00:01:55.161341 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0
May 8 00:01:55.161348 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0
May 8 00:01:55.161354 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0
May 8 00:01:55.161360 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0
May 8 00:01:55.161367 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0
May 8 00:01:55.161373 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0
May 8 00:01:55.161381 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0
May 8 00:01:55.161387 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0
May 8 00:01:55.161394 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0
May 8 00:01:55.161400 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0
May 8 00:01:55.161406 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0
May 8 00:01:55.161413 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0
May 8 00:01:55.161419 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0
May 8 00:01:55.161425 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0
May 8 00:01:55.161432 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0
May 8 00:01:55.161438 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0
May 8 00:01:55.161444 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0
May 8 00:01:55.161450 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0
May 8 00:01:55.161458 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0
May 8 00:01:55.161464 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0
May 8 00:01:55.161471 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0
May 8 00:01:55.161477 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0
May 8 00:01:55.161484 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0
May 8 00:01:55.161490 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0
May 8 00:01:55.161496 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0
May 8 00:01:55.161503 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0
May 8 00:01:55.161509 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0
May 8 00:01:55.161515 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0
May 8 00:01:55.161522 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0
May 8 00:01:55.161530 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0
May 8 00:01:55.161536 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0
May 8 00:01:55.161542 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0
May 8 00:01:55.161549 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0
May 8 00:01:55.161555 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0
May 8 00:01:55.161561 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0
May 8 00:01:55.161568 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0
May 8 00:01:55.161574 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0
May 8 00:01:55.161581 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0
May 8 00:01:55.161587 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0
May 8 00:01:55.161593 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0
May 8 00:01:55.161600 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0
May 8 00:01:55.161607 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0
May 8 00:01:55.161614 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0
May 8 00:01:55.161620 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0
May 8 00:01:55.161627 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0
May 8 00:01:55.161633 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0
May 8 00:01:55.161639 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0
May 8 00:01:55.161646 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0
May 8 00:01:55.161652 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0
May 8 00:01:55.161664 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0
May 8 00:01:55.161671 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0
May 8 00:01:55.161679 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0
May 8 00:01:55.161686 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0
May 8 00:01:55.161693 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0
May 8 00:01:55.161700 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0
May 8 00:01:55.161706 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0
May 8 00:01:55.161713 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0
May 8 00:01:55.161721 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0
May 8 00:01:55.161728 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0
May 8 00:01:55.161735 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0
May 8 00:01:55.161742 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0
May 8 00:01:55.161748 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0
May 8 00:01:55.161755 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0
May 8 00:01:55.161762 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0
May 8 00:01:55.161768 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0
May 8 00:01:55.161775 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0
May 8 00:01:55.161782 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0
May 8 00:01:55.161789 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0
May 8 00:01:55.161795 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0
May 8 00:01:55.161804 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0
May 8 00:01:55.161818 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0
May 8 00:01:55.161824 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0
May 8 00:01:55.161831 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0
May 8 00:01:55.161838 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0
May 8 00:01:55.161845 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0
May 8 00:01:55.161851 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0
May 8 00:01:55.161858 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0
May 8 00:01:55.161865 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0
May 8 00:01:55.161872 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0
May 8 00:01:55.161878 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
May 8 00:01:55.161887 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
May 8 00:01:55.161894 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07
May 8 00:01:55.161901 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15
May 8 00:01:55.161908 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23
May 8 00:01:55.161914 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31
May 8 00:01:55.161921 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39
May 8 00:01:55.161928 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47
May 8 00:01:55.161935 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55
May 8 00:01:55.161941 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63
May 8 00:01:55.161948 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71
May 8 00:01:55.161955 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79
May 8 00:01:55.161963 kernel: Detected PIPT I-cache on CPU0
May 8 00:01:55.161970 kernel: CPU features: detected: GIC system register CPU interface
May 8 00:01:55.161977 kernel: CPU features: detected: Virtualization Host Extensions
May 8 00:01:55.161984 kernel: CPU features: detected: Hardware dirty bit management
May 8 00:01:55.161991 kernel: CPU features: detected: Spectre-v4
May 8 00:01:55.161998 kernel: CPU features: detected: Spectre-BHB
May 8 00:01:55.162005 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 8 00:01:55.162011 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 8 00:01:55.162018 kernel: CPU features: detected: ARM erratum 1418040
May 8 00:01:55.162025 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 8 00:01:55.162032 kernel: alternatives: applying boot alternatives
May 8 00:01:55.162040 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=82f9441f083668f7b43f8fe99c3dc9ee441b8a3ef2f63ecd1e548de4dde5b207
May 8 00:01:55.162048 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:01:55.162055 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
May 8 00:01:55.162062 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes
May 8 00:01:55.162069 kernel: printk: log_buf_len min size: 262144 bytes
May 8 00:01:55.162076 kernel: printk: log_buf_len: 1048576 bytes
May 8 00:01:55.162083 kernel: printk: early log buf free: 249864(95%)
May 8 00:01:55.162089 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear)
May 8 00:01:55.162096 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
May 8 00:01:55.162103 kernel: Fallback order for Node 0: 0
May 8 00:01:55.162110 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028
May 8 00:01:55.162118 kernel: Policy zone: Normal
May 8 00:01:55.162125 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:01:55.162132 kernel: software IO TLB: area num 128.
May 8 00:01:55.162139 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB)
May 8 00:01:55.162146 kernel: Memory: 262923416K/268174336K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 5250920K reserved, 0K cma-reserved)
May 8 00:01:55.162153 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1
May 8 00:01:55.162160 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:01:55.162167 kernel: rcu: RCU event tracing is enabled.
May 8 00:01:55.162174 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80.
May 8 00:01:55.162181 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:01:55.162189 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:01:55.162195 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:01:55.162204 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80
May 8 00:01:55.162211 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 8 00:01:55.162218 kernel: GICv3: GIC: Using split EOI/Deactivate mode
May 8 00:01:55.162224 kernel: GICv3: 672 SPIs implemented
May 8 00:01:55.162231 kernel: GICv3: 0 Extended SPIs implemented
May 8 00:01:55.162238 kernel: Root IRQ handler: gic_handle_irq
May 8 00:01:55.162245 kernel: GICv3: GICv3 features: 16 PPIs
May 8 00:01:55.162252 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000
May 8 00:01:55.162258 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0
May 8 00:01:55.162265 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0
May 8 00:01:55.162272 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0
May 8 00:01:55.162278 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0
May 8 00:01:55.162286 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0
May 8 00:01:55.162293 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0
May 8 00:01:55.162300 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0
May 8 00:01:55.162307 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0
May 8 00:01:55.162313 kernel: ITS [mem 0x100100040000-0x10010005ffff]
May 8 00:01:55.162320 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:01:55.162327 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1)
May 8 00:01:55.162334 kernel: ITS [mem 0x100100060000-0x10010007ffff]
May 8 00:01:55.162341 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:01:55.162348 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1)
May 8 00:01:55.162355 kernel: ITS [mem 0x100100080000-0x10010009ffff]
May 8 00:01:55.162363 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:01:55.162371 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1)
May 8 00:01:55.162377 kernel: ITS [mem 0x1001000a0000-0x1001000bffff]
May 8 00:01:55.162384 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:01:55.162392 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1)
May 8 00:01:55.162398 kernel: ITS [mem 0x1001000c0000-0x1001000dffff]
May 8 00:01:55.162405 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:01:55.162412 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1)
May 8 00:01:55.162419 kernel: ITS [mem 0x1001000e0000-0x1001000fffff]
May 8 00:01:55.162426 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:01:55.162433 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1)
May 8 00:01:55.162441 kernel: ITS [mem 0x100100100000-0x10010011ffff]
May 8 00:01:55.162448 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:01:55.162455 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1)
May 8 00:01:55.162462 kernel: ITS [mem 0x100100120000-0x10010013ffff]
May 8 00:01:55.162469 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:01:55.162476 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1)
May 8 00:01:55.162483 kernel: GICv3: using LPI property table @0x00000800003e0000
May 8 00:01:55.162490 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000
May 8 00:01:55.162496 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:01:55.162503 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.162510 kernel: ACPI GTDT: found 1 memory-mapped timer block(s).
May 8 00:01:55.162518 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys).
May 8 00:01:55.162525 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 8 00:01:55.162533 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 8 00:01:55.162539 kernel: Console: colour dummy device 80x25
May 8 00:01:55.162547 kernel: printk: console [tty0] enabled
May 8 00:01:55.162554 kernel: ACPI: Core revision 20230628
May 8 00:01:55.162561 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 8 00:01:55.162568 kernel: pid_max: default: 81920 minimum: 640
May 8 00:01:55.162575 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:01:55.162582 kernel: landlock: Up and running.
May 8 00:01:55.162590 kernel: SELinux: Initializing.
May 8 00:01:55.162597 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:01:55.162604 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:01:55.162611 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
May 8 00:01:55.162619 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
May 8 00:01:55.162625 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:01:55.162633 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:01:55.162640 kernel: Platform MSI: ITS@0x100100040000 domain created
May 8 00:01:55.162647 kernel: Platform MSI: ITS@0x100100060000 domain created
May 8 00:01:55.162655 kernel: Platform MSI: ITS@0x100100080000 domain created
May 8 00:01:55.162662 kernel: Platform MSI: ITS@0x1001000a0000 domain created
May 8 00:01:55.162669 kernel: Platform MSI: ITS@0x1001000c0000 domain created
May 8 00:01:55.162676 kernel: Platform MSI: ITS@0x1001000e0000 domain created
May 8 00:01:55.162683 kernel: Platform MSI: ITS@0x100100100000 domain created
May 8 00:01:55.162689 kernel: Platform MSI: ITS@0x100100120000 domain created
May 8 00:01:55.162697 kernel: PCI/MSI: ITS@0x100100040000 domain created
May 8 00:01:55.162703 kernel: PCI/MSI: ITS@0x100100060000 domain created
May 8 00:01:55.162710 kernel: PCI/MSI: ITS@0x100100080000 domain created
May 8 00:01:55.162718 kernel: PCI/MSI: ITS@0x1001000a0000 domain created
May 8 00:01:55.162725 kernel: PCI/MSI: ITS@0x1001000c0000 domain created
May 8 00:01:55.162732 kernel: PCI/MSI: ITS@0x1001000e0000 domain created
May 8 00:01:55.162739 kernel: PCI/MSI: ITS@0x100100100000 domain created
May 8 00:01:55.162746 kernel: PCI/MSI: ITS@0x100100120000 domain created
May 8 00:01:55.162753 kernel: Remapping and enabling EFI services.
May 8 00:01:55.162760 kernel: smp: Bringing up secondary CPUs ...
May 8 00:01:55.162767 kernel: Detected PIPT I-cache on CPU1
May 8 00:01:55.162774 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000
May 8 00:01:55.162781 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000
May 8 00:01:55.162789 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.162796 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1]
May 8 00:01:55.162803 kernel: Detected PIPT I-cache on CPU2
May 8 00:01:55.162826 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000
May 8 00:01:55.162833 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000
May 8 00:01:55.162840 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.162847 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1]
May 8 00:01:55.162854 kernel: Detected PIPT I-cache on CPU3
May 8 00:01:55.162861 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000
May 8 00:01:55.162870 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000
May 8 00:01:55.162877 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.162884 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1]
May 8 00:01:55.162891 kernel: Detected PIPT I-cache on CPU4
May 8 00:01:55.162898 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000
May 8 00:01:55.162905 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000
May 8 00:01:55.162911 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.162918 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1]
May 8 00:01:55.162925 kernel: Detected PIPT I-cache on CPU5
May 8 00:01:55.162932 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000
May 8 00:01:55.162940 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000
May 8 00:01:55.162947 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.162954 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1]
May 8 00:01:55.162962 kernel: Detected PIPT I-cache on CPU6
May 8 00:01:55.162968 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000
May 8 00:01:55.162975 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000
May 8 00:01:55.162982 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.162989 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1]
May 8 00:01:55.162996 kernel: Detected PIPT I-cache on CPU7
May 8 00:01:55.163004 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000
May 8 00:01:55.163011 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000
May 8 00:01:55.163018 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163025 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1]
May 8 00:01:55.163032 kernel: Detected PIPT I-cache on CPU8
May 8 00:01:55.163039 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000
May 8 00:01:55.163046 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000
May 8 00:01:55.163053 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163060 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1]
May 8 00:01:55.163067 kernel: Detected PIPT I-cache on CPU9
May 8 00:01:55.163075 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000
May 8 00:01:55.163083 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000
May 8 00:01:55.163090 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163096 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1]
May 8 00:01:55.163103 kernel: Detected PIPT I-cache on CPU10
May 8 00:01:55.163110 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000
May 8 00:01:55.163118 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000
May 8 00:01:55.163125 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163131 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1]
May 8 00:01:55.163140 kernel: Detected PIPT I-cache on CPU11
May 8 00:01:55.163147 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000
May 8 00:01:55.163154 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000
May 8 00:01:55.163161 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163168 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1]
May 8 00:01:55.163175 kernel: Detected PIPT I-cache on CPU12
May 8 00:01:55.163182 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000
May 8 00:01:55.163189 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000
May 8 00:01:55.163196 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163203 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1]
May 8 00:01:55.163211 kernel: Detected PIPT I-cache on CPU13
May 8 00:01:55.163218 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000
May 8 00:01:55.163225 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000
May 8 00:01:55.163232 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163239 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1]
May 8 00:01:55.163246 kernel: Detected PIPT I-cache on CPU14
May 8 00:01:55.163253 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000
May 8 00:01:55.163260 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000
May 8 00:01:55.163267 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163275 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1]
May 8 00:01:55.163282 kernel: Detected PIPT I-cache on CPU15
May 8 00:01:55.163289 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000
May 8 00:01:55.163296 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000
May 8 00:01:55.163303 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163310 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1]
May 8 00:01:55.163316 kernel: Detected PIPT I-cache on CPU16
May 8 00:01:55.163323 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000
May 8 00:01:55.163330 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000
May 8 00:01:55.163346 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163355 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1]
May 8 00:01:55.163362 kernel: Detected PIPT I-cache on CPU17
May 8 00:01:55.163370 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000
May 8 00:01:55.163377 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000
May 8 00:01:55.163384 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163391 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1]
May 8 00:01:55.163398 kernel: Detected PIPT I-cache on CPU18
May 8 00:01:55.163406 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000
May 8 00:01:55.163413 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000
May 8 00:01:55.163421 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163428 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1]
May 8 00:01:55.163435 kernel: Detected PIPT I-cache on CPU19
May 8 00:01:55.163443 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000
May 8 00:01:55.163450 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000
May 8 00:01:55.163457 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163466 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1]
May 8 00:01:55.163473 kernel: Detected PIPT I-cache on CPU20
May 8 00:01:55.163480 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000
May 8 00:01:55.163488 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000
May 8 00:01:55.163495 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163502 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1]
May 8 00:01:55.163509 kernel: Detected PIPT I-cache on CPU21
May 8 00:01:55.163516 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000
May 8 00:01:55.163524 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000
May 8 00:01:55.163532 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163540 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1]
May 8 00:01:55.163547 kernel: Detected PIPT I-cache on CPU22
May 8 00:01:55.163554 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000
May 8 00:01:55.163561 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000
May 8 00:01:55.163569 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163576 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1]
May 8 00:01:55.163583 kernel: Detected PIPT I-cache on CPU23
May 8 00:01:55.163590 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000
May 8 00:01:55.163599 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000
May 8 00:01:55.163606 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163613 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1]
May 8 00:01:55.163621 kernel: Detected PIPT I-cache on CPU24
May 8 00:01:55.163628 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000
May 8 00:01:55.163635 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000
May 8 00:01:55.163644 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163653 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1]
May 8 00:01:55.163660 kernel: Detected PIPT I-cache on CPU25
May 8 00:01:55.163667 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000
May 8 00:01:55.163676 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000
May 8 00:01:55.163683 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163691 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1]
May 8 00:01:55.163698 kernel: Detected PIPT I-cache on CPU26
May 8 00:01:55.163705 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000
May 8 00:01:55.163712 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000
May 8 00:01:55.163720 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163727 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1]
May 8 00:01:55.163734 kernel: Detected PIPT I-cache on CPU27
May 8 00:01:55.163743 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000
May 8 00:01:55.163750 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000
May 8 00:01:55.163757 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163765 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1]
May 8 00:01:55.163772 kernel: Detected PIPT I-cache on CPU28
May 8 00:01:55.163779 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000
May 8 00:01:55.163787 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000
May 8 00:01:55.163794 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163801 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1]
May 8 00:01:55.163819 kernel: Detected PIPT I-cache on CPU29
May 8 00:01:55.163829 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000
May 8 00:01:55.163837 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000
May 8 00:01:55.163844 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163851 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1]
May 8 00:01:55.163858 kernel: Detected PIPT I-cache on CPU30
May 8 00:01:55.163866 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000
May 8 00:01:55.163873 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000
May 8 00:01:55.163880 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163888 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1]
May 8 00:01:55.163896 kernel: Detected PIPT I-cache on CPU31
May 8 00:01:55.163904 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000
May 8 00:01:55.163911 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000
May 8 00:01:55.163918 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163926 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1]
May 8 00:01:55.163933 kernel: Detected PIPT I-cache on CPU32
May 8 00:01:55.163940 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000
May 8 00:01:55.163948 kernel: GICv3: CPU32: using allocated LPI pending table @0x00000800009f0000
May 8 00:01:55.163955 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:01:55.163962 kernel: 
CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] May 8 00:01:55.163971 kernel: Detected PIPT I-cache on CPU33 May 8 00:01:55.163979 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 May 8 00:01:55.163986 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 May 8 00:01:55.163993 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164000 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] May 8 00:01:55.164008 kernel: Detected PIPT I-cache on CPU34 May 8 00:01:55.164015 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 May 8 00:01:55.164022 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 May 8 00:01:55.164030 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164038 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] May 8 00:01:55.164046 kernel: Detected PIPT I-cache on CPU35 May 8 00:01:55.164053 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 May 8 00:01:55.164060 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 May 8 00:01:55.164068 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164075 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] May 8 00:01:55.164082 kernel: Detected PIPT I-cache on CPU36 May 8 00:01:55.164089 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 May 8 00:01:55.164097 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 May 8 00:01:55.164105 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164112 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] May 8 00:01:55.164120 kernel: Detected PIPT I-cache on CPU37 May 8 00:01:55.164127 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 May 8 00:01:55.164134 
kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 May 8 00:01:55.164141 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164149 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] May 8 00:01:55.164156 kernel: Detected PIPT I-cache on CPU38 May 8 00:01:55.164164 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 May 8 00:01:55.164172 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 May 8 00:01:55.164180 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164188 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] May 8 00:01:55.164195 kernel: Detected PIPT I-cache on CPU39 May 8 00:01:55.164202 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 May 8 00:01:55.164210 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 May 8 00:01:55.164217 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164224 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] May 8 00:01:55.164231 kernel: Detected PIPT I-cache on CPU40 May 8 00:01:55.164240 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 May 8 00:01:55.164247 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 May 8 00:01:55.164255 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164262 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] May 8 00:01:55.164269 kernel: Detected PIPT I-cache on CPU41 May 8 00:01:55.164276 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 May 8 00:01:55.164284 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000a80000 May 8 00:01:55.164291 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164298 kernel: CPU41: Booted secondary processor 
0x00001a0100 [0x413fd0c1] May 8 00:01:55.164307 kernel: Detected PIPT I-cache on CPU42 May 8 00:01:55.164314 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 May 8 00:01:55.164322 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 May 8 00:01:55.164329 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164336 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] May 8 00:01:55.164343 kernel: Detected PIPT I-cache on CPU43 May 8 00:01:55.164351 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 May 8 00:01:55.164358 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 May 8 00:01:55.164365 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164373 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] May 8 00:01:55.164381 kernel: Detected PIPT I-cache on CPU44 May 8 00:01:55.164389 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 May 8 00:01:55.164396 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 May 8 00:01:55.164403 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164411 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] May 8 00:01:55.164418 kernel: Detected PIPT I-cache on CPU45 May 8 00:01:55.164425 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 May 8 00:01:55.164432 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 May 8 00:01:55.164440 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164448 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] May 8 00:01:55.164456 kernel: Detected PIPT I-cache on CPU46 May 8 00:01:55.164463 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 May 8 00:01:55.164470 kernel: GICv3: CPU46: using 
allocated LPI pending table @0x0000080000ad0000 May 8 00:01:55.164477 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164485 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] May 8 00:01:55.164492 kernel: Detected PIPT I-cache on CPU47 May 8 00:01:55.164499 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 May 8 00:01:55.164506 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 May 8 00:01:55.164514 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164522 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] May 8 00:01:55.164529 kernel: Detected PIPT I-cache on CPU48 May 8 00:01:55.164537 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 May 8 00:01:55.164544 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 May 8 00:01:55.164552 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164559 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] May 8 00:01:55.164566 kernel: Detected PIPT I-cache on CPU49 May 8 00:01:55.164573 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 May 8 00:01:55.164581 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 May 8 00:01:55.164589 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164597 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] May 8 00:01:55.164605 kernel: Detected PIPT I-cache on CPU50 May 8 00:01:55.164612 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 May 8 00:01:55.164620 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000b10000 May 8 00:01:55.164627 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164634 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] May 8 
00:01:55.164642 kernel: Detected PIPT I-cache on CPU51 May 8 00:01:55.164649 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 May 8 00:01:55.164656 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 May 8 00:01:55.164665 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164672 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] May 8 00:01:55.164679 kernel: Detected PIPT I-cache on CPU52 May 8 00:01:55.164687 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 May 8 00:01:55.164694 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 May 8 00:01:55.164701 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164709 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] May 8 00:01:55.164716 kernel: Detected PIPT I-cache on CPU53 May 8 00:01:55.164723 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 May 8 00:01:55.164732 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 May 8 00:01:55.164740 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164747 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] May 8 00:01:55.164754 kernel: Detected PIPT I-cache on CPU54 May 8 00:01:55.164762 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 May 8 00:01:55.164769 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 May 8 00:01:55.164776 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164783 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] May 8 00:01:55.164791 kernel: Detected PIPT I-cache on CPU55 May 8 00:01:55.164798 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 May 8 00:01:55.164818 kernel: GICv3: CPU55: using allocated LPI pending table 
@0x0000080000b60000 May 8 00:01:55.164825 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164832 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] May 8 00:01:55.164839 kernel: Detected PIPT I-cache on CPU56 May 8 00:01:55.164847 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 May 8 00:01:55.164854 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 May 8 00:01:55.164862 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164870 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] May 8 00:01:55.164878 kernel: Detected PIPT I-cache on CPU57 May 8 00:01:55.164886 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 May 8 00:01:55.164894 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 May 8 00:01:55.164901 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164908 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] May 8 00:01:55.164915 kernel: Detected PIPT I-cache on CPU58 May 8 00:01:55.164923 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 May 8 00:01:55.164930 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 May 8 00:01:55.164938 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164945 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] May 8 00:01:55.164954 kernel: Detected PIPT I-cache on CPU59 May 8 00:01:55.164961 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 May 8 00:01:55.164968 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000ba0000 May 8 00:01:55.164975 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.164983 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] May 8 00:01:55.164990 kernel: Detected PIPT 
I-cache on CPU60 May 8 00:01:55.164997 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 May 8 00:01:55.165005 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 May 8 00:01:55.165012 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165019 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] May 8 00:01:55.165028 kernel: Detected PIPT I-cache on CPU61 May 8 00:01:55.165035 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 May 8 00:01:55.165042 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 May 8 00:01:55.165050 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165057 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] May 8 00:01:55.165064 kernel: Detected PIPT I-cache on CPU62 May 8 00:01:55.165071 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 May 8 00:01:55.165079 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 May 8 00:01:55.165086 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165095 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] May 8 00:01:55.165102 kernel: Detected PIPT I-cache on CPU63 May 8 00:01:55.165110 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 May 8 00:01:55.165117 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 May 8 00:01:55.165124 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165131 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] May 8 00:01:55.165139 kernel: Detected PIPT I-cache on CPU64 May 8 00:01:55.165146 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 May 8 00:01:55.165153 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 May 8 00:01:55.165160 
kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165169 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] May 8 00:01:55.165176 kernel: Detected PIPT I-cache on CPU65 May 8 00:01:55.165183 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 May 8 00:01:55.165191 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 May 8 00:01:55.165198 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165205 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] May 8 00:01:55.165212 kernel: Detected PIPT I-cache on CPU66 May 8 00:01:55.165220 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 May 8 00:01:55.165227 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 May 8 00:01:55.165236 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165243 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] May 8 00:01:55.165250 kernel: Detected PIPT I-cache on CPU67 May 8 00:01:55.165258 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 May 8 00:01:55.165265 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 May 8 00:01:55.165272 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165279 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] May 8 00:01:55.165287 kernel: Detected PIPT I-cache on CPU68 May 8 00:01:55.165294 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 May 8 00:01:55.165301 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000c30000 May 8 00:01:55.165310 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165317 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] May 8 00:01:55.165324 kernel: Detected PIPT I-cache on CPU69 May 8 
00:01:55.165331 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 May 8 00:01:55.165339 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 May 8 00:01:55.165346 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165353 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] May 8 00:01:55.165360 kernel: Detected PIPT I-cache on CPU70 May 8 00:01:55.165368 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 May 8 00:01:55.165376 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 May 8 00:01:55.165384 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165391 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] May 8 00:01:55.165398 kernel: Detected PIPT I-cache on CPU71 May 8 00:01:55.165406 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 May 8 00:01:55.165413 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 May 8 00:01:55.165420 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165427 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] May 8 00:01:55.165435 kernel: Detected PIPT I-cache on CPU72 May 8 00:01:55.165442 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 May 8 00:01:55.165450 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 May 8 00:01:55.165458 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165465 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1] May 8 00:01:55.165472 kernel: Detected PIPT I-cache on CPU73 May 8 00:01:55.165479 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 May 8 00:01:55.165487 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 May 8 00:01:55.165494 kernel: arch_timer: Enabling 
local workaround for ARM erratum 1418040 May 8 00:01:55.165501 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] May 8 00:01:55.165509 kernel: Detected PIPT I-cache on CPU74 May 8 00:01:55.165517 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 May 8 00:01:55.165525 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 May 8 00:01:55.165532 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165539 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] May 8 00:01:55.165546 kernel: Detected PIPT I-cache on CPU75 May 8 00:01:55.165554 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 May 8 00:01:55.165561 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 May 8 00:01:55.165568 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165576 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] May 8 00:01:55.165583 kernel: Detected PIPT I-cache on CPU76 May 8 00:01:55.165592 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 May 8 00:01:55.165599 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 May 8 00:01:55.165607 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165614 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] May 8 00:01:55.165621 kernel: Detected PIPT I-cache on CPU77 May 8 00:01:55.165628 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 May 8 00:01:55.165636 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 May 8 00:01:55.165643 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165650 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] May 8 00:01:55.165659 kernel: Detected PIPT I-cache on CPU78 May 8 00:01:55.165666 kernel: GICv3: CPU78: found 
redistributor 10100 region 0:0x00001001001a0000 May 8 00:01:55.165673 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 May 8 00:01:55.165680 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165688 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] May 8 00:01:55.165695 kernel: Detected PIPT I-cache on CPU79 May 8 00:01:55.165703 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 May 8 00:01:55.165710 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 May 8 00:01:55.165717 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:01:55.165726 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] May 8 00:01:55.165733 kernel: smp: Brought up 1 node, 80 CPUs May 8 00:01:55.165740 kernel: SMP: Total of 80 processors activated. May 8 00:01:55.165748 kernel: CPU features: detected: 32-bit EL0 Support May 8 00:01:55.165755 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 8 00:01:55.165762 kernel: CPU features: detected: Common not Private translations May 8 00:01:55.165770 kernel: CPU features: detected: CRC32 instructions May 8 00:01:55.165777 kernel: CPU features: detected: Enhanced Virtualization Traps May 8 00:01:55.165784 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 8 00:01:55.165793 kernel: CPU features: detected: LSE atomic instructions May 8 00:01:55.165800 kernel: CPU features: detected: Privileged Access Never May 8 00:01:55.165822 kernel: CPU features: detected: RAS Extension Support May 8 00:01:55.165830 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 8 00:01:55.165837 kernel: CPU: All CPU(s) started at EL2 May 8 00:01:55.165845 kernel: alternatives: applying system-wide alternatives May 8 00:01:55.165852 kernel: devtmpfs: initialized May 8 00:01:55.165859 kernel: clocksource: jiffies: mask: 0xffffffff 
max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:01:55.165867 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 8 00:01:55.165874 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:01:55.165883 kernel: SMBIOS 3.4.0 present. May 8 00:01:55.165890 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 May 8 00:01:55.165898 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:01:55.165905 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations May 8 00:01:55.165913 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 8 00:01:55.165920 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 8 00:01:55.165927 kernel: audit: initializing netlink subsys (disabled) May 8 00:01:55.165935 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 May 8 00:01:55.165944 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:01:55.165951 kernel: cpuidle: using governor menu May 8 00:01:55.165958 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 8 00:01:55.165966 kernel: ASID allocator initialised with 32768 entries May 8 00:01:55.165973 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:01:55.165980 kernel: Serial: AMBA PL011 UART driver May 8 00:01:55.165988 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 8 00:01:55.165995 kernel: Modules: 0 pages in range for non-PLT usage May 8 00:01:55.166003 kernel: Modules: 509264 pages in range for PLT usage May 8 00:01:55.166011 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:01:55.166019 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 8 00:01:55.166027 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 8 00:01:55.166034 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 8 00:01:55.166041 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:01:55.166049 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 8 00:01:55.166056 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 8 00:01:55.166063 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 8 00:01:55.166070 kernel: ACPI: Added _OSI(Module Device) May 8 00:01:55.166079 kernel: ACPI: Added _OSI(Processor Device) May 8 00:01:55.166086 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:01:55.166094 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:01:55.166101 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded May 8 00:01:55.166108 kernel: ACPI: Interpreter enabled May 8 00:01:55.166116 kernel: ACPI: Using GIC for interrupt routing May 8 00:01:55.166123 kernel: ACPI: MCFG table detected, 8 entries May 8 00:01:55.166130 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 May 8 00:01:55.166137 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 May 8 00:01:55.166145 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity 
domain 0 May 8 00:01:55.166154 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 May 8 00:01:55.166161 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 May 8 00:01:55.166168 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 May 8 00:01:55.166176 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 May 8 00:01:55.166183 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 May 8 00:01:55.166190 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA May 8 00:01:55.166198 kernel: printk: console [ttyAMA0] enabled May 8 00:01:55.166205 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA May 8 00:01:55.166214 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) May 8 00:01:55.166343 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:01:55.166414 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] May 8 00:01:55.166479 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] May 8 00:01:55.166541 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 8 00:01:55.166603 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 May 8 00:01:55.166664 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] May 8 00:01:55.166676 kernel: PCI host bridge to bus 000d:00 May 8 00:01:55.166745 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] May 8 00:01:55.166807 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] May 8 00:01:55.166867 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] May 8 00:01:55.166946 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 May 8 00:01:55.167019 kernel: pci 000d:00:01.0: 
[1def:e101] type 01 class 0x060400 May 8 00:01:55.167089 kernel: pci 000d:00:01.0: enabling Extended Tags May 8 00:01:55.167153 kernel: pci 000d:00:01.0: supports D1 D2 May 8 00:01:55.167217 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot May 8 00:01:55.167289 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 May 8 00:01:55.167353 kernel: pci 000d:00:02.0: supports D1 D2 May 8 00:01:55.167419 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot May 8 00:01:55.167490 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 May 8 00:01:55.167558 kernel: pci 000d:00:03.0: supports D1 D2 May 8 00:01:55.167621 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot May 8 00:01:55.167693 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 May 8 00:01:55.167756 kernel: pci 000d:00:04.0: supports D1 D2 May 8 00:01:55.167824 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot May 8 00:01:55.167833 kernel: acpiphp: Slot [1] registered May 8 00:01:55.167841 kernel: acpiphp: Slot [2] registered May 8 00:01:55.167851 kernel: acpiphp: Slot [3] registered May 8 00:01:55.167858 kernel: acpiphp: Slot [4] registered May 8 00:01:55.167915 kernel: pci_bus 000d:00: on NUMA node 0 May 8 00:01:55.167980 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 8 00:01:55.168044 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 8 00:01:55.168108 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 8 00:01:55.168173 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 8 00:01:55.168238 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 8 00:01:55.168306 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 
100000 May 8 00:01:55.168371 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 8 00:01:55.168434 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 8 00:01:55.168498 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 8 00:01:55.168561 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 8 00:01:55.168624 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 8 00:01:55.168689 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 8 00:01:55.168754 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] May 8 00:01:55.168820 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] May 8 00:01:55.168885 kernel: pci 000d:00:02.0: BAR 14: assigned [mem 0x50200000-0x503fffff] May 8 00:01:55.168948 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] May 8 00:01:55.169010 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] May 8 00:01:55.169074 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] May 8 00:01:55.169136 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] May 8 00:01:55.169202 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] May 8 00:01:55.169264 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.169327 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.169389 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.169452 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.169516 kernel: pci 
000d:00:03.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.169579 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.169642 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.169707 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.169770 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.169835 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.169899 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.169963 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.170025 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.170089 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.170152 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.170218 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.170281 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] May 8 00:01:55.170344 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] May 8 00:01:55.170406 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] May 8 00:01:55.170469 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] May 8 00:01:55.170533 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] May 8 00:01:55.170596 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] May 8 00:01:55.170661 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] May 8 00:01:55.170723 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] May 8 00:01:55.170787 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] May 8 00:01:55.170872 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] May 8 00:01:55.170934 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] May 8 
00:01:55.170996 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] May 8 00:01:55.171055 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] May 8 00:01:55.171110 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] May 8 00:01:55.171179 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] May 8 00:01:55.171238 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] May 8 00:01:55.171307 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff] May 8 00:01:55.171367 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] May 8 00:01:55.171443 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] May 8 00:01:55.171503 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] May 8 00:01:55.171567 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] May 8 00:01:55.171626 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] May 8 00:01:55.171636 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) May 8 00:01:55.171703 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:01:55.171770 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] May 8 00:01:55.171835 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] May 8 00:01:55.171895 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 8 00:01:55.171957 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 May 8 00:01:55.172017 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] May 8 00:01:55.172026 kernel: PCI host bridge to bus 0000:00 May 8 00:01:55.172091 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] May 8 
00:01:55.172150 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] May 8 00:01:55.172206 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 8 00:01:55.172277 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 May 8 00:01:55.172348 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 May 8 00:01:55.172413 kernel: pci 0000:00:01.0: enabling Extended Tags May 8 00:01:55.172476 kernel: pci 0000:00:01.0: supports D1 D2 May 8 00:01:55.172539 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot May 8 00:01:55.172613 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 May 8 00:01:55.172678 kernel: pci 0000:00:02.0: supports D1 D2 May 8 00:01:55.172740 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot May 8 00:01:55.172813 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 May 8 00:01:55.172877 kernel: pci 0000:00:03.0: supports D1 D2 May 8 00:01:55.172939 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot May 8 00:01:55.173008 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 May 8 00:01:55.173074 kernel: pci 0000:00:04.0: supports D1 D2 May 8 00:01:55.173137 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot May 8 00:01:55.173146 kernel: acpiphp: Slot [1-1] registered May 8 00:01:55.173154 kernel: acpiphp: Slot [2-1] registered May 8 00:01:55.173161 kernel: acpiphp: Slot [3-1] registered May 8 00:01:55.173168 kernel: acpiphp: Slot [4-1] registered May 8 00:01:55.173223 kernel: pci_bus 0000:00: on NUMA node 0 May 8 00:01:55.173286 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 8 00:01:55.173351 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 8 00:01:55.173416 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 8 00:01:55.173480 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x0fff] to [bus 02] add_size 1000 May 8 00:01:55.173544 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 8 00:01:55.173607 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 8 00:01:55.173670 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 8 00:01:55.173733 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 8 00:01:55.173797 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 8 00:01:55.173866 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 8 00:01:55.173929 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 8 00:01:55.173992 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 8 00:01:55.174056 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] May 8 00:01:55.174120 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 8 00:01:55.174184 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] May 8 00:01:55.174248 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 8 00:01:55.174312 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] May 8 00:01:55.174375 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 8 00:01:55.174438 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] May 8 00:01:55.174500 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 8 00:01:55.174562 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 
0x1000] May 8 00:01:55.174625 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.174689 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.174752 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.174816 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.174882 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.174943 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.175006 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.175070 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.175131 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.175195 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.175259 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.175322 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.175384 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.175447 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.175510 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.175572 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 8 00:01:55.175636 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] May 8 00:01:55.175698 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 8 00:01:55.175764 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] May 8 00:01:55.175828 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] May 8 00:01:55.175892 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 8 00:01:55.175955 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] May 8 00:01:55.176020 kernel: pci 0000:00:03.0: bridge 
window [mem 0x70400000-0x705fffff] May 8 00:01:55.176084 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 8 00:01:55.176146 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] May 8 00:01:55.176210 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] May 8 00:01:55.176273 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 8 00:01:55.176334 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] May 8 00:01:55.176390 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] May 8 00:01:55.176458 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] May 8 00:01:55.176517 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 8 00:01:55.176583 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] May 8 00:01:55.176642 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 8 00:01:55.176715 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] May 8 00:01:55.176778 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 8 00:01:55.176848 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] May 8 00:01:55.176907 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 8 00:01:55.176917 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) May 8 00:01:55.176984 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:01:55.177046 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] May 8 00:01:55.177109 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] May 8 00:01:55.177170 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 8 00:01:55.177231 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] 
reserved by PNP0C02:00 May 8 00:01:55.177290 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] May 8 00:01:55.177300 kernel: PCI host bridge to bus 0005:00 May 8 00:01:55.177366 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] May 8 00:01:55.177422 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] May 8 00:01:55.177480 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] May 8 00:01:55.177554 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 May 8 00:01:55.177624 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 May 8 00:01:55.177689 kernel: pci 0005:00:01.0: supports D1 D2 May 8 00:01:55.177751 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot May 8 00:01:55.177827 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 May 8 00:01:55.177890 kernel: pci 0005:00:03.0: supports D1 D2 May 8 00:01:55.177957 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot May 8 00:01:55.178028 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 May 8 00:01:55.178090 kernel: pci 0005:00:05.0: supports D1 D2 May 8 00:01:55.178153 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot May 8 00:01:55.178222 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 May 8 00:01:55.178285 kernel: pci 0005:00:07.0: supports D1 D2 May 8 00:01:55.178351 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot May 8 00:01:55.178361 kernel: acpiphp: Slot [1-2] registered May 8 00:01:55.178368 kernel: acpiphp: Slot [2-2] registered May 8 00:01:55.178443 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 May 8 00:01:55.178509 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] May 8 00:01:55.178575 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] May 8 00:01:55.178649 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 May 8 00:01:55.178714 kernel: pci 0005:04:00.0: reg 0x10: [mem 
0x30010000-0x30013fff 64bit] May 8 00:01:55.178784 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] May 8 00:01:55.178846 kernel: pci_bus 0005:00: on NUMA node 0 May 8 00:01:55.178912 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 8 00:01:55.178976 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 8 00:01:55.179039 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 8 00:01:55.179105 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 8 00:01:55.179167 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 8 00:01:55.179233 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 8 00:01:55.179297 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 8 00:01:55.179361 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 8 00:01:55.179423 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 8 00:01:55.179488 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 8 00:01:55.179553 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 8 00:01:55.179618 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 May 8 00:01:55.179694 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] May 8 00:01:55.179770 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 8 00:01:55.179838 kernel: pci 0005:00:03.0: BAR 14: 
assigned [mem 0x30200000-0x303fffff] May 8 00:01:55.179901 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 8 00:01:55.179973 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] May 8 00:01:55.180038 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 8 00:01:55.180100 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] May 8 00:01:55.180168 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 8 00:01:55.180231 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.180294 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.180356 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.180421 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.180484 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.180546 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.180609 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.180674 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.180736 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.180798 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.180867 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.180929 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.180993 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.181057 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.181119 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.181182 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.181247 kernel: 
pci 0005:00:01.0: PCI bridge to [bus 01] May 8 00:01:55.181310 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] May 8 00:01:55.181374 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 8 00:01:55.181438 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] May 8 00:01:55.181501 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] May 8 00:01:55.181564 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 8 00:01:55.181635 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] May 8 00:01:55.181699 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] May 8 00:01:55.181763 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] May 8 00:01:55.181831 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] May 8 00:01:55.181896 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 8 00:01:55.181965 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] May 8 00:01:55.182029 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] May 8 00:01:55.182096 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] May 8 00:01:55.182158 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] May 8 00:01:55.182222 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 8 00:01:55.182281 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] May 8 00:01:55.182337 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] May 8 00:01:55.182407 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] May 8 00:01:55.182467 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 8 00:01:55.182543 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] May 8 00:01:55.182602 kernel: pci_bus 0005:02: resource 2 [mem 
0x2c0000200000-0x2c00003fffff 64bit pref] May 8 00:01:55.182669 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] May 8 00:01:55.182728 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 8 00:01:55.182794 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] May 8 00:01:55.182860 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 8 00:01:55.182870 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) May 8 00:01:55.182941 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:01:55.183003 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] May 8 00:01:55.183063 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] May 8 00:01:55.183125 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 8 00:01:55.183185 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 May 8 00:01:55.183251 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] May 8 00:01:55.183261 kernel: PCI host bridge to bus 0003:00 May 8 00:01:55.183324 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] May 8 00:01:55.183383 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] May 8 00:01:55.183439 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] May 8 00:01:55.183510 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 May 8 00:01:55.183581 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 May 8 00:01:55.183649 kernel: pci 0003:00:01.0: supports D1 D2 May 8 00:01:55.183714 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot May 8 00:01:55.183784 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 May 8 00:01:55.183852 kernel: pci 0003:00:03.0: supports D1 D2 May 8 
00:01:55.183916 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot May 8 00:01:55.183992 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 May 8 00:01:55.184062 kernel: pci 0003:00:05.0: supports D1 D2 May 8 00:01:55.184125 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot May 8 00:01:55.184135 kernel: acpiphp: Slot [1-3] registered May 8 00:01:55.184142 kernel: acpiphp: Slot [2-3] registered May 8 00:01:55.184214 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 May 8 00:01:55.184284 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] May 8 00:01:55.184353 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] May 8 00:01:55.184418 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] May 8 00:01:55.184484 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold May 8 00:01:55.184551 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] May 8 00:01:55.184616 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) May 8 00:01:55.184681 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] May 8 00:01:55.184746 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) May 8 00:01:55.184900 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) May 8 00:01:55.184984 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 May 8 00:01:55.185055 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] May 8 00:01:55.185119 kernel: pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] May 8 00:01:55.185184 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] May 8 00:01:55.185248 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold May 8 00:01:55.185311 kernel: pci 0003:03:00.1: reg 0x184: [mem 
0x240000020000-0x240000023fff 64bit pref] May 8 00:01:55.185376 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) May 8 00:01:55.185440 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] May 8 00:01:55.185507 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) May 8 00:01:55.185565 kernel: pci_bus 0003:00: on NUMA node 0 May 8 00:01:55.185629 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 8 00:01:55.185691 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 8 00:01:55.185754 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 8 00:01:55.185820 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 8 00:01:55.185882 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 8 00:01:55.185947 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 8 00:01:55.186011 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 May 8 00:01:55.186074 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 May 8 00:01:55.186135 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] May 8 00:01:55.186208 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] May 8 00:01:55.186272 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] May 8 00:01:55.186334 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] May 8 00:01:55.186395 kernel: pci 
0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] May 8 00:01:55.186460 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] May 8 00:01:55.186522 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.186584 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.186646 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.186707 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.186769 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.186834 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.186897 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.186960 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.187022 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.187084 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.187145 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 8 00:01:55.187207 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 8 00:01:55.187268 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] May 8 00:01:55.187330 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] May 8 00:01:55.187392 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] May 8 00:01:55.187458 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] May 8 00:01:55.187523 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] May 8 00:01:55.187585 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] May 8 00:01:55.187650 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] May 8 00:01:55.187714 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] May 8 00:01:55.187781 kernel: pci 0003:03:00.0: 
BAR 3: assigned [mem 0x10440000-0x10443fff]
May 8 00:01:55.187851 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref]
May 8 00:01:55.187915 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref]
May 8 00:01:55.187980 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff]
May 8 00:01:55.188043 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref]
May 8 00:01:55.188107 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref]
May 8 00:01:55.188170 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020]
May 8 00:01:55.188235 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020]
May 8 00:01:55.188303 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
May 8 00:01:55.188366 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
May 8 00:01:55.188430 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020]
May 8 00:01:55.188494 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020]
May 8 00:01:55.188557 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
May 8 00:01:55.188621 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
May 8 00:01:55.188684 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04]
May 8 00:01:55.188746 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff]
May 8 00:01:55.188813 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref]
May 8 00:01:55.188872 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc
May 8 00:01:55.188927 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window]
May 8 00:01:55.188983 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window]
May 8 00:01:55.189059 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff]
May 8 00:01:55.189118 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref]
May 8 00:01:55.189187 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff]
May 8 00:01:55.189246 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref]
May 8 00:01:55.189311 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff]
May 8 00:01:55.189370 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref]
May 8 00:01:55.189380 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff])
May 8 00:01:55.189449 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:01:55.189514 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR]
May 8 00:01:55.189577 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability]
May 8 00:01:55.189639 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 8 00:01:55.189700 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00
May 8 00:01:55.189761 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff]
May 8 00:01:55.189772 kernel: PCI host bridge to bus 000c:00
May 8 00:01:55.189838 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window]
May 8 00:01:55.189899 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window]
May 8 00:01:55.189954 kernel: pci_bus 000c:00: root bus resource [bus 00-ff]
May 8 00:01:55.190027 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000
May 8 00:01:55.190099 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400
May 8 00:01:55.190163 kernel: pci 000c:00:01.0: enabling Extended Tags
May 8 00:01:55.190227 kernel: pci 000c:00:01.0: supports D1 D2
May 8 00:01:55.190289 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot
May 8 00:01:55.190365 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400
May 8 00:01:55.190428 kernel: pci 000c:00:02.0: supports D1 D2
May 8 00:01:55.190491 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot
May 8 00:01:55.190563 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400
May 8 00:01:55.190627 kernel: pci 000c:00:03.0: supports D1 D2
May 8 00:01:55.190691 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot
May 8 00:01:55.190760 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400
May 8 00:01:55.190831 kernel: pci 000c:00:04.0: supports D1 D2
May 8 00:01:55.190895 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot
May 8 00:01:55.190905 kernel: acpiphp: Slot [1-4] registered
May 8 00:01:55.190913 kernel: acpiphp: Slot [2-4] registered
May 8 00:01:55.190920 kernel: acpiphp: Slot [3-2] registered
May 8 00:01:55.190928 kernel: acpiphp: Slot [4-2] registered
May 8 00:01:55.190984 kernel: pci_bus 000c:00: on NUMA node 0
May 8 00:01:55.191048 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 8 00:01:55.191114 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 8 00:01:55.191178 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 8 00:01:55.191244 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 8 00:01:55.191311 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 8 00:01:55.191378 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 8 00:01:55.191443 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 8 00:01:55.191506 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 8 00:01:55.191573 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 8 00:01:55.191637 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 8 00:01:55.191701 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 8 00:01:55.191764 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 8 00:01:55.191830 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff]
May 8 00:01:55.191893 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref]
May 8 00:01:55.191956 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff]
May 8 00:01:55.192021 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref]
May 8 00:01:55.192084 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff]
May 8 00:01:55.192148 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref]
May 8 00:01:55.192210 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff]
May 8 00:01:55.192274 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref]
May 8 00:01:55.192338 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.192401 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.192466 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.192531 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.192595 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.192658 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.192721 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.192785 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.192850 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.192914 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.192977 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.193040 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.193106 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.193170 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.193231 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.193296 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.193360 kernel: pci 000c:00:01.0: PCI bridge to [bus 01]
May 8 00:01:55.193423 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff]
May 8 00:01:55.193486 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref]
May 8 00:01:55.193552 kernel: pci 000c:00:02.0: PCI bridge to [bus 02]
May 8 00:01:55.193617 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff]
May 8 00:01:55.193680 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref]
May 8 00:01:55.193744 kernel: pci 000c:00:03.0: PCI bridge to [bus 03]
May 8 00:01:55.193810 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff]
May 8 00:01:55.193873 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref]
May 8 00:01:55.193937 kernel: pci 000c:00:04.0: PCI bridge to [bus 04]
May 8 00:01:55.194003 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff]
May 8 00:01:55.194067 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref]
May 8 00:01:55.194124 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window]
May 8 00:01:55.194181 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window]
May 8 00:01:55.194250 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff]
May 8 00:01:55.194311 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref]
May 8 00:01:55.194388 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff]
May 8 00:01:55.194448 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref]
May 8 00:01:55.194514 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff]
May 8 00:01:55.194573 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref]
May 8 00:01:55.194639 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff]
May 8 00:01:55.194698 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref]
May 8 00:01:55.194711 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff])
May 8 00:01:55.194780 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:01:55.194847 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR]
May 8 00:01:55.194908 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability]
May 8 00:01:55.194969 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 8 00:01:55.195029 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00
May 8 00:01:55.195090 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff]
May 8 00:01:55.195103 kernel: PCI host bridge to bus 0002:00
May 8 00:01:55.195169 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window]
May 8 00:01:55.195227 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window]
May 8 00:01:55.195283 kernel: pci_bus 0002:00: root bus resource [bus 00-ff]
May 8 00:01:55.195353 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000
May 8 00:01:55.195427 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400
May 8 00:01:55.195491 kernel: pci 0002:00:01.0: supports D1 D2
May 8 00:01:55.195558 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot
May 8 00:01:55.195627 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400
May 8 00:01:55.195692 kernel: pci 0002:00:03.0: supports D1 D2
May 8 00:01:55.195754 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot
May 8 00:01:55.195828 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400
May 8 00:01:55.195894 kernel: pci 0002:00:05.0: supports D1 D2
May 8 00:01:55.195956 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot
May 8 00:01:55.196030 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400
May 8 00:01:55.196093 kernel: pci 0002:00:07.0: supports D1 D2
May 8 00:01:55.196156 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot
May 8 00:01:55.196166 kernel: acpiphp: Slot [1-5] registered
May 8 00:01:55.196174 kernel: acpiphp: Slot [2-5] registered
May 8 00:01:55.196182 kernel: acpiphp: Slot [3-3] registered
May 8 00:01:55.196190 kernel: acpiphp: Slot [4-3] registered
May 8 00:01:55.196245 kernel: pci_bus 0002:00: on NUMA node 0
May 8 00:01:55.196312 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 8 00:01:55.196376 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 8 00:01:55.196439 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 8 00:01:55.196506 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 8 00:01:55.196572 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 8 00:01:55.196635 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 8 00:01:55.196699 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 8 00:01:55.196763 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 8 00:01:55.196829 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 8 00:01:55.196893 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 8 00:01:55.196957 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 8 00:01:55.197023 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 8 00:01:55.197087 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff]
May 8 00:01:55.197150 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref]
May 8 00:01:55.197215 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff]
May 8 00:01:55.197278 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref]
May 8 00:01:55.197341 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff]
May 8 00:01:55.197405 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref]
May 8 00:01:55.197471 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff]
May 8 00:01:55.197534 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref]
May 8 00:01:55.197596 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.197660 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.197723 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.197786 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.197852 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.197916 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.197981 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.198047 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.198112 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.198174 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.198238 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.198302 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.198365 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.198429 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.198492 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.198557 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.198622 kernel: pci 0002:00:01.0: PCI bridge to [bus 01]
May 8 00:01:55.198688 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff]
May 8 00:01:55.198753 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref]
May 8 00:01:55.198820 kernel: pci 0002:00:03.0: PCI bridge to [bus 02]
May 8 00:01:55.198884 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff]
May 8 00:01:55.198948 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref]
May 8 00:01:55.199015 kernel: pci 0002:00:05.0: PCI bridge to [bus 03]
May 8 00:01:55.199077 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff]
May 8 00:01:55.199142 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref]
May 8 00:01:55.199204 kernel: pci 0002:00:07.0: PCI bridge to [bus 04]
May 8 00:01:55.199268 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff]
May 8 00:01:55.199331 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref]
May 8 00:01:55.199393 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window]
May 8 00:01:55.199449 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window]
May 8 00:01:55.199518 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff]
May 8 00:01:55.199578 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref]
May 8 00:01:55.199645 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff]
May 8 00:01:55.199705 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref]
May 8 00:01:55.199795 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff]
May 8 00:01:55.200071 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref]
May 8 00:01:55.200141 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff]
May 8 00:01:55.200198 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref]
May 8 00:01:55.200209 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff])
May 8 00:01:55.200276 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:01:55.200337 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR]
May 8 00:01:55.200401 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability]
May 8 00:01:55.200461 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 8 00:01:55.200520 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00
May 8 00:01:55.200581 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff]
May 8 00:01:55.200591 kernel: PCI host bridge to bus 0001:00
May 8 00:01:55.200653 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window]
May 8 00:01:55.200712 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window]
May 8 00:01:55.200768 kernel: pci_bus 0001:00: root bus resource [bus 00-ff]
May 8 00:01:55.200842 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000
May 8 00:01:55.200913 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400
May 8 00:01:55.200976 kernel: pci 0001:00:01.0: enabling Extended Tags
May 8 00:01:55.201038 kernel: pci 0001:00:01.0: supports D1 D2
May 8 00:01:55.201100 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot
May 8 00:01:55.201174 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400
May 8 00:01:55.201237 kernel: pci 0001:00:02.0: supports D1 D2
May 8 00:01:55.201298 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot
May 8 00:01:55.201367 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400
May 8 00:01:55.201430 kernel: pci 0001:00:03.0: supports D1 D2
May 8 00:01:55.201492 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot
May 8 00:01:55.201561 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400
May 8 00:01:55.201625 kernel: pci 0001:00:04.0: supports D1 D2
May 8 00:01:55.201689 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot
May 8 00:01:55.201699 kernel: acpiphp: Slot [1-6] registered
May 8 00:01:55.201769 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000
May 8 00:01:55.201843 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref]
May 8 00:01:55.201908 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref]
May 8 00:01:55.201973 kernel: pci 0001:01:00.0: PME# supported from D3cold
May 8 00:01:55.202040 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 8 00:01:55.202114 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000
May 8 00:01:55.202179 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref]
May 8 00:01:55.202243 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref]
May 8 00:01:55.202307 kernel: pci 0001:01:00.1: PME# supported from D3cold
May 8 00:01:55.202317 kernel: acpiphp: Slot [2-6] registered
May 8 00:01:55.202325 kernel: acpiphp: Slot [3-4] registered
May 8 00:01:55.202334 kernel: acpiphp: Slot [4-4] registered
May 8 00:01:55.202391 kernel: pci_bus 0001:00: on NUMA node 0
May 8 00:01:55.202454 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 8 00:01:55.202518 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 8 00:01:55.202580 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 8 00:01:55.202643 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 8 00:01:55.202705 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 8 00:01:55.202767 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 8 00:01:55.202849 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 8 00:01:55.202915 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 8 00:01:55.202977 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 8 00:01:55.203039 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 8 00:01:55.203102 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref]
May 8 00:01:55.203165 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff]
May 8 00:01:55.203228 kernel: pci 0001:00:02.0: BAR 14: assigned [mem 0x60200000-0x603fffff]
May 8 00:01:55.203292 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref]
May 8 00:01:55.203355 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff]
May 8 00:01:55.203418 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref]
May 8 00:01:55.203479 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff]
May 8 00:01:55.203541 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref]
May 8 00:01:55.203603 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.203665 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.203729 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.203791 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.203898 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.203961 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.204023 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.204084 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.204146 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.204206 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.204271 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.204334 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.204395 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.204456 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.204519 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.204581 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.204645 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref]
May 8 00:01:55.204711 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref]
May 8 00:01:55.204774 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref]
May 8 00:01:55.204844 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref]
May 8 00:01:55.204906 kernel: pci 0001:00:01.0: PCI bridge to [bus 01]
May 8 00:01:55.204968 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff]
May 8 00:01:55.205030 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref]
May 8 00:01:55.205092 kernel: pci 0001:00:02.0: PCI bridge to [bus 02]
May 8 00:01:55.205153 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff]
May 8 00:01:55.205215 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref]
May 8 00:01:55.205279 kernel: pci 0001:00:03.0: PCI bridge to [bus 03]
May 8 00:01:55.205341 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff]
May 8 00:01:55.205403 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref]
May 8 00:01:55.205465 kernel: pci 0001:00:04.0: PCI bridge to [bus 04]
May 8 00:01:55.205527 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff]
May 8 00:01:55.205590 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref]
May 8 00:01:55.205649 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window]
May 8 00:01:55.205704 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window]
May 8 00:01:55.205778 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff]
May 8 00:01:55.205841 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref]
May 8 00:01:55.205906 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff]
May 8 00:01:55.205964 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref]
May 8 00:01:55.206032 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff]
May 8 00:01:55.206091 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref]
May 8 00:01:55.206156 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff]
May 8 00:01:55.206213 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref]
May 8 00:01:55.206223 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff])
May 8 00:01:55.206292 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:01:55.206355 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR]
May 8 00:01:55.206416 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability]
May 8 00:01:55.206475 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 8 00:01:55.206535 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00
May 8 00:01:55.206595 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff]
May 8 00:01:55.206606 kernel: PCI host bridge to bus 0004:00
May 8 00:01:55.206668 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window]
May 8 00:01:55.206727 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window]
May 8 00:01:55.206782 kernel: pci_bus 0004:00: root bus resource [bus 00-ff]
May 8 00:01:55.206858 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000
May 8 00:01:55.206927 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400
May 8 00:01:55.206991 kernel: pci 0004:00:01.0: supports D1 D2
May 8 00:01:55.207052 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot
May 8 00:01:55.207121 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400
May 8 00:01:55.207189 kernel: pci 0004:00:03.0: supports D1 D2
May 8 00:01:55.207251 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot
May 8 00:01:55.207321 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400
May 8 00:01:55.207384 kernel: pci 0004:00:05.0: supports D1 D2
May 8 00:01:55.207446 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot
May 8 00:01:55.207518 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400
May 8 00:01:55.207583 kernel: pci 0004:01:00.0: enabling Extended Tags
May 8 00:01:55.207650 kernel: pci 0004:01:00.0: supports D1 D2
May 8 00:01:55.207714 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 8 00:01:55.207790 kernel: pci_bus 0004:02: extended config space not accessible
May 8 00:01:55.207868 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000
May 8 00:01:55.207936 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff]
May 8 00:01:55.208003 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff]
May 8 00:01:55.208071 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f]
May 8 00:01:55.208140 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb
May 8 00:01:55.208207 kernel: pci 0004:02:00.0: supports D1 D2
May 8 00:01:55.208273 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 8 00:01:55.208345 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330
May 8 00:01:55.208410 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit]
May 8 00:01:55.208474 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold
May 8 00:01:55.208531 kernel: pci_bus 0004:00: on NUMA node 0
May 8 00:01:55.208597 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000
May 8 00:01:55.208661 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 8 00:01:55.208723 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 8 00:01:55.208785 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 8 00:01:55.208851 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 8 00:01:55.208914 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 8 00:01:55.208976 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 8 00:01:55.209040 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff]
May 8 00:01:55.209104 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref]
May 8 00:01:55.209167 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff]
May 8 00:01:55.209229 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref]
May 8 00:01:55.209291 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff]
May 8 00:01:55.209354 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref]
May 8 00:01:55.209415 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.209478 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.209542 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.209604 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.209666 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.209728 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.209790 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.209855 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.209918 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.209980 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.210045 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.210106 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.210171 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff]
May 8 00:01:55.210236 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000]
May 8 00:01:55.210299 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000]
May 8 00:01:55.210366 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff]
May 8 00:01:55.210434 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff]
May 8 00:01:55.210500 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080]
May 8 00:01:55.210569 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080]
May 8 00:01:55.210633 kernel: pci 0004:01:00.0: PCI bridge to [bus 02]
May 8 00:01:55.210697 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff]
May 8 00:01:55.210759 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02]
May 8 00:01:55.210825 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff]
May 8 00:01:55.210889 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref]
May 8 00:01:55.210953 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit]
May 8 00:01:55.211016 kernel: pci 0004:00:03.0: PCI bridge to [bus 03]
May 8 00:01:55.211081 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff]
May 8 00:01:55.211143 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref]
May 8 00:01:55.211207 kernel: pci 0004:00:05.0: PCI bridge to [bus 04]
May 8 00:01:55.211268 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff]
May 8 00:01:55.211331 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref]
May 8 00:01:55.211387 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc
May 8 00:01:55.211445 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window]
May 8 00:01:55.211501 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window]
May 8 00:01:55.211569 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff]
May 8 00:01:55.211627 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref]
May 8 00:01:55.211689 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff]
May 8 00:01:55.211755 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff]
May 8 00:01:55.211815 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref]
May 8 00:01:55.211885 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff]
May 8 00:01:55.211945 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref]
May 8 00:01:55.211955 kernel: iommu: Default domain type: Translated
May 8 00:01:55.211963 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 8 00:01:55.211971 kernel: efivars: Registered efivars operations
May 8 00:01:55.212037 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device
May 8 00:01:55.212104 kernel: pci 0004:02:00.0: vgaarb: bridge control possible
May 8 00:01:55.212173 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none
May 8 00:01:55.212183 kernel: vgaarb: loaded
May 8 00:01:55.212191 kernel: clocksource: Switched to clocksource arch_sys_counter
May 8 00:01:55.212199 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:01:55.212206 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:01:55.212214 kernel: pnp: PnP ACPI init
May 8 00:01:55.212282 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved
May 8 00:01:55.212340 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved
May 8 00:01:55.212400 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved
May 8 00:01:55.212456 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved
May 8 00:01:55.212513 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved
May 8 00:01:55.212570 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved
May 8 00:01:55.212627 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved
May 8 00:01:55.212684 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved
May 8 00:01:55.212696 kernel: pnp: PnP ACPI: found 1 devices
May 8 00:01:55.212704 kernel: NET: Registered PF_INET protocol family
May 8 00:01:55.212712 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:01:55.212720 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear)
May 8 00:01:55.212728 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:01:55.212735 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:01:55.212743 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 8 00:01:55.212751 kernel: TCP: Hash tables configured (established 524288 bind 65536)
May 8 00:01:55.212759 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 8 00:01:55.212768 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 8 00:01:55.212776 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:01:55.212845 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes
May 8 00:01:55.212855 kernel: kvm [1]: IPA Size Limit: 48 bits
May 8 00:01:55.212863 kernel: kvm [1]: GICv3: no GICV resource entry
May 8 00:01:55.212871 kernel: kvm [1]: disabling GICv2 emulation
May 8 00:01:55.212879 kernel: kvm [1]: GIC system register CPU interface enabled
May 8 00:01:55.212886 kernel: kvm [1]: vgic interrupt IRQ9
May 8 00:01:55.212894 kernel: kvm [1]: VHE mode initialized successfully
May 8 00:01:55.212904 kernel: Initialise system trusted keyrings
May 8 00:01:55.212911 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0
May 8 00:01:55.212919 kernel: Key type asymmetric registered
May 8 00:01:55.212926 kernel: Asymmetric key parser 'x509' registered
May 8 00:01:55.212934 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 8 00:01:55.212942 kernel: io scheduler mq-deadline registered
May 8 00:01:55.212949 kernel: io scheduler kyber registered
May 8 00:01:55.212957 kernel: io scheduler bfq registered
May 8 00:01:55.212965 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 8 00:01:55.212974 kernel: ACPI: button: Power Button [PWRB]
May 8 00:01:55.212982 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s).
May 8 00:01:55.212990 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:01:55.213062 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0
May 8 00:01:55.213122 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false)
May 8 00:01:55.213181 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 8 00:01:55.213239 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq
May 8 00:01:55.213297 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq
May 8 00:01:55.213358 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq
May 8 00:01:55.213424 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0
May 8 00:01:55.213483 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false)
May 8 00:01:55.213540 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 8 00:01:55.213599 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq
May 8 00:01:55.213656 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq
May 8 00:01:55.213719
kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq May 8 00:01:55.213785 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 May 8 00:01:55.213847 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) May 8 00:01:55.213906 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 8 00:01:55.213963 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq May 8 00:01:55.214022 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq May 8 00:01:55.214080 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq May 8 00:01:55.214149 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 May 8 00:01:55.214208 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) May 8 00:01:55.214268 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 8 00:01:55.214327 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq May 8 00:01:55.214385 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq May 8 00:01:55.214444 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq May 8 00:01:55.214518 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 May 8 00:01:55.214581 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) May 8 00:01:55.214639 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 8 00:01:55.214700 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq May 8 00:01:55.214760 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq May 8 00:01:55.215045 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq May 8 00:01:55.215124 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 May 8 00:01:55.215188 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by 
FW configuration (false) May 8 00:01:55.215246 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 8 00:01:55.215303 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq May 8 00:01:55.215361 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 131072 entries for evtq May 8 00:01:55.215418 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq May 8 00:01:55.215483 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 May 8 00:01:55.215544 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) May 8 00:01:55.215602 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 8 00:01:55.215661 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq May 8 00:01:55.215720 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq May 8 00:01:55.215778 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq May 8 00:01:55.215854 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 May 8 00:01:55.215918 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) May 8 00:01:55.215975 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 8 00:01:55.216033 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq May 8 00:01:55.216091 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq May 8 00:01:55.216148 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq May 8 00:01:55.216158 kernel: thunder_xcv, ver 1.0 May 8 00:01:55.216166 kernel: thunder_bgx, ver 1.0 May 8 00:01:55.216174 kernel: nicpf, ver 1.0 May 8 00:01:55.216184 kernel: nicvf, ver 1.0 May 8 00:01:55.216249 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 8 00:01:55.216307 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T00:01:53 UTC (1746662513) May 8 00:01:55.216317 kernel: efifb: 
probing for efifb May 8 00:01:55.216325 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k May 8 00:01:55.216333 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 8 00:01:55.216341 kernel: efifb: scrolling: redraw May 8 00:01:55.216349 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 8 00:01:55.216359 kernel: Console: switching to colour frame buffer device 100x37 May 8 00:01:55.216366 kernel: fb0: EFI VGA frame buffer device May 8 00:01:55.216374 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 May 8 00:01:55.216382 kernel: hid: raw HID events driver (C) Jiri Kosina May 8 00:01:55.216390 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 8 00:01:55.216397 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 8 00:01:55.216405 kernel: watchdog: Hard watchdog permanently disabled May 8 00:01:55.216413 kernel: NET: Registered PF_INET6 protocol family May 8 00:01:55.216421 kernel: Segment Routing with IPv6 May 8 00:01:55.216430 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:01:55.216438 kernel: NET: Registered PF_PACKET protocol family May 8 00:01:55.216446 kernel: Key type dns_resolver registered May 8 00:01:55.216453 kernel: registered taskstats version 1 May 8 00:01:55.216462 kernel: Loading compiled-in X.509 certificates May 8 00:01:55.216470 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: f45666b1b2057b901dda15e57012558a26abdeb0' May 8 00:01:55.216478 kernel: Key type .fscrypt registered May 8 00:01:55.216486 kernel: Key type fscrypt-provisioning registered May 8 00:01:55.216493 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 8 00:01:55.216502 kernel: ima: Allocated hash algorithm: sha1
May 8 00:01:55.216510 kernel: ima: No architecture policies found
May 8 00:01:55.216518 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 8 00:01:55.216584 kernel: pcieport 000d:00:01.0: Adding to iommu group 0
May 8 00:01:55.216648 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91
May 8 00:01:55.216713 kernel: pcieport 000d:00:02.0: Adding to iommu group 1
May 8 00:01:55.216776 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91
May 8 00:01:55.216843 kernel: pcieport 000d:00:03.0: Adding to iommu group 2
May 8 00:01:55.216907 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91
May 8 00:01:55.216973 kernel: pcieport 000d:00:04.0: Adding to iommu group 3
May 8 00:01:55.217036 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91
May 8 00:01:55.217099 kernel: pcieport 0000:00:01.0: Adding to iommu group 4
May 8 00:01:55.217161 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92
May 8 00:01:55.217224 kernel: pcieport 0000:00:02.0: Adding to iommu group 5
May 8 00:01:55.217287 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92
May 8 00:01:55.217350 kernel: pcieport 0000:00:03.0: Adding to iommu group 6
May 8 00:01:55.217412 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92
May 8 00:01:55.217479 kernel: pcieport 0000:00:04.0: Adding to iommu group 7
May 8 00:01:55.217541 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92
May 8 00:01:55.217605 kernel: pcieport 0005:00:01.0: Adding to iommu group 8
May 8 00:01:55.217667 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93
May 8 00:01:55.217731 kernel: pcieport 0005:00:03.0: Adding to iommu group 9
May 8 00:01:55.217794 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93
May 8 00:01:55.217860 kernel: pcieport 0005:00:05.0: Adding to iommu group 10
May 8 00:01:55.217923 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93
May 8 00:01:55.217989 kernel: pcieport 0005:00:07.0: Adding to iommu group 11
May 8 00:01:55.218052 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93
May 8 00:01:55.218115 kernel: pcieport 0003:00:01.0: Adding to iommu group 12
May 8 00:01:55.218178 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94
May 8 00:01:55.218241 kernel: pcieport 0003:00:03.0: Adding to iommu group 13
May 8 00:01:55.218303 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94
May 8 00:01:55.218368 kernel: pcieport 0003:00:05.0: Adding to iommu group 14
May 8 00:01:55.218430 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94
May 8 00:01:55.218494 kernel: pcieport 000c:00:01.0: Adding to iommu group 15
May 8 00:01:55.218559 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95
May 8 00:01:55.218623 kernel: pcieport 000c:00:02.0: Adding to iommu group 16
May 8 00:01:55.218685 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95
May 8 00:01:55.218749 kernel: pcieport 000c:00:03.0: Adding to iommu group 17
May 8 00:01:55.218893 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95
May 8 00:01:55.218965 kernel: pcieport 000c:00:04.0: Adding to iommu group 18
May 8 00:01:55.219028 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95
May 8 00:01:55.219092 kernel: pcieport 0002:00:01.0: Adding to iommu group 19
May 8 00:01:55.219158 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96
May 8 00:01:55.219222 kernel: pcieport 0002:00:03.0: Adding to iommu group 20
May 8 00:01:55.219284 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96
May 8 00:01:55.219347 kernel: pcieport 0002:00:05.0: Adding to iommu group 21
May 8 00:01:55.219409 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96
May 8 00:01:55.219471 kernel: pcieport 0002:00:07.0: Adding to iommu group 22
May 8 00:01:55.219533 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96
May 8 00:01:55.219597 kernel: pcieport 0001:00:01.0: Adding to iommu group 23
May 8 00:01:55.219661 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97
May 8 00:01:55.219726 kernel: pcieport 0001:00:02.0: Adding to iommu group 24
May 8 00:01:55.219787 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97
May 8 00:01:55.219854 kernel: pcieport 0001:00:03.0: Adding to iommu group 25
May 8 00:01:55.219917 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97
May 8 00:01:55.219979 kernel: pcieport 0001:00:04.0: Adding to iommu group 26
May 8 00:01:55.220045 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97
May 8 00:01:55.220108 kernel: pcieport 0004:00:01.0: Adding to iommu group 27
May 8 00:01:55.220173 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98
May 8 00:01:55.220237 kernel: pcieport 0004:00:03.0: Adding to iommu group 28
May 8 00:01:55.220300 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98
May 8 00:01:55.220363 kernel: pcieport 0004:00:05.0: Adding to iommu group 29
May 8 00:01:55.220425 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98
May 8 00:01:55.220492 kernel: pcieport 0004:01:00.0: Adding to iommu group 30
May 8 00:01:55.220503 kernel: clk: Disabling unused clocks
May 8 00:01:55.220511 kernel: Freeing unused kernel memory: 38336K
May 8 00:01:55.220522 kernel: Run /init as init process
May 8 00:01:55.220529 kernel: with arguments:
May 8 00:01:55.220537 kernel: /init
May 8 00:01:55.220545 kernel: with environment:
May 8 00:01:55.220552 kernel: HOME=/
May 8 00:01:55.220560 kernel: TERM=linux
May 8 00:01:55.220567 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:01:55.220576 systemd[1]: Successfully made /usr/ read-only.
May 8 00:01:55.220587 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 00:01:55.220597 systemd[1]: Detected architecture arm64.
May 8 00:01:55.220605 systemd[1]: Running in initrd.
May 8 00:01:55.220613 systemd[1]: No hostname configured, using default hostname.
May 8 00:01:55.220621 systemd[1]: Hostname set to .
May 8 00:01:55.220629 systemd[1]: Initializing machine ID from random generator.
May 8 00:01:55.220637 systemd[1]: Queued start job for default target initrd.target.
May 8 00:01:55.220645 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:01:55.220655 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:01:55.220664 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:01:55.220672 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:01:55.220681 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:01:55.220689 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:01:55.220699 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:01:55.220707 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:01:55.220717 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:01:55.220725 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:01:55.220733 systemd[1]: Reached target paths.target - Path Units.
May 8 00:01:55.220741 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:01:55.220749 systemd[1]: Reached target swap.target - Swaps.
May 8 00:01:55.220757 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:01:55.220766 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:01:55.220774 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:01:55.220783 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:01:55.220791 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 8 00:01:55.220800 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:01:55.220811 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:01:55.220819 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:01:55.220827 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:01:55.220835 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:01:55.220843 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:01:55.220851 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:01:55.220861 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:01:55.220869 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:01:55.220877 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:01:55.220908 systemd-journald[899]: Collecting audit messages is disabled.
May 8 00:01:55.220930 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:01:55.220938 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:01:55.220946 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:01:55.220954 kernel: Bridge firewalling registered
May 8 00:01:55.220963 systemd-journald[899]: Journal started
May 8 00:01:55.220981 systemd-journald[899]: Runtime Journal (/run/log/journal/7f14eccb6247412e8b07eafe8e9b81af) is 8M, max 4G, 3.9G free.
May 8 00:01:55.180328 systemd-modules-load[903]: Inserted module 'overlay'
May 8 00:01:55.260506 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:01:55.203014 systemd-modules-load[903]: Inserted module 'br_netfilter'
May 8 00:01:55.266115 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:01:55.276912 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:01:55.287829 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:01:55.298697 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:01:55.325969 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:01:55.332513 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:01:55.367943 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:01:55.374167 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:01:55.390838 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:01:55.406955 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:01:55.423652 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:01:55.435010 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:01:55.461899 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:01:55.470750 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:01:55.482563 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:01:55.507807 dracut-cmdline[944]: dracut-dracut-053
May 8 00:01:55.507807 dracut-cmdline[944]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=82f9441f083668f7b43f8fe99c3dc9ee441b8a3ef2f63ecd1e548de4dde5b207
May 8 00:01:55.495854 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:01:55.509956 systemd-resolved[946]: Positive Trust Anchors:
May 8 00:01:55.509965 systemd-resolved[946]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:01:55.509996 systemd-resolved[946]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:01:55.524275 systemd-resolved[946]: Defaulting to hostname 'linux'.
May 8 00:01:55.525699 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:01:55.559619 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:01:55.663002 kernel: SCSI subsystem initialized
May 8 00:01:55.674814 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:01:55.692814 kernel: iscsi: registered transport (tcp)
May 8 00:01:55.720196 kernel: iscsi: registered transport (qla4xxx)
May 8 00:01:55.720217 kernel: QLogic iSCSI HBA Driver
May 8 00:01:55.763158 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:01:55.781925 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:01:55.826729 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:01:55.826760 kernel: device-mapper: uevent: version 1.0.3
May 8 00:01:55.836383 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:01:55.901816 kernel: raid6: neonx8 gen() 15851 MB/s
May 8 00:01:55.926814 kernel: raid6: neonx4 gen() 15868 MB/s
May 8 00:01:55.951814 kernel: raid6: neonx2 gen() 13271 MB/s
May 8 00:01:55.976815 kernel: raid6: neonx1 gen() 10464 MB/s
May 8 00:01:56.001815 kernel: raid6: int64x8 gen() 6811 MB/s
May 8 00:01:56.026814 kernel: raid6: int64x4 gen() 7379 MB/s
May 8 00:01:56.051815 kernel: raid6: int64x2 gen() 6133 MB/s
May 8 00:01:56.079854 kernel: raid6: int64x1 gen() 5077 MB/s
May 8 00:01:56.079875 kernel: raid6: using algorithm neonx4 gen() 15868 MB/s
May 8 00:01:56.114283 kernel: raid6: .... xor() 12459 MB/s, rmw enabled
May 8 00:01:56.114304 kernel: raid6: using neon recovery algorithm
May 8 00:01:56.137365 kernel: xor: measuring software checksum speed
May 8 00:01:56.137388 kernel: 8regs : 21613 MB/sec
May 8 00:01:56.153097 kernel: 32regs : 21260 MB/sec
May 8 00:01:56.153118 kernel: arm64_neon : 28022 MB/sec
May 8 00:01:56.160759 kernel: xor: using function: arm64_neon (28022 MB/sec)
May 8 00:01:56.220814 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:01:56.230724 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:01:56.255925 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:01:56.270858 systemd-udevd[1144]: Using default interface naming scheme 'v255'.
May 8 00:01:56.274407 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:01:56.300910 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:01:56.314978 dracut-pre-trigger[1155]: rd.md=0: removing MD RAID activation
May 8 00:01:56.340691 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:01:56.359950 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:01:56.460950 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:01:56.490505 kernel: pps_core: LinuxPPS API ver. 1 registered
May 8 00:01:56.490526 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 8 00:01:56.513408 kernel: ACPI: bus type USB registered
May 8 00:01:56.513435 kernel: usbcore: registered new interface driver usbfs
May 8 00:01:56.523735 kernel: usbcore: registered new interface driver hub
May 8 00:01:56.523758 kernel: PTP clock support registered
May 8 00:01:56.523776 kernel: usbcore: registered new device driver usb
May 8 00:01:56.548947 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:01:56.558132 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:01:56.721441 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
May 8 00:01:56.721457 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
May 8 00:01:56.721466 kernel: igb 0003:03:00.0: Adding to iommu group 31
May 8 00:01:56.770059 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 32
May 8 00:01:57.058969 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
May 8 00:01:57.059094 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1
May 8 00:01:57.059174 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault
May 8 00:01:57.059255 kernel: nvme 0005:03:00.0: Adding to iommu group 33
May 8 00:01:57.254875 kernel: igb 0003:03:00.0: added PHC on eth0
May 8 00:01:57.255040 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 34
May 8 00:01:57.710194 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection
May 8 00:01:57.710350 kernel: nvme 0005:04:00.0: Adding to iommu group 35
May 8 00:01:57.710432 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6f:94
May 8 00:01:57.710506 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000
May 8 00:01:57.710583 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
May 8 00:01:57.710659 kernel: igb 0003:03:00.1: Adding to iommu group 36
May 8 00:01:57.710739 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010
May 8 00:01:57.710832 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
May 8 00:01:57.710912 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2
May 8 00:01:57.710988 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed
May 8 00:01:57.711063 kernel: nvme nvme0: pci function 0005:03:00.0
May 8 00:01:57.711153 kernel: hub 1-0:1.0: USB hub found
May 8 00:01:57.711255 kernel: hub 1-0:1.0: 4 ports detected
May 8 00:01:57.711339 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 8 00:01:57.711465 kernel: nvme nvme0: Shutdown timeout set to 8 seconds
May 8 00:01:57.711540 kernel: hub 2-0:1.0: USB hub found
May 8 00:01:57.711632 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014
May 8 00:01:57.711712 kernel: hub 2-0:1.0: 4 ports detected
May 8 00:01:57.711799 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 8 00:01:57.711887 kernel: nvme nvme1: pci function 0005:04:00.0
May 8 00:01:57.711969 kernel: nvme nvme1: Shutdown timeout set to 8 seconds
May 8 00:01:57.712041 kernel: nvme nvme0: 32/0/0 default/read/poll queues
May 8 00:01:57.712111 kernel: igb 0003:03:00.1: added PHC on eth1
May 8 00:01:57.712191 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection
May 8 00:01:57.712268 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:01:57.712281 kernel: nvme nvme1: 32/0/0 default/read/poll queues
May 8 00:01:57.712352 kernel: GPT:9289727 != 1875385007
May 8 00:01:57.712361 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:01:57.712371 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6f:95
May 8 00:01:57.712447 kernel: GPT:9289727 != 1875385007
May 8 00:01:57.712457 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:01:57.712466 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 8 00:01:57.712476 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000
May 8 00:01:57.712552 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
May 8 00:01:57.712630 kernel: igb 0003:03:00.0 eno1: renamed from eth0
May 8 00:01:57.712708 kernel: BTRFS: device fsid a4d66dad-2d34-4ed0-87a7-f6519531b08f devid 1 transid 42 /dev/nvme0n1p3 scanned by (udev-worker) (1226)
May 8 00:01:57.712718 kernel: igb 0003:03:00.1 eno2: renamed from eth1
May 8 00:01:57.712792 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by (udev-worker) (1234)
May 8 00:01:57.712802 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged
May 8 00:01:57.712883 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd
May 8 00:01:57.713009 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 8 00:01:57.713020 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 8 00:01:57.713029 kernel: hub 1-3:1.0: USB hub found
May 8 00:01:57.713129 kernel: hub 1-3:1.0: 4 ports detected
May 8 00:01:57.713215 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd
May 8 00:01:57.713337 kernel: hub 2-3:1.0: USB hub found
May 8 00:01:57.713434 kernel: hub 2-3:1.0: 4 ports detected
May 8 00:01:57.713520 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 8 00:01:57.713598 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37
May 8 00:01:58.393177 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014
May 8 00:01:58.393348 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 8 00:01:58.393434 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged
May 8 00:01:58.393509 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 8 00:01:56.558291 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:01:58.425260 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1
May 8 00:01:58.425387 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0
May 8 00:01:56.715567 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:01:56.727505 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:01:56.727678 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:01:56.734989 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:01:58.462527 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 8 00:01:56.752112 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:01:58.468100 disk-uuid[1316]: Primary Header is updated.
May 8 00:01:58.468100 disk-uuid[1316]: Secondary Entries is updated.
May 8 00:01:58.468100 disk-uuid[1316]: Secondary Header is updated.
May 8 00:01:56.758585 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:01:58.489128 disk-uuid[1317]: The operation has completed successfully.
May 8 00:01:56.767041 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:01:56.808723 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:01:56.814089 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:01:56.819323 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:01:56.832939 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:01:56.838951 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:01:56.852386 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:01:57.088092 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:01:57.301588 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM.
May 8 00:01:57.349363 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT.
May 8 00:01:57.366172 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
May 8 00:01:57.395412 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
May 8 00:01:57.408795 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
May 8 00:01:58.629018 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 8 00:01:57.426970 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:01:58.545359 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:01:58.646171 sh[1481]: Success
May 8 00:01:58.545441 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:01:58.590963 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:01:58.660054 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:01:58.668935 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:01:58.693891 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:01:58.734125 kernel: BTRFS info (device dm-0): first mount of filesystem a4d66dad-2d34-4ed0-87a7-f6519531b08f
May 8 00:01:58.734155 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 8 00:01:58.753379 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:01:58.768362 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:01:58.780656 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:01:58.801816 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 8 00:01:58.802656 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:01:58.814469 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:01:58.824957 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:01:58.832566 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:01:58.947670 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb
May 8 00:01:58.947691 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:01:58.947701 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 8 00:01:58.947711 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 8 00:01:58.947721 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 8 00:01:58.947730 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb
May 8 00:01:58.940233 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:01:58.953156 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:01:58.980008 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:01:58.992522 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:01:59.022687 systemd-networkd[1669]: lo: Link UP
May 8 00:01:59.022692 systemd-networkd[1669]: lo: Gained carrier
May 8 00:01:59.026703 systemd-networkd[1669]: Enumeration completed
May 8 00:01:59.026957 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:01:59.027987 systemd-networkd[1669]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:01:59.033951 systemd[1]: Reached target network.target - Network.
May 8 00:01:59.071614 ignition[1664]: Ignition 2.20.0
May 8 00:01:59.071622 ignition[1664]: Stage: fetch-offline
May 8 00:01:59.071663 ignition[1664]: no configs at "/usr/lib/ignition/base.d"
May 8 00:01:59.079153 systemd-networkd[1669]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:01:59.071671 ignition[1664]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 8 00:01:59.082473 unknown[1664]: fetched base config from "system"
May 8 00:01:59.071842 ignition[1664]: parsed url from cmdline: ""
May 8 00:01:59.082479 unknown[1664]: fetched user config from "system"
May 8 00:01:59.071845 ignition[1664]: no config URL provided
May 8 00:01:59.084984 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:01:59.071849 ignition[1664]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:01:59.093122 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:01:59.071899 ignition[1664]: parsing config with SHA512: bdc898cdde9202255d15ea0d347269773ea5fc5e1bdd08e4b318918814a7c26b26651c5a3c0503ba605a792f52aa3821d47bc650bcba4e46beecc1d70b42fb1d
May 8 00:01:59.105982 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:01:59.082951 ignition[1664]: fetch-offline: fetch-offline passed
May 8 00:01:59.130672 systemd-networkd[1669]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:01:59.082955 ignition[1664]: POST message to Packet Timeline
May 8 00:01:59.082960 ignition[1664]: POST Status error: resource requires networking
May 8 00:01:59.083025 ignition[1664]: Ignition finished successfully
May 8 00:01:59.121252 ignition[1702]: Ignition 2.20.0
May 8 00:01:59.121258 ignition[1702]: Stage: kargs
May 8 00:01:59.121396 ignition[1702]: no configs at "/usr/lib/ignition/base.d"
May 8 00:01:59.121405 ignition[1702]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 8 00:01:59.122765 ignition[1702]: kargs: kargs passed
May 8 00:01:59.122769 ignition[1702]: POST message to Packet Timeline
May 8 00:01:59.123014 ignition[1702]: GET https://metadata.packet.net/metadata: attempt #1
May 8 00:01:59.125573 ignition[1702]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49856->[::1]:53: read: connection refused
May 8 00:01:59.326331 ignition[1702]: GET https://metadata.packet.net/metadata: attempt #2
May 8 00:01:59.327612 ignition[1702]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59579->[::1]:53: read: connection refused
May 8 00:01:59.712813 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
May 8 00:01:59.715585 systemd-networkd[1669]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:01:59.728035 ignition[1702]: GET https://metadata.packet.net/metadata: attempt #3
May 8 00:01:59.729071 ignition[1702]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37047->[::1]:53: read: connection refused
May 8 00:02:00.312818 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
May 8 00:02:00.316242 systemd-networkd[1669]: eno1: Link UP
May 8 00:02:00.316372 systemd-networkd[1669]: eno2: Link UP
May 8 00:02:00.316493 systemd-networkd[1669]: enP1p1s0f0np0: Link UP
May 8 00:02:00.316631 systemd-networkd[1669]: enP1p1s0f0np0: Gained carrier
May 8 00:02:00.326959 systemd-networkd[1669]: enP1p1s0f1np1: Link UP
May 8 00:02:00.362833 systemd-networkd[1669]: enP1p1s0f0np0: DHCPv4 address 145.40.69.49/31, gateway 145.40.69.48 acquired from 147.28.144.140
May 8 00:02:00.529941 ignition[1702]: GET https://metadata.packet.net/metadata: attempt #4
May 8 00:02:00.530696 ignition[1702]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53737->[::1]:53: read: connection refused
May 8 00:02:00.716341 systemd-networkd[1669]: enP1p1s0f1np1: Gained carrier
May 8 00:02:01.787993 systemd-networkd[1669]: enP1p1s0f1np1: Gained IPv6LL
May 8 00:02:01.979955 systemd-networkd[1669]: enP1p1s0f0np0: Gained IPv6LL
May 8 00:02:02.131660 ignition[1702]: GET https://metadata.packet.net/metadata: attempt #5
May 8 00:02:02.132389 ignition[1702]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41555->[::1]:53: read: connection refused
May 8 00:02:05.334783 ignition[1702]: GET https://metadata.packet.net/metadata: attempt #6
May 8 00:02:06.603612 ignition[1702]: GET result: OK
May 8 00:02:06.982782 ignition[1702]: Ignition finished successfully
May 8 00:02:06.985932 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:02:07.001932 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:02:07.013749 ignition[1724]: Ignition 2.20.0
May 8 00:02:07.013756 ignition[1724]: Stage: disks
May 8 00:02:07.013914 ignition[1724]: no configs at "/usr/lib/ignition/base.d"
May 8 00:02:07.013924 ignition[1724]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 8 00:02:07.014746 ignition[1724]: disks: disks passed
May 8 00:02:07.014750 ignition[1724]: POST message to Packet Timeline
May 8 00:02:07.014767 ignition[1724]: GET https://metadata.packet.net/metadata: attempt #1
May 8 00:02:07.591501 ignition[1724]: GET result: OK
May 8 00:02:07.945092 ignition[1724]: Ignition finished successfully
May 8 00:02:07.948171 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 00:02:07.953835 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:02:07.961485 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:02:07.969598 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:02:07.978226 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:02:07.987249 systemd[1]: Reached target basic.target - Basic System.
May 8 00:02:08.007950 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:02:08.023728 systemd-fsck[1741]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 00:02:08.026929 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:02:08.046862 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:02:08.115812 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f291ddc8-664e-45dc-bbf9-8344dca1a297 r/w with ordered data mode. Quota mode: none.
May 8 00:02:08.116290 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:02:08.126770 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:02:08.151886 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:02:08.245288 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/nvme0n1p6 scanned by mount (1753)
May 8 00:02:08.245307 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb
May 8 00:02:08.245318 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:02:08.245328 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 8 00:02:08.245337 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 8 00:02:08.245347 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 8 00:02:08.158221 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:02:08.251772 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 8 00:02:08.262971 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
May 8 00:02:08.278693 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:02:08.278737 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:02:08.311437 coreos-metadata[1776]: May 08 00:02:08.308 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 8 00:02:08.328206 coreos-metadata[1773]: May 08 00:02:08.308 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 8 00:02:08.292440 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:02:08.305967 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:02:08.327997 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:02:08.361654 initrd-setup-root[1798]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:02:08.367740 initrd-setup-root[1805]: cut: /sysroot/etc/group: No such file or directory
May 8 00:02:08.374206 initrd-setup-root[1813]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:02:08.380459 initrd-setup-root[1820]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:02:08.449892 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:02:08.468867 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:02:08.477812 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb
May 8 00:02:08.500246 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:02:08.507314 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:02:08.526677 ignition[1893]: INFO : Ignition 2.20.0
May 8 00:02:08.532198 ignition[1893]: INFO : Stage: mount
May 8 00:02:08.532198 ignition[1893]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:02:08.532198 ignition[1893]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 8 00:02:08.532198 ignition[1893]: INFO : mount: mount passed
May 8 00:02:08.532198 ignition[1893]: INFO : POST message to Packet Timeline
May 8 00:02:08.532198 ignition[1893]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 8 00:02:08.527317 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:02:09.046574 coreos-metadata[1776]: May 08 00:02:09.046 INFO Fetch successful
May 8 00:02:09.086320 ignition[1893]: INFO : GET result: OK
May 8 00:02:09.095461 systemd[1]: flatcar-static-network.service: Deactivated successfully.
May 8 00:02:09.095555 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
May 8 00:02:09.109763 coreos-metadata[1773]: May 08 00:02:09.109 INFO Fetch successful
May 8 00:02:09.155512 coreos-metadata[1773]: May 08 00:02:09.155 INFO wrote hostname ci-4230.1.1-n-1f162da554 to /sysroot/etc/hostname
May 8 00:02:09.159914 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 8 00:02:09.496535 ignition[1893]: INFO : Ignition finished successfully
May 8 00:02:09.500891 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:02:09.519913 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:02:09.532492 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:02:09.568746 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/nvme0n1p6 scanned by mount (1921)
May 8 00:02:09.568781 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb
May 8 00:02:09.583221 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:02:09.596328 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 8 00:02:09.614815 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 8 00:02:09.614836 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 8 00:02:09.627603 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:02:09.658424 ignition[1939]: INFO : Ignition 2.20.0
May 8 00:02:09.658424 ignition[1939]: INFO : Stage: files
May 8 00:02:09.668318 ignition[1939]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:02:09.668318 ignition[1939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 8 00:02:09.668318 ignition[1939]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:02:09.668318 ignition[1939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:02:09.668318 ignition[1939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:02:09.668318 ignition[1939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:02:09.668318 ignition[1939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:02:09.668318 ignition[1939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:02:09.668318 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:02:09.668318 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 8 00:02:09.664295 unknown[1939]: wrote ssh authorized keys file for user: core
May 8 00:02:09.761780 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 00:02:09.826671 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:02:09.837216 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:02:09.837216 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:02:09.837216 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:02:09.837216 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:02:09.837216 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:02:09.837216 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:02:09.837216 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:02:09.837216 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:02:09.837216 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:02:09.837216 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:02:09.837216 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:02:09.837216 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:02:09.837216 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:02:09.837216 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 8 00:02:10.078954 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 8 00:02:10.386751 ignition[1939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:02:10.386751 ignition[1939]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 8 00:02:10.411416 ignition[1939]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:02:10.411416 ignition[1939]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:02:10.411416 ignition[1939]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 8 00:02:10.411416 ignition[1939]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:02:10.411416 ignition[1939]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:02:10.411416 ignition[1939]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:02:10.411416 ignition[1939]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:02:10.411416 ignition[1939]: INFO : files: files passed
May 8 00:02:10.411416 ignition[1939]: INFO : POST message to Packet Timeline
May 8 00:02:10.411416 ignition[1939]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 8 00:02:10.896664 ignition[1939]: INFO : GET result: OK
May 8 00:02:11.210922 ignition[1939]: INFO : Ignition finished successfully
May 8 00:02:11.214144 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:02:11.236961 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:02:11.249467 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:02:11.267999 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:02:11.268199 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:02:11.286128 initrd-setup-root-after-ignition[1977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:02:11.286128 initrd-setup-root-after-ignition[1977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:02:11.280631 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:02:11.338189 initrd-setup-root-after-ignition[1981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:02:11.293455 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 00:02:11.317993 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 00:02:11.352268 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:02:11.352340 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 00:02:11.362253 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 00:02:11.378324 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 00:02:11.389662 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 00:02:11.399001 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 00:02:11.422563 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:02:11.449914 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 00:02:11.464796 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 00:02:11.473655 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:02:11.485080 systemd[1]: Stopped target timers.target - Timer Units.
May 8 00:02:11.496525 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:02:11.496638 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:02:11.508198 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 00:02:11.519404 systemd[1]: Stopped target basic.target - Basic System.
May 8 00:02:11.530822 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 00:02:11.542127 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:02:11.553276 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 00:02:11.564416 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 00:02:11.575625 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:02:11.586775 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 00:02:11.598074 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 00:02:11.614737 systemd[1]: Stopped target swap.target - Swaps.
May 8 00:02:11.626007 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:02:11.626143 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:02:11.637499 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 00:02:11.648603 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:02:11.659864 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 00:02:11.663834 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:02:11.671196 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:02:11.671309 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 00:02:11.682626 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:02:11.682715 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:02:11.693937 systemd[1]: Stopped target paths.target - Path Units.
May 8 00:02:11.705118 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:02:11.708827 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:02:11.722308 systemd[1]: Stopped target slices.target - Slice Units.
May 8 00:02:11.733882 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 00:02:11.745396 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:02:11.745474 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:02:11.844884 ignition[2002]: INFO : Ignition 2.20.0
May 8 00:02:11.844884 ignition[2002]: INFO : Stage: umount
May 8 00:02:11.844884 ignition[2002]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:02:11.844884 ignition[2002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 8 00:02:11.844884 ignition[2002]: INFO : umount: umount passed
May 8 00:02:11.844884 ignition[2002]: INFO : POST message to Packet Timeline
May 8 00:02:11.844884 ignition[2002]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 8 00:02:11.757049 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:02:11.757110 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:02:11.768859 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:02:11.768951 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:02:11.780503 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:02:11.780585 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 00:02:11.792175 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 8 00:02:11.792258 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 8 00:02:11.819936 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 00:02:11.827369 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:02:11.827468 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:02:11.839839 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 00:02:11.850854 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:02:11.850982 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:02:11.862416 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:02:11.862502 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:02:11.876350 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:02:11.879086 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:02:11.879170 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 00:02:11.909508 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:02:11.909702 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 00:02:12.407506 ignition[2002]: INFO : GET result: OK
May 8 00:02:12.946753 ignition[2002]: INFO : Ignition finished successfully
May 8 00:02:12.949249 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:02:12.949466 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 00:02:12.957306 systemd[1]: Stopped target network.target - Network.
May 8 00:02:12.966544 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:02:12.966606 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 00:02:12.976152 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:02:12.976219 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 00:02:12.985715 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:02:12.985794 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 00:02:12.995615 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 00:02:12.995648 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 00:02:13.005453 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:02:13.005509 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 00:02:13.015418 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 00:02:13.025151 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 00:02:13.035113 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:02:13.035201 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 00:02:13.049207 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 8 00:02:13.049555 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:02:13.049677 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 00:02:13.056100 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 8 00:02:13.058047 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:02:13.058192 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:02:13.077926 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 00:02:13.085001 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:02:13.085051 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:02:13.095207 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:02:13.095245 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:02:13.105276 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:02:13.105325 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 00:02:13.115355 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 00:02:13.115389 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:02:13.131031 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:02:13.142932 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 8 00:02:13.143028 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 8 00:02:13.154057 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:02:13.154177 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:02:13.165562 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:02:13.165734 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 00:02:13.175117 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:02:13.175176 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:02:13.186181 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:02:13.186240 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:02:13.202779 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:02:13.202823 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:02:13.219538 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:02:13.219609 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:02:13.248916 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:02:13.259211 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:02:13.259256 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:02:13.276217 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:02:13.276270 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:02:13.295322 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 8 00:02:13.295426 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 8 00:02:13.295756 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:02:13.295831 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:02:13.807596 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:02:13.807771 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:02:13.819319 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:02:13.839967 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:02:13.849716 systemd[1]: Switching root. May 8 00:02:13.904913 systemd-journald[899]: Journal stopped May 8 00:02:15.982821 systemd-journald[899]: Received SIGTERM from PID 1 (systemd). 
May 8 00:02:15.982848 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:02:15.982859 kernel: SELinux: policy capability open_perms=1 May 8 00:02:15.982867 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:02:15.982874 kernel: SELinux: policy capability always_check_network=0 May 8 00:02:15.982881 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:02:15.982890 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:02:15.982899 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:02:15.982909 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:02:15.982916 kernel: audit: type=1403 audit(1746662534.075:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:02:15.982925 systemd[1]: Successfully loaded SELinux policy in 115.336ms. May 8 00:02:15.982934 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.689ms. May 8 00:02:15.982944 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 8 00:02:15.982952 systemd[1]: Detected architecture arm64. May 8 00:02:15.982963 systemd[1]: Detected first boot. May 8 00:02:15.982972 systemd[1]: Hostname set to &lt;ci-4230.1.1-n-1f162da554&gt;. May 8 00:02:15.982981 systemd[1]: Initializing machine ID from random generator. May 8 00:02:15.982989 zram_generator::config[2081]: No configuration found. May 8 00:02:15.983000 systemd[1]: Populated /etc with preset unit settings. May 8 00:02:15.983009 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 8 00:02:15.983017 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:02:15.983026 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
May 8 00:02:15.983034 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:02:15.983043 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:02:15.983052 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:02:15.983062 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:02:15.983071 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:02:15.983080 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:02:15.983089 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:02:15.983098 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:02:15.983106 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:02:15.983115 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:02:15.983124 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:02:15.983134 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:02:15.983143 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:02:15.983152 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:02:15.983161 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:02:15.983169 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 8 00:02:15.983178 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:02:15.983187 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
May 8 00:02:15.983198 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 00:02:15.983207 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 00:02:15.983217 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:02:15.983226 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:02:15.983235 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:02:15.983244 systemd[1]: Reached target slices.target - Slice Units. May 8 00:02:15.983253 systemd[1]: Reached target swap.target - Swaps. May 8 00:02:15.983262 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:02:15.983271 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:02:15.983281 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 8 00:02:15.983292 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:02:15.983301 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:02:15.983310 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:02:15.983319 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:02:15.983330 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:02:15.983339 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:02:15.983348 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:02:15.983357 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:02:15.983366 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:02:15.983375 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
May 8 00:02:15.983384 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:02:15.983393 systemd[1]: Reached target machines.target - Containers. May 8 00:02:15.983404 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:02:15.983413 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:02:15.983422 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:02:15.983432 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 00:02:15.983441 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:02:15.983450 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:02:15.983458 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:02:15.983467 kernel: ACPI: bus type drm_connector registered May 8 00:02:15.983475 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:02:15.983486 kernel: fuse: init (API version 7.39) May 8 00:02:15.983494 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:02:15.983504 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:02:15.983512 kernel: loop: module loaded May 8 00:02:15.983521 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:02:15.983530 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 00:02:15.983539 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:02:15.983548 systemd[1]: Stopped systemd-fsck-usr.service. 
May 8 00:02:15.983559 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:02:15.983568 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:02:15.983577 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:02:15.983603 systemd-journald[2191]: Collecting audit messages is disabled. May 8 00:02:15.983624 systemd-journald[2191]: Journal started May 8 00:02:15.983643 systemd-journald[2191]: Runtime Journal (/run/log/journal/2904af25daa34b319b1a77367444fca5) is 8M, max 4G, 3.9G free. May 8 00:02:14.630105 systemd[1]: Queued start job for default target multi-user.target. May 8 00:02:14.643113 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 8 00:02:14.643449 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:02:14.643748 systemd[1]: systemd-journald.service: Consumed 3.319s CPU time. May 8 00:02:16.007818 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:02:16.034819 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:02:16.062821 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 8 00:02:16.083817 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:02:16.107070 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:02:16.107103 systemd[1]: Stopped verity-setup.service. May 8 00:02:16.132822 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:02:16.138283 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:02:16.143942 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
May 8 00:02:16.149499 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:02:16.154949 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:02:16.160364 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:02:16.165690 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:02:16.171188 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:02:16.177836 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:02:16.183438 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:02:16.183593 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:02:16.189067 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:02:16.190837 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:02:16.196263 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:02:16.196423 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:02:16.202895 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:02:16.203050 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:02:16.208321 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:02:16.208472 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 00:02:16.213590 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:02:16.213737 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:02:16.219071 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:02:16.224867 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:02:16.230131 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
May 8 00:02:16.235348 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 8 00:02:16.241833 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:02:16.257575 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:02:16.276912 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:02:16.283166 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:02:16.288122 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:02:16.288211 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:02:16.294144 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 8 00:02:16.299784 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:02:16.305812 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:02:16.310763 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:02:16.312288 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:02:16.317855 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:02:16.322547 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:02:16.323619 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:02:16.328305 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 8 00:02:16.328892 systemd-journald[2191]: Time spent on flushing to /var/log/journal/2904af25daa34b319b1a77367444fca5 is 25.626ms for 2357 entries. May 8 00:02:16.328892 systemd-journald[2191]: System Journal (/var/log/journal/2904af25daa34b319b1a77367444fca5) is 8M, max 195.6M, 187.6M free. May 8 00:02:16.374287 systemd-journald[2191]: Received client request to flush runtime journal. May 8 00:02:16.374370 kernel: loop0: detected capacity change from 0 to 113512 May 8 00:02:16.374450 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:02:16.329386 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:02:16.347012 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:02:16.352729 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:02:16.358400 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:02:16.375381 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:02:16.388909 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:02:16.394130 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:02:16.398851 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:02:16.403569 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:02:16.408424 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:02:16.413702 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:02:16.423977 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:02:16.442229 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
May 8 00:02:16.452812 kernel: loop1: detected capacity change from 0 to 123192 May 8 00:02:16.457724 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:02:16.463427 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:02:16.464895 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 8 00:02:16.471606 udevadm[2233]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 8 00:02:16.479529 systemd-tmpfiles[2251]: ACLs are not supported, ignoring. May 8 00:02:16.479541 systemd-tmpfiles[2251]: ACLs are not supported, ignoring. May 8 00:02:16.483411 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:02:16.515822 kernel: loop2: detected capacity change from 0 to 194096 May 8 00:02:16.568817 kernel: loop3: detected capacity change from 0 to 8 May 8 00:02:16.593331 ldconfig[2223]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:02:16.596843 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:02:16.618815 kernel: loop4: detected capacity change from 0 to 113512 May 8 00:02:16.634814 kernel: loop5: detected capacity change from 0 to 123192 May 8 00:02:16.650814 kernel: loop6: detected capacity change from 0 to 194096 May 8 00:02:16.667818 kernel: loop7: detected capacity change from 0 to 8 May 8 00:02:16.668133 (sd-merge)[2261]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. May 8 00:02:16.668588 (sd-merge)[2261]: Merged extensions into '/usr'. May 8 00:02:16.671543 systemd[1]: Reload requested from client PID 2230 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:02:16.671554 systemd[1]: Reloading... May 8 00:02:16.717811 zram_generator::config[2294]: No configuration found. 
May 8 00:02:16.810024 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:02:16.870938 systemd[1]: Reloading finished in 199 ms. May 8 00:02:16.888273 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:02:16.893056 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:02:16.914245 systemd[1]: Starting ensure-sysext.service... May 8 00:02:16.919912 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:02:16.926413 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:02:16.937127 systemd[1]: Reload requested from client PID 2345 ('systemctl') (unit ensure-sysext.service)... May 8 00:02:16.937138 systemd[1]: Reloading... May 8 00:02:16.939084 systemd-tmpfiles[2346]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:02:16.939311 systemd-tmpfiles[2346]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:02:16.939924 systemd-tmpfiles[2346]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:02:16.940125 systemd-tmpfiles[2346]: ACLs are not supported, ignoring. May 8 00:02:16.940171 systemd-tmpfiles[2346]: ACLs are not supported, ignoring. May 8 00:02:16.942787 systemd-tmpfiles[2346]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:02:16.942794 systemd-tmpfiles[2346]: Skipping /boot May 8 00:02:16.951273 systemd-tmpfiles[2346]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:02:16.951280 systemd-tmpfiles[2346]: Skipping /boot May 8 00:02:16.952991 systemd-udevd[2347]: Using default interface naming scheme 'v255'. 
May 8 00:02:16.983813 zram_generator::config[2387]: No configuration found. May 8 00:02:17.011820 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (2379) May 8 00:02:17.032819 kernel: IPMI message handler: version 39.2 May 8 00:02:17.042817 kernel: ipmi device interface May 8 00:02:17.060028 kernel: ipmi_si: IPMI System Interface driver May 8 00:02:17.060095 kernel: ipmi_si: Unable to find any System Interface(s) May 8 00:02:17.075813 kernel: ipmi_ssif: IPMI SSIF Interface driver May 8 00:02:17.094283 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:02:17.174140 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 8 00:02:17.174428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. May 8 00:02:17.179025 systemd[1]: Reloading finished in 241 ms. May 8 00:02:17.192218 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:02:17.215727 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:02:17.233693 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:02:17.244815 systemd[1]: Finished ensure-sysext.service. May 8 00:02:17.281935 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:02:17.287915 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:02:17.292844 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:02:17.293889 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
May 8 00:02:17.299723 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:02:17.305526 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:02:17.311233 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:02:17.311689 lvm[2571]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:02:17.316966 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:02:17.321761 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:02:17.322702 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:02:17.323171 augenrules[2593]: No rules May 8 00:02:17.327394 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:02:17.328635 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:02:17.334984 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:02:17.341425 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:02:17.347511 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:02:17.353042 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:02:17.358618 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:02:17.364043 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:02:17.364775 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:02:17.369879 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
May 8 00:02:17.376015 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:02:17.380906 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:02:17.381055 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:02:17.385998 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:02:17.386140 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:02:17.391077 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:02:17.391222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:02:17.395957 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:02:17.396097 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:02:17.400960 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:02:17.406007 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:02:17.411059 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:02:17.425677 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:02:17.430879 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:02:17.451097 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:02:17.455634 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:02:17.455697 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:02:17.456843 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:02:17.458040 lvm[2625]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 8 00:02:17.463353 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:02:17.468250 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:02:17.469812 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:02:17.492291 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:02:17.498739 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:02:17.556085 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:02:17.561090 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:02:17.564603 systemd-resolved[2603]: Positive Trust Anchors: May 8 00:02:17.564616 systemd-resolved[2603]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:02:17.564648 systemd-resolved[2603]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:02:17.568284 systemd-resolved[2603]: Using system hostname 'ci-4230.1.1-n-1f162da554'. May 8 00:02:17.569532 systemd-networkd[2602]: lo: Link UP May 8 00:02:17.569537 systemd-networkd[2602]: lo: Gained carrier May 8 00:02:17.569875 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
May 8 00:02:17.573274 systemd-networkd[2602]: bond0: netdev ready May 8 00:02:17.574291 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:02:17.578593 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:02:17.582478 systemd-networkd[2602]: Enumeration completed May 8 00:02:17.582904 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:02:17.587172 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:02:17.591661 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:02:17.593587 systemd-networkd[2602]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:49:ed:dc.network. May 8 00:02:17.596062 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:02:17.600413 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:02:17.604794 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:02:17.604819 systemd[1]: Reached target paths.target - Path Units. May 8 00:02:17.609163 systemd[1]: Reached target timers.target - Timer Units. May 8 00:02:17.614248 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:02:17.620056 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:02:17.626293 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 8 00:02:17.633001 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:02:17.637932 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 8 00:02:17.642914 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 8 00:02:17.647628 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:02:17.652202 systemd[1]: Reached target network.target - Network. May 8 00:02:17.656606 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:02:17.660944 systemd[1]: Reached target basic.target - Basic System. May 8 00:02:17.665219 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:02:17.665239 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:02:17.676916 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:02:17.682561 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 8 00:02:17.688222 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:02:17.693818 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:02:17.699479 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:02:17.703725 jq[2662]: false May 8 00:02:17.703982 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:02:17.704207 coreos-metadata[2657]: May 08 00:02:17.704 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 8 00:02:17.705073 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:02:17.706961 coreos-metadata[2657]: May 08 00:02:17.706 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 8 00:02:17.709500 dbus-daemon[2658]: [system] SELinux support is enabled May 8 00:02:17.710595 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:02:17.716205 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
May 8 00:02:17.719347 extend-filesystems[2663]: Found loop4 May 8 00:02:17.725520 extend-filesystems[2663]: Found loop5 May 8 00:02:17.725520 extend-filesystems[2663]: Found loop6 May 8 00:02:17.725520 extend-filesystems[2663]: Found loop7 May 8 00:02:17.725520 extend-filesystems[2663]: Found nvme0n1 May 8 00:02:17.725520 extend-filesystems[2663]: Found nvme0n1p1 May 8 00:02:17.725520 extend-filesystems[2663]: Found nvme0n1p2 May 8 00:02:17.725520 extend-filesystems[2663]: Found nvme0n1p3 May 8 00:02:17.725520 extend-filesystems[2663]: Found usr May 8 00:02:17.725520 extend-filesystems[2663]: Found nvme0n1p4 May 8 00:02:17.725520 extend-filesystems[2663]: Found nvme0n1p6 May 8 00:02:17.725520 extend-filesystems[2663]: Found nvme0n1p7 May 8 00:02:17.725520 extend-filesystems[2663]: Found nvme0n1p9 May 8 00:02:17.725520 extend-filesystems[2663]: Checking size of /dev/nvme0n1p9 May 8 00:02:17.866429 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks May 8 00:02:17.866456 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (2518) May 8 00:02:17.721959 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:02:17.866521 extend-filesystems[2663]: Resized partition /dev/nvme0n1p9 May 8 00:02:17.863070 dbus-daemon[2658]: [system] Successfully activated service 'org.freedesktop.systemd1' May 8 00:02:17.734082 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:02:17.871564 extend-filesystems[2687]: resize2fs 1.47.1 (20-May-2024) May 8 00:02:17.740221 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 8 00:02:17.780942 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:02:17.790221 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 8 00:02:17.790820 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:02:17.882951 update_engine[2693]: I20250508 00:02:17.834265 2693 main.cc:92] Flatcar Update Engine starting May 8 00:02:17.882951 update_engine[2693]: I20250508 00:02:17.836845 2693 update_check_scheduler.cc:74] Next update check in 9m23s May 8 00:02:17.791502 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:02:17.883226 jq[2694]: true May 8 00:02:17.799842 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:02:17.809007 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:02:17.883505 tar[2699]: linux-arm64/helm May 8 00:02:17.813565 systemd-logind[2680]: Watching system buttons on /dev/input/event0 (Power Button) May 8 00:02:17.883830 jq[2701]: true May 8 00:02:17.818057 systemd-logind[2680]: New seat seat0. May 8 00:02:17.822389 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:02:17.822663 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:02:17.823009 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:02:17.823217 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:02:17.829096 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:02:17.829285 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 00:02:17.842695 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:02:17.862445 (ntainerd)[2702]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:02:17.870613 systemd[1]: Started update-engine.service - Update Engine. 
May 8 00:02:17.888141 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:02:17.888302 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:02:17.893290 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:02:17.893409 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:02:17.897186 bash[2721]: Updated "/home/core/.ssh/authorized_keys" May 8 00:02:17.921023 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:02:17.930857 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:02:17.938322 systemd[1]: Starting sshkeys.service... May 8 00:02:17.949052 locksmithd[2726]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:02:17.951063 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 8 00:02:17.957098 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
May 8 00:02:17.976794 coreos-metadata[2744]: May 08 00:02:17.976 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 8 00:02:17.977949 coreos-metadata[2744]: May 08 00:02:17.977 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 8 00:02:18.006951 containerd[2702]: time="2025-05-08T00:02:18.006870040Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 8 00:02:18.028858 containerd[2702]: time="2025-05-08T00:02:18.028817640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:02:18.030118 containerd[2702]: time="2025-05-08T00:02:18.030091120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:02:18.030141 containerd[2702]: time="2025-05-08T00:02:18.030117400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:02:18.030141 containerd[2702]: time="2025-05-08T00:02:18.030133400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:02:18.030305 containerd[2702]: time="2025-05-08T00:02:18.030289040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:02:18.030328 containerd[2702]: time="2025-05-08T00:02:18.030307120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:02:18.030377 containerd[2702]: time="2025-05-08T00:02:18.030361960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:02:18.030397 containerd[2702]: time="2025-05-08T00:02:18.030375360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:02:18.030576 containerd[2702]: time="2025-05-08T00:02:18.030558960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:02:18.030596 containerd[2702]: time="2025-05-08T00:02:18.030574520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:02:18.030596 containerd[2702]: time="2025-05-08T00:02:18.030587880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:02:18.030628 containerd[2702]: time="2025-05-08T00:02:18.030596880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:02:18.030690 containerd[2702]: time="2025-05-08T00:02:18.030676720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:02:18.031141 containerd[2702]: time="2025-05-08T00:02:18.031124240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:02:18.031274 containerd[2702]: time="2025-05-08T00:02:18.031257880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:02:18.031299 containerd[2702]: time="2025-05-08T00:02:18.031272640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:02:18.031365 containerd[2702]: time="2025-05-08T00:02:18.031351960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:02:18.031405 containerd[2702]: time="2025-05-08T00:02:18.031393680Z" level=info msg="metadata content store policy set" policy=shared May 8 00:02:18.038306 containerd[2702]: time="2025-05-08T00:02:18.038277000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:02:18.038339 containerd[2702]: time="2025-05-08T00:02:18.038326360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:02:18.038364 containerd[2702]: time="2025-05-08T00:02:18.038353840Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:02:18.038383 containerd[2702]: time="2025-05-08T00:02:18.038371840Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:02:18.038407 containerd[2702]: time="2025-05-08T00:02:18.038386240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:02:18.038547 containerd[2702]: time="2025-05-08T00:02:18.038530960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:02:18.038754 containerd[2702]: time="2025-05-08T00:02:18.038740640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 May 8 00:02:18.038864 containerd[2702]: time="2025-05-08T00:02:18.038850040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:02:18.038884 containerd[2702]: time="2025-05-08T00:02:18.038866560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:02:18.038884 containerd[2702]: time="2025-05-08T00:02:18.038880640Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:02:18.038921 containerd[2702]: time="2025-05-08T00:02:18.038894280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:02:18.038921 containerd[2702]: time="2025-05-08T00:02:18.038908240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:02:18.038956 containerd[2702]: time="2025-05-08T00:02:18.038920920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:02:18.038956 containerd[2702]: time="2025-05-08T00:02:18.038937920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:02:18.038987 containerd[2702]: time="2025-05-08T00:02:18.038952560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:02:18.038987 containerd[2702]: time="2025-05-08T00:02:18.038966160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:02:18.038987 containerd[2702]: time="2025-05-08T00:02:18.038977760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 May 8 00:02:18.039035 containerd[2702]: time="2025-05-08T00:02:18.038988840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:02:18.039035 containerd[2702]: time="2025-05-08T00:02:18.039008640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:02:18.039035 containerd[2702]: time="2025-05-08T00:02:18.039021600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:02:18.039035 containerd[2702]: time="2025-05-08T00:02:18.039034160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:02:18.039099 containerd[2702]: time="2025-05-08T00:02:18.039047400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:02:18.039099 containerd[2702]: time="2025-05-08T00:02:18.039059000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:02:18.039099 containerd[2702]: time="2025-05-08T00:02:18.039071160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:02:18.039099 containerd[2702]: time="2025-05-08T00:02:18.039081880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:02:18.039099 containerd[2702]: time="2025-05-08T00:02:18.039095360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:02:18.039178 containerd[2702]: time="2025-05-08T00:02:18.039109000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:02:18.039178 containerd[2702]: time="2025-05-08T00:02:18.039123600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 May 8 00:02:18.039178 containerd[2702]: time="2025-05-08T00:02:18.039135520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:02:18.039178 containerd[2702]: time="2025-05-08T00:02:18.039149080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:02:18.039178 containerd[2702]: time="2025-05-08T00:02:18.039161560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:02:18.039178 containerd[2702]: time="2025-05-08T00:02:18.039175760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:02:18.039273 containerd[2702]: time="2025-05-08T00:02:18.039196240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:02:18.039273 containerd[2702]: time="2025-05-08T00:02:18.039209480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:02:18.039273 containerd[2702]: time="2025-05-08T00:02:18.039220520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:02:18.039403 containerd[2702]: time="2025-05-08T00:02:18.039391120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:02:18.039424 containerd[2702]: time="2025-05-08T00:02:18.039408240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:02:18.039424 containerd[2702]: time="2025-05-08T00:02:18.039418320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 May 8 00:02:18.039458 containerd[2702]: time="2025-05-08T00:02:18.039430080Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:02:18.039458 containerd[2702]: time="2025-05-08T00:02:18.039439400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:02:18.039458 containerd[2702]: time="2025-05-08T00:02:18.039451240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:02:18.039509 containerd[2702]: time="2025-05-08T00:02:18.039463400Z" level=info msg="NRI interface is disabled by configuration." May 8 00:02:18.039509 containerd[2702]: time="2025-05-08T00:02:18.039473680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 00:02:18.039829 containerd[2702]: time="2025-05-08T00:02:18.039784880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:02:18.039924 containerd[2702]: time="2025-05-08T00:02:18.039836920Z" level=info msg="Connect containerd service" May 8 00:02:18.039924 containerd[2702]: time="2025-05-08T00:02:18.039873160Z" level=info msg="using legacy CRI server" May 8 00:02:18.039924 containerd[2702]: time="2025-05-08T00:02:18.039880240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:02:18.040112 containerd[2702]: time="2025-05-08T00:02:18.040098800Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:02:18.040727 containerd[2702]: time="2025-05-08T00:02:18.040706960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:02:18.040941 containerd[2702]: time="2025-05-08T00:02:18.040901720Z" level=info msg="Start subscribing containerd event" May 8 00:02:18.040980 containerd[2702]: time="2025-05-08T00:02:18.040953840Z" level=info msg="Start recovering state" May 8 00:02:18.041025 containerd[2702]: time="2025-05-08T00:02:18.041018240Z" level=info msg="Start event monitor" May 8 00:02:18.041066 containerd[2702]: time="2025-05-08T00:02:18.041028960Z" level=info msg="Start snapshots syncer" May 8 00:02:18.041066 containerd[2702]: time="2025-05-08T00:02:18.041038720Z" level=info msg="Start cni network conf syncer for default" May 8 00:02:18.041066 containerd[2702]: time="2025-05-08T00:02:18.041047040Z" level=info msg="Start streaming server" May 8 00:02:18.041277 containerd[2702]: time="2025-05-08T00:02:18.041260280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:02:18.041313 containerd[2702]: time="2025-05-08T00:02:18.041302240Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:02:18.041360 containerd[2702]: time="2025-05-08T00:02:18.041348760Z" level=info msg="containerd successfully booted in 0.035333s" May 8 00:02:18.041399 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:02:18.161460 tar[2699]: linux-arm64/LICENSE May 8 00:02:18.161545 tar[2699]: linux-arm64/README.md May 8 00:02:18.175038 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 8 00:02:18.277821 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 May 8 00:02:18.293904 extend-filesystems[2687]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 8 00:02:18.293904 extend-filesystems[2687]: old_desc_blocks = 1, new_desc_blocks = 112 May 8 00:02:18.293904 extend-filesystems[2687]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. May 8 00:02:18.324422 extend-filesystems[2663]: Resized filesystem in /dev/nvme0n1p9 May 8 00:02:18.324422 extend-filesystems[2663]: Found nvme1n1 May 8 00:02:18.296424 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:02:18.296757 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:02:18.310391 systemd[1]: extend-filesystems.service: Consumed 215ms CPU time, 68.9M memory peak. May 8 00:02:18.707116 coreos-metadata[2657]: May 08 00:02:18.707 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 8 00:02:18.707449 coreos-metadata[2657]: May 08 00:02:18.707 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 8 00:02:18.856445 sshd_keygen[2689]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:02:18.875792 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:02:18.876811 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up May 8 00:02:18.880810 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link May 8 00:02:18.881957 systemd-networkd[2602]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:49:ed:dd.network. May 8 00:02:18.906136 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:02:18.914896 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:02:18.915672 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:02:18.922460 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
May 8 00:02:18.934922 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:02:18.941433 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:02:18.947782 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 8 00:02:18.953164 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:02:18.978116 coreos-metadata[2744]: May 08 00:02:18.978 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 8 00:02:18.978585 coreos-metadata[2744]: May 08 00:02:18.978 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 8 00:02:19.488809 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 8 00:02:19.505828 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link May 8 00:02:19.505745 systemd-networkd[2602]: bond0: Configuring with /etc/systemd/network/05-bond0.network. May 8 00:02:19.507071 systemd-networkd[2602]: enP1p1s0f0np0: Link UP May 8 00:02:19.507309 systemd-networkd[2602]: enP1p1s0f0np0: Gained carrier May 8 00:02:19.507582 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 8 00:02:19.525814 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 8 00:02:19.535133 systemd-networkd[2602]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:49:ed:dc.network. May 8 00:02:19.535414 systemd-networkd[2602]: enP1p1s0f1np1: Link UP May 8 00:02:19.535608 systemd-networkd[2602]: enP1p1s0f1np1: Gained carrier May 8 00:02:19.545080 systemd-networkd[2602]: bond0: Link UP May 8 00:02:19.545323 systemd-networkd[2602]: bond0: Gained carrier May 8 00:02:19.545495 systemd-timesyncd[2604]: Network configuration changed, trying to establish connection. May 8 00:02:19.546095 systemd-timesyncd[2604]: Network configuration changed, trying to establish connection. 
May 8 00:02:19.546332 systemd-timesyncd[2604]: Network configuration changed, trying to establish connection. May 8 00:02:19.546463 systemd-timesyncd[2604]: Network configuration changed, trying to establish connection. May 8 00:02:19.631348 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex May 8 00:02:19.631413 kernel: bond0: active interface up! May 8 00:02:19.754817 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex May 8 00:02:20.707604 coreos-metadata[2657]: May 08 00:02:20.707 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 8 00:02:20.795873 systemd-networkd[2602]: bond0: Gained IPv6LL May 8 00:02:20.796254 systemd-timesyncd[2604]: Network configuration changed, trying to establish connection. May 8 00:02:20.860182 systemd-timesyncd[2604]: Network configuration changed, trying to establish connection. May 8 00:02:20.860270 systemd-timesyncd[2604]: Network configuration changed, trying to establish connection. May 8 00:02:20.862061 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:02:20.867871 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:02:20.885041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:02:20.891621 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:02:20.920160 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:02:20.978705 coreos-metadata[2744]: May 08 00:02:20.978 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 8 00:02:21.476087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 00:02:21.482127 (kubelet)[2807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:02:21.942047 kubelet[2807]: E0508 00:02:21.941975 2807 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:02:21.944419 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:02:21.944561 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:02:21.944874 systemd[1]: kubelet.service: Consumed 721ms CPU time, 252.2M memory peak. May 8 00:02:23.025994 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 May 8 00:02:23.026299 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity May 8 00:02:23.264480 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:02:23.279150 systemd[1]: Started sshd@0-145.40.69.49:22-139.178.68.195:35104.service - OpenSSH per-connection server daemon (139.178.68.195:35104). May 8 00:02:23.323582 coreos-metadata[2744]: May 08 00:02:23.323 INFO Fetch successful May 8 00:02:23.391123 unknown[2744]: wrote ssh authorized keys file for user: core May 8 00:02:23.410816 update-ssh-keys[2833]: Updated "/home/core/.ssh/authorized_keys" May 8 00:02:23.411988 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 8 00:02:23.419053 systemd[1]: Finished sshkeys.service. May 8 00:02:23.426313 coreos-metadata[2657]: May 08 00:02:23.426 INFO Fetch successful May 8 00:02:23.497666 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 8 00:02:23.504433 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... 
May 8 00:02:23.705791 sshd[2830]: Accepted publickey for core from 139.178.68.195 port 35104 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko May 8 00:02:23.707528 sshd-session[2830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:02:23.713019 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:02:23.732988 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:02:23.742437 systemd-logind[2680]: New session 1 of user core. May 8 00:02:23.745449 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:02:23.752792 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:02:23.761019 (systemd)[2844]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:02:23.762686 systemd-logind[2680]: New session c1 of user core. May 8 00:02:23.824838 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. May 8 00:02:23.830032 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:02:23.878514 systemd[2844]: Queued start job for default target default.target. May 8 00:02:23.893964 systemd[2844]: Created slice app.slice - User Application Slice. May 8 00:02:23.893990 systemd[2844]: Reached target paths.target - Paths. May 8 00:02:23.894021 systemd[2844]: Reached target timers.target - Timers. May 8 00:02:23.895244 systemd[2844]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:02:23.903380 systemd[2844]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:02:23.903431 systemd[2844]: Reached target sockets.target - Sockets. May 8 00:02:23.903474 systemd[2844]: Reached target basic.target - Basic System. May 8 00:02:23.903502 systemd[2844]: Reached target default.target - Main User Target. May 8 00:02:23.903524 systemd[2844]: Startup finished in 136ms. 
May 8 00:02:23.903811 systemd[1]: Started user@500.service - User Manager for UID 500.
May 8 00:02:23.915980 systemd[1]: Started session-1.scope - Session 1 of User core.
May 8 00:02:23.921151 systemd[1]: Startup finished in 3.221s (kernel) + 19.606s (initrd) + 9.960s (userspace) = 32.788s.
May 8 00:02:23.960032 login[2787]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:02:23.960971 login[2788]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:02:23.963323 systemd-logind[2680]: New session 2 of user core.
May 8 00:02:23.982912 systemd[1]: Started session-2.scope - Session 2 of User core.
May 8 00:02:23.984583 systemd-logind[2680]: New session 3 of user core.
May 8 00:02:23.985594 systemd[1]: Started session-3.scope - Session 3 of User core.
May 8 00:02:24.224478 systemd[1]: Started sshd@1-145.40.69.49:22-139.178.68.195:35110.service - OpenSSH per-connection server daemon (139.178.68.195:35110).
May 8 00:02:24.646335 sshd[2885]: Accepted publickey for core from 139.178.68.195 port 35110 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:02:24.647336 sshd-session[2885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:02:24.650167 systemd-logind[2680]: New session 4 of user core.
May 8 00:02:24.665926 systemd[1]: Started session-4.scope - Session 4 of User core.
May 8 00:02:24.943087 sshd[2887]: Connection closed by 139.178.68.195 port 35110
May 8 00:02:24.943577 sshd-session[2885]: pam_unix(sshd:session): session closed for user core
May 8 00:02:24.947191 systemd[1]: sshd@1-145.40.69.49:22-139.178.68.195:35110.service: Deactivated successfully.
May 8 00:02:24.948967 systemd[1]: session-4.scope: Deactivated successfully.
May 8 00:02:24.951201 systemd-logind[2680]: Session 4 logged out. Waiting for processes to exit.
May 8 00:02:24.951706 systemd-logind[2680]: Removed session 4.
May 8 00:02:25.015475 systemd[1]: Started sshd@2-145.40.69.49:22-139.178.68.195:55764.service - OpenSSH per-connection server daemon (139.178.68.195:55764).
May 8 00:02:25.444672 sshd[2895]: Accepted publickey for core from 139.178.68.195 port 55764 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:02:25.445659 sshd-session[2895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:02:25.448481 systemd-logind[2680]: New session 5 of user core.
May 8 00:02:25.460981 systemd[1]: Started session-5.scope - Session 5 of User core.
May 8 00:02:25.743406 sshd[2897]: Connection closed by 139.178.68.195 port 55764
May 8 00:02:25.743959 sshd-session[2895]: pam_unix(sshd:session): session closed for user core
May 8 00:02:25.747526 systemd[1]: sshd@2-145.40.69.49:22-139.178.68.195:55764.service: Deactivated successfully.
May 8 00:02:25.749371 systemd[1]: session-5.scope: Deactivated successfully.
May 8 00:02:25.750290 systemd-logind[2680]: Session 5 logged out. Waiting for processes to exit.
May 8 00:02:25.750822 systemd-logind[2680]: Removed session 5.
May 8 00:02:25.812455 systemd[1]: Started sshd@3-145.40.69.49:22-139.178.68.195:55774.service - OpenSSH per-connection server daemon (139.178.68.195:55774).
May 8 00:02:26.228410 sshd[2904]: Accepted publickey for core from 139.178.68.195 port 55774 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:02:26.229381 sshd-session[2904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:02:26.232237 systemd-logind[2680]: New session 6 of user core.
May 8 00:02:26.243926 systemd[1]: Started session-6.scope - Session 6 of User core.
May 8 00:02:26.267331 systemd-timesyncd[2604]: Network configuration changed, trying to establish connection.
May 8 00:02:26.519531 sshd[2907]: Connection closed by 139.178.68.195 port 55774
May 8 00:02:26.519987 sshd-session[2904]: pam_unix(sshd:session): session closed for user core
May 8 00:02:26.523573 systemd[1]: sshd@3-145.40.69.49:22-139.178.68.195:55774.service: Deactivated successfully.
May 8 00:02:26.525368 systemd[1]: session-6.scope: Deactivated successfully.
May 8 00:02:26.525940 systemd-logind[2680]: Session 6 logged out. Waiting for processes to exit.
May 8 00:02:26.526495 systemd-logind[2680]: Removed session 6.
May 8 00:02:26.593476 systemd[1]: Started sshd@4-145.40.69.49:22-139.178.68.195:55780.service - OpenSSH per-connection server daemon (139.178.68.195:55780).
May 8 00:02:27.020854 sshd[2913]: Accepted publickey for core from 139.178.68.195 port 55780 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:02:27.021915 sshd-session[2913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:02:27.024650 systemd-logind[2680]: New session 7 of user core.
May 8 00:02:27.033916 systemd[1]: Started session-7.scope - Session 7 of User core.
May 8 00:02:27.265008 sudo[2916]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 8 00:02:27.265277 sudo[2916]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:02:27.285606 sudo[2916]: pam_unix(sudo:session): session closed for user root
May 8 00:02:27.349220 sshd[2915]: Connection closed by 139.178.68.195 port 55780
May 8 00:02:27.349624 sshd-session[2913]: pam_unix(sshd:session): session closed for user core
May 8 00:02:27.352517 systemd[1]: sshd@4-145.40.69.49:22-139.178.68.195:55780.service: Deactivated successfully.
May 8 00:02:27.354026 systemd[1]: session-7.scope: Deactivated successfully.
May 8 00:02:27.354569 systemd-logind[2680]: Session 7 logged out. Waiting for processes to exit.
May 8 00:02:27.355187 systemd-logind[2680]: Removed session 7.
May 8 00:02:27.421566 systemd[1]: Started sshd@5-145.40.69.49:22-139.178.68.195:55782.service - OpenSSH per-connection server daemon (139.178.68.195:55782).
May 8 00:02:27.849366 sshd[2922]: Accepted publickey for core from 139.178.68.195 port 55782 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:02:27.850406 sshd-session[2922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:02:27.853826 systemd-logind[2680]: New session 8 of user core.
May 8 00:02:27.861977 systemd[1]: Started session-8.scope - Session 8 of User core.
May 8 00:02:28.085268 sudo[2926]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 8 00:02:28.085539 sudo[2926]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:02:28.087996 sudo[2926]: pam_unix(sudo:session): session closed for user root
May 8 00:02:28.092231 sudo[2925]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 8 00:02:28.092486 sudo[2925]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:02:28.108127 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 00:02:28.129689 augenrules[2948]: No rules
May 8 00:02:28.130751 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:02:28.131894 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 00:02:28.132619 sudo[2925]: pam_unix(sudo:session): session closed for user root
May 8 00:02:28.196168 sshd[2924]: Connection closed by 139.178.68.195 port 55782
May 8 00:02:28.196777 sshd-session[2922]: pam_unix(sshd:session): session closed for user core
May 8 00:02:28.200475 systemd[1]: sshd@5-145.40.69.49:22-139.178.68.195:55782.service: Deactivated successfully.
May 8 00:02:28.202198 systemd[1]: session-8.scope: Deactivated successfully.
May 8 00:02:28.203325 systemd-logind[2680]: Session 8 logged out. Waiting for processes to exit.
May 8 00:02:28.203928 systemd-logind[2680]: Removed session 8.
May 8 00:02:28.273400 systemd[1]: Started sshd@6-145.40.69.49:22-139.178.68.195:55792.service - OpenSSH per-connection server daemon (139.178.68.195:55792).
May 8 00:02:28.701274 sshd[2958]: Accepted publickey for core from 139.178.68.195 port 55792 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:02:28.702289 sshd-session[2958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:02:28.705185 systemd-logind[2680]: New session 9 of user core.
May 8 00:02:28.713973 systemd[1]: Started session-9.scope - Session 9 of User core.
May 8 00:02:28.940495 sudo[2961]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 8 00:02:28.940770 sudo[2961]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:02:29.236095 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 8 00:02:29.236157 (dockerd)[2989]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 8 00:02:29.437126 dockerd[2989]: time="2025-05-08T00:02:29.437083320Z" level=info msg="Starting up"
May 8 00:02:29.504909 dockerd[2989]: time="2025-05-08T00:02:29.504839600Z" level=info msg="Loading containers: start."
May 8 00:02:29.643816 kernel: Initializing XFRM netlink socket
May 8 00:02:29.661568 systemd-timesyncd[2604]: Network configuration changed, trying to establish connection.
May 8 00:02:29.710560 systemd-networkd[2602]: docker0: Link UP
May 8 00:02:29.740979 dockerd[2989]: time="2025-05-08T00:02:29.740951320Z" level=info msg="Loading containers: done."
May 8 00:02:29.749648 dockerd[2989]: time="2025-05-08T00:02:29.749616840Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 8 00:02:29.749721 dockerd[2989]: time="2025-05-08T00:02:29.749687480Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 8 00:02:29.750269 dockerd[2989]: time="2025-05-08T00:02:29.749852840Z" level=info msg="Daemon has completed initialization"
May 8 00:02:29.769643 systemd[1]: Started docker.service - Docker Application Container Engine.
May 8 00:02:29.769825 dockerd[2989]: time="2025-05-08T00:02:29.769535200Z" level=info msg="API listen on /run/docker.sock"
May 8 00:02:29.882197 systemd-timesyncd[2604]: Contacted time server [2602:f9bd:80:100::a]:123 (2.flatcar.pool.ntp.org).
May 8 00:02:29.882249 systemd-timesyncd[2604]: Initial clock synchronization to Thu 2025-05-08 00:02:29.946710 UTC.
May 8 00:02:30.408430 containerd[2702]: time="2025-05-08T00:02:30.408387091Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 8 00:02:30.494636 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1841362903-merged.mount: Deactivated successfully.
May 8 00:02:30.791661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2551522199.mount: Deactivated successfully.
May 8 00:02:31.760550 containerd[2702]: time="2025-05-08T00:02:31.760442202Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150"
May 8 00:02:31.760550 containerd[2702]: time="2025-05-08T00:02:31.760451910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:31.761629 containerd[2702]: time="2025-05-08T00:02:31.761585515Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:31.764554 containerd[2702]: time="2025-05-08T00:02:31.764528395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:31.765703 containerd[2702]: time="2025-05-08T00:02:31.765681820Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.357252842s"
May 8 00:02:31.765730 containerd[2702]: time="2025-05-08T00:02:31.765712757Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 8 00:02:31.783943 containerd[2702]: time="2025-05-08T00:02:31.783920259Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 8 00:02:32.197729 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 8 00:02:32.207017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:02:32.298701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:02:32.301984 (kubelet)[3329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:02:32.347985 kubelet[3329]: E0508 00:02:32.347947 3329 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:02:32.351281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:02:32.351414 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:02:32.351725 systemd[1]: kubelet.service: Consumed 132ms CPU time, 108.8M memory peak.
May 8 00:02:33.043771 containerd[2702]: time="2025-05-08T00:02:33.043704917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:33.044008 containerd[2702]: time="2025-05-08T00:02:33.043791222Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550"
May 8 00:02:33.045664 containerd[2702]: time="2025-05-08T00:02:33.045636204Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:33.048436 containerd[2702]: time="2025-05-08T00:02:33.048418558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:33.049559 containerd[2702]: time="2025-05-08T00:02:33.049519732Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.265560302s"
May 8 00:02:33.049585 containerd[2702]: time="2025-05-08T00:02:33.049569560Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 8 00:02:33.069083 containerd[2702]: time="2025-05-08T00:02:33.069058342Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 8 00:02:33.831341 containerd[2702]: time="2025-05-08T00:02:33.831307732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:33.832213 containerd[2702]: time="2025-05-08T00:02:33.832174603Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945"
May 8 00:02:33.832325 containerd[2702]: time="2025-05-08T00:02:33.832298471Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:33.835133 containerd[2702]: time="2025-05-08T00:02:33.835105879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:33.836271 containerd[2702]: time="2025-05-08T00:02:33.836249080Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 767.15961ms"
May 8 00:02:33.836293 containerd[2702]: time="2025-05-08T00:02:33.836276065Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 8 00:02:33.855257 containerd[2702]: time="2025-05-08T00:02:33.855235153Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 8 00:02:34.380249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4223129518.mount: Deactivated successfully.
May 8 00:02:34.548214 containerd[2702]: time="2025-05-08T00:02:34.548171180Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705"
May 8 00:02:34.548214 containerd[2702]: time="2025-05-08T00:02:34.548173672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:34.548958 containerd[2702]: time="2025-05-08T00:02:34.548933698Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:34.550668 containerd[2702]: time="2025-05-08T00:02:34.550648510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:34.551359 containerd[2702]: time="2025-05-08T00:02:34.551340374Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 696.076633ms"
May 8 00:02:34.551390 containerd[2702]: time="2025-05-08T00:02:34.551365894Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 8 00:02:34.570515 containerd[2702]: time="2025-05-08T00:02:34.570485508Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 8 00:02:34.950205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3616365589.mount: Deactivated successfully.
May 8 00:02:35.348008 containerd[2702]: time="2025-05-08T00:02:35.347970767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:35.348149 containerd[2702]: time="2025-05-08T00:02:35.348024950Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
May 8 00:02:35.349144 containerd[2702]: time="2025-05-08T00:02:35.349117022Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:35.351950 containerd[2702]: time="2025-05-08T00:02:35.351922411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:35.353068 containerd[2702]: time="2025-05-08T00:02:35.353040871Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 782.520927ms"
May 8 00:02:35.353094 containerd[2702]: time="2025-05-08T00:02:35.353072241Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 8 00:02:35.372564 containerd[2702]: time="2025-05-08T00:02:35.372515723Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 8 00:02:35.606499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount426850476.mount: Deactivated successfully.
May 8 00:02:35.607027 containerd[2702]: time="2025-05-08T00:02:35.606948775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:35.607027 containerd[2702]: time="2025-05-08T00:02:35.606987495Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
May 8 00:02:35.607824 containerd[2702]: time="2025-05-08T00:02:35.607717229Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:35.609758 containerd[2702]: time="2025-05-08T00:02:35.609700382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:35.610649 containerd[2702]: time="2025-05-08T00:02:35.610577164Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 238.026818ms"
May 8 00:02:35.610649 containerd[2702]: time="2025-05-08T00:02:35.610602388Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 8 00:02:35.629957 containerd[2702]: time="2025-05-08T00:02:35.629934610Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 8 00:02:36.042799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2730709945.mount: Deactivated successfully.
May 8 00:02:37.469003 containerd[2702]: time="2025-05-08T00:02:37.468963062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:37.469401 containerd[2702]: time="2025-05-08T00:02:37.468996247Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
May 8 00:02:37.470242 containerd[2702]: time="2025-05-08T00:02:37.470217835Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:37.473262 containerd[2702]: time="2025-05-08T00:02:37.473240163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:37.474451 containerd[2702]: time="2025-05-08T00:02:37.474431655Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 1.844471467s"
May 8 00:02:37.474476 containerd[2702]: time="2025-05-08T00:02:37.474457778Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 8 00:02:42.466837 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 8 00:02:42.483983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:02:42.491899 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 8 00:02:42.491962 systemd[1]: kubelet.service: Failed with result 'signal'.
May 8 00:02:42.492167 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:02:42.495233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:02:42.510563 systemd[1]: Reload requested from client PID 3764 ('systemctl') (unit session-9.scope)...
May 8 00:02:42.510574 systemd[1]: Reloading...
May 8 00:02:42.574812 zram_generator::config[3816]: No configuration found.
May 8 00:02:42.665537 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:02:42.756862 systemd[1]: Reloading finished in 245 ms.
May 8 00:02:42.796974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:02:42.799646 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:02:42.800402 systemd[1]: kubelet.service: Deactivated successfully.
May 8 00:02:42.800605 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:02:42.800642 systemd[1]: kubelet.service: Consumed 78ms CPU time, 82.5M memory peak.
May 8 00:02:42.802225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:02:42.903103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:02:42.906485 (kubelet)[3881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 00:02:42.937647 kubelet[3881]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:02:42.937647 kubelet[3881]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 8 00:02:42.937647 kubelet[3881]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:02:42.938612 kubelet[3881]: I0508 00:02:42.938580 3881 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 00:02:43.640668 kubelet[3881]: I0508 00:02:43.640644 3881 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 8 00:02:43.640668 kubelet[3881]: I0508 00:02:43.640665 3881 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 00:02:43.640851 kubelet[3881]: I0508 00:02:43.640838 3881 server.go:927] "Client rotation is on, will bootstrap in background"
May 8 00:02:43.653591 kubelet[3881]: I0508 00:02:43.653563 3881 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 00:02:43.653669 kubelet[3881]: E0508 00:02:43.653651 3881 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://145.40.69.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 145.40.69.49:6443: connect: connection refused
May 8 00:02:43.678305 kubelet[3881]: I0508 00:02:43.678281 3881 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 00:02:43.679486 kubelet[3881]: I0508 00:02:43.679453 3881 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 00:02:43.679639 kubelet[3881]: I0508 00:02:43.679490 3881 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-1f162da554","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 8 00:02:43.679724 kubelet[3881]: I0508 00:02:43.679713 3881 topology_manager.go:138] "Creating topology manager with none policy"
May 8 00:02:43.679724 kubelet[3881]: I0508 00:02:43.679723 3881 container_manager_linux.go:301] "Creating device plugin manager"
May 8 00:02:43.679944 kubelet[3881]: I0508 00:02:43.679932 3881 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:02:43.681060 kubelet[3881]: I0508 00:02:43.681046 3881 kubelet.go:400] "Attempting to sync node with API server"
May 8 00:02:43.681081 kubelet[3881]: I0508 00:02:43.681063 3881 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:02:43.681305 kubelet[3881]: I0508 00:02:43.681296 3881 kubelet.go:312] "Adding apiserver pod source"
May 8 00:02:43.681446 kubelet[3881]: I0508 00:02:43.681437 3881 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:02:43.682397 kubelet[3881]: I0508 00:02:43.682376 3881 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 8 00:02:43.682508 kubelet[3881]: W0508 00:02:43.682471 3881 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://145.40.69.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.69.49:6443: connect: connection refused
May 8 00:02:43.682535 kubelet[3881]: W0508 00:02:43.682496 3881 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://145.40.69.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-1f162da554&limit=500&resourceVersion=0": dial tcp 145.40.69.49:6443: connect: connection refused
May 8 00:02:43.682560 kubelet[3881]: E0508 00:02:43.682544 3881 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://145.40.69.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-1f162da554&limit=500&resourceVersion=0": dial tcp 145.40.69.49:6443: connect: connection refused
May 8 00:02:43.682560 kubelet[3881]: E0508 00:02:43.682523 3881 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://145.40.69.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.69.49:6443: connect: connection refused
May 8 00:02:43.682737 kubelet[3881]: I0508 00:02:43.682726 3881 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:02:43.682877 kubelet[3881]: W0508 00:02:43.682867 3881 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 8 00:02:43.683599 kubelet[3881]: I0508 00:02:43.683589 3881 server.go:1264] "Started kubelet"
May 8 00:02:43.683860 kubelet[3881]: I0508 00:02:43.683813 3881 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 8 00:02:43.683942 kubelet[3881]: I0508 00:02:43.683891 3881 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 00:02:43.684207 kubelet[3881]: I0508 00:02:43.684192 3881 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 00:02:43.684932 kubelet[3881]: I0508 00:02:43.684915 3881 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 00:02:43.688046 kubelet[3881]: I0508 00:02:43.687998 3881 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 8 00:02:43.688103 kubelet[3881]: I0508 00:02:43.687999 3881 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 8 00:02:43.688393 kubelet[3881]: I0508 00:02:43.688382 3881 reconciler.go:26] "Reconciler: start to sync state"
May 8 00:02:43.688423 kubelet[3881]: W0508 00:02:43.688360 3881 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://145.40.69.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.69.49:6443: connect: connection refused
May 8 00:02:43.688423 kubelet[3881]: E0508 00:02:43.688419 3881 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://145.40.69.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.69.49:6443: connect: connection refused
May 8 00:02:43.688855 kubelet[3881]: E0508 00:02:43.688820 3881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.69.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-1f162da554?timeout=10s\": dial tcp 145.40.69.49:6443: connect: connection refused" interval="200ms"
May 8 00:02:43.688916 kubelet[3881]: I0508 00:02:43.688900 3881 factory.go:221] Registration of the systemd container factory successfully
May 8 00:02:43.688998 kubelet[3881]: E0508 00:02:43.688776 3881 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://145.40.69.49:6443/api/v1/namespaces/default/events\": dial tcp 145.40.69.49:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-n-1f162da554.183d6458f2d509b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-n-1f162da554,UID:ci-4230.1.1-n-1f162da554,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-n-1f162da554,},FirstTimestamp:2025-05-08 00:02:43.683568051 +0000 UTC m=+0.774293691,LastTimestamp:2025-05-08 00:02:43.683568051 +0000 UTC m=+0.774293691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-n-1f162da554,}"
May 8 00:02:43.689959 kubelet[3881]: E0508 00:02:43.689941 3881 kubelet.go:1467] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:02:43.690032 kubelet[3881]: I0508 00:02:43.690012 3881 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:02:43.690753 kubelet[3881]: I0508 00:02:43.690737 3881 server.go:455] "Adding debug handlers to kubelet server" May 8 00:02:43.690817 kubelet[3881]: I0508 00:02:43.690740 3881 factory.go:221] Registration of the containerd container factory successfully May 8 00:02:43.702125 kubelet[3881]: I0508 00:02:43.702090 3881 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:02:43.703037 kubelet[3881]: I0508 00:02:43.703027 3881 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:02:43.703182 kubelet[3881]: I0508 00:02:43.703176 3881 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:02:43.703201 kubelet[3881]: I0508 00:02:43.703195 3881 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:02:43.703246 kubelet[3881]: E0508 00:02:43.703233 3881 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:02:43.703613 kubelet[3881]: W0508 00:02:43.703574 3881 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://145.40.69.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.69.49:6443: connect: connection refused May 8 00:02:43.703636 kubelet[3881]: E0508 00:02:43.703625 3881 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://145.40.69.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.69.49:6443: connect: 
connection refused May 8 00:02:43.706567 kubelet[3881]: I0508 00:02:43.706547 3881 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:02:43.706567 kubelet[3881]: I0508 00:02:43.706560 3881 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:02:43.706644 kubelet[3881]: I0508 00:02:43.706578 3881 state_mem.go:36] "Initialized new in-memory state store" May 8 00:02:43.707333 kubelet[3881]: I0508 00:02:43.707317 3881 policy_none.go:49] "None policy: Start" May 8 00:02:43.707671 kubelet[3881]: I0508 00:02:43.707655 3881 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:02:43.707694 kubelet[3881]: I0508 00:02:43.707676 3881 state_mem.go:35] "Initializing new in-memory state store" May 8 00:02:43.711337 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:02:43.728005 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:02:43.730487 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 8 00:02:43.745536 kubelet[3881]: I0508 00:02:43.745507 3881 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:02:43.745739 kubelet[3881]: I0508 00:02:43.745698 3881 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:02:43.745828 kubelet[3881]: I0508 00:02:43.745816 3881 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:02:43.746523 kubelet[3881]: E0508 00:02:43.746505 3881 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.1-n-1f162da554\" not found" May 8 00:02:43.783232 kubelet[3881]: E0508 00:02:43.783146 3881 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://145.40.69.49:6443/api/v1/namespaces/default/events\": dial tcp 145.40.69.49:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-n-1f162da554.183d6458f2d509b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-n-1f162da554,UID:ci-4230.1.1-n-1f162da554,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-n-1f162da554,},FirstTimestamp:2025-05-08 00:02:43.683568051 +0000 UTC m=+0.774293691,LastTimestamp:2025-05-08 00:02:43.683568051 +0000 UTC m=+0.774293691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-n-1f162da554,}" May 8 00:02:43.786713 kubelet[3881]: I0508 00:02:43.786691 3881 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-1f162da554" May 8 00:02:43.786969 kubelet[3881]: E0508 00:02:43.786941 3881 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://145.40.69.49:6443/api/v1/nodes\": dial tcp 
145.40.69.49:6443: connect: connection refused" node="ci-4230.1.1-n-1f162da554" May 8 00:02:43.803346 kubelet[3881]: I0508 00:02:43.803305 3881 topology_manager.go:215] "Topology Admit Handler" podUID="58ca97bf6f7786467edefca738146cdb" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.1-n-1f162da554" May 8 00:02:43.804771 kubelet[3881]: I0508 00:02:43.804749 3881 topology_manager.go:215] "Topology Admit Handler" podUID="4181a93c6579b43bd9bc9ce5bc98a821" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.1-n-1f162da554" May 8 00:02:43.806312 kubelet[3881]: I0508 00:02:43.806285 3881 topology_manager.go:215] "Topology Admit Handler" podUID="bde43824c71317f55cf7b80f3f29339a" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.1-n-1f162da554" May 8 00:02:43.809969 systemd[1]: Created slice kubepods-burstable-pod58ca97bf6f7786467edefca738146cdb.slice - libcontainer container kubepods-burstable-pod58ca97bf6f7786467edefca738146cdb.slice. May 8 00:02:43.841234 systemd[1]: Created slice kubepods-burstable-pod4181a93c6579b43bd9bc9ce5bc98a821.slice - libcontainer container kubepods-burstable-pod4181a93c6579b43bd9bc9ce5bc98a821.slice. May 8 00:02:43.854832 systemd[1]: Created slice kubepods-burstable-podbde43824c71317f55cf7b80f3f29339a.slice - libcontainer container kubepods-burstable-podbde43824c71317f55cf7b80f3f29339a.slice. 
May 8 00:02:43.889379 kubelet[3881]: E0508 00:02:43.889342 3881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.69.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-1f162da554?timeout=10s\": dial tcp 145.40.69.49:6443: connect: connection refused" interval="400ms" May 8 00:02:43.988928 kubelet[3881]: I0508 00:02:43.988904 3881 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-1f162da554" May 8 00:02:43.989252 kubelet[3881]: E0508 00:02:43.989133 3881 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://145.40.69.49:6443/api/v1/nodes\": dial tcp 145.40.69.49:6443: connect: connection refused" node="ci-4230.1.1-n-1f162da554" May 8 00:02:43.989252 kubelet[3881]: I0508 00:02:43.989145 3881 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4181a93c6579b43bd9bc9ce5bc98a821-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-1f162da554\" (UID: \"4181a93c6579b43bd9bc9ce5bc98a821\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-1f162da554" May 8 00:02:43.989252 kubelet[3881]: I0508 00:02:43.989171 3881 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4181a93c6579b43bd9bc9ce5bc98a821-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-1f162da554\" (UID: \"4181a93c6579b43bd9bc9ce5bc98a821\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-1f162da554" May 8 00:02:43.989252 kubelet[3881]: I0508 00:02:43.989191 3881 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bde43824c71317f55cf7b80f3f29339a-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-1f162da554\" (UID: \"bde43824c71317f55cf7b80f3f29339a\") " 
pod="kube-system/kube-controller-manager-ci-4230.1.1-n-1f162da554" May 8 00:02:43.989252 kubelet[3881]: I0508 00:02:43.989206 3881 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bde43824c71317f55cf7b80f3f29339a-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-1f162da554\" (UID: \"bde43824c71317f55cf7b80f3f29339a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-1f162da554" May 8 00:02:43.989373 kubelet[3881]: I0508 00:02:43.989222 3881 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58ca97bf6f7786467edefca738146cdb-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-1f162da554\" (UID: \"58ca97bf6f7786467edefca738146cdb\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-1f162da554" May 8 00:02:43.989373 kubelet[3881]: I0508 00:02:43.989235 3881 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4181a93c6579b43bd9bc9ce5bc98a821-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-1f162da554\" (UID: \"4181a93c6579b43bd9bc9ce5bc98a821\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-1f162da554" May 8 00:02:43.989373 kubelet[3881]: I0508 00:02:43.989251 3881 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bde43824c71317f55cf7b80f3f29339a-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-1f162da554\" (UID: \"bde43824c71317f55cf7b80f3f29339a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-1f162da554" May 8 00:02:43.989373 kubelet[3881]: I0508 00:02:43.989283 3881 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/bde43824c71317f55cf7b80f3f29339a-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-1f162da554\" (UID: \"bde43824c71317f55cf7b80f3f29339a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-1f162da554" May 8 00:02:43.989373 kubelet[3881]: I0508 00:02:43.989316 3881 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bde43824c71317f55cf7b80f3f29339a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-1f162da554\" (UID: \"bde43824c71317f55cf7b80f3f29339a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-1f162da554" May 8 00:02:44.140602 containerd[2702]: time="2025-05-08T00:02:44.140569596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-1f162da554,Uid:58ca97bf6f7786467edefca738146cdb,Namespace:kube-system,Attempt:0,}" May 8 00:02:44.154165 containerd[2702]: time="2025-05-08T00:02:44.154123999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-1f162da554,Uid:4181a93c6579b43bd9bc9ce5bc98a821,Namespace:kube-system,Attempt:0,}" May 8 00:02:44.156595 containerd[2702]: time="2025-05-08T00:02:44.156571928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-1f162da554,Uid:bde43824c71317f55cf7b80f3f29339a,Namespace:kube-system,Attempt:0,}" May 8 00:02:44.290056 kubelet[3881]: E0508 00:02:44.289959 3881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.69.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-1f162da554?timeout=10s\": dial tcp 145.40.69.49:6443: connect: connection refused" interval="800ms" May 8 00:02:44.366055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3668232813.mount: Deactivated successfully. 
May 8 00:02:44.366636 containerd[2702]: time="2025-05-08T00:02:44.366605873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:02:44.367124 containerd[2702]: time="2025-05-08T00:02:44.367090877Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 8 00:02:44.367565 containerd[2702]: time="2025-05-08T00:02:44.367540156Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:02:44.367782 containerd[2702]: time="2025-05-08T00:02:44.367747254Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:02:44.368114 containerd[2702]: time="2025-05-08T00:02:44.368084835Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:02:44.368280 containerd[2702]: time="2025-05-08T00:02:44.368259332Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:02:44.371636 containerd[2702]: time="2025-05-08T00:02:44.371610026Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:02:44.373057 containerd[2702]: time="2025-05-08T00:02:44.373036963Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 232.401766ms" May 8 00:02:44.373653 containerd[2702]: time="2025-05-08T00:02:44.373625056Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 217.004747ms" May 8 00:02:44.374225 containerd[2702]: time="2025-05-08T00:02:44.374205499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:02:44.376384 containerd[2702]: time="2025-05-08T00:02:44.376355937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 222.15292ms" May 8 00:02:44.390989 kubelet[3881]: I0508 00:02:44.390965 3881 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-1f162da554" May 8 00:02:44.391313 kubelet[3881]: E0508 00:02:44.391282 3881 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://145.40.69.49:6443/api/v1/nodes\": dial tcp 145.40.69.49:6443: connect: connection refused" node="ci-4230.1.1-n-1f162da554" May 8 00:02:44.491821 containerd[2702]: time="2025-05-08T00:02:44.491743948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:02:44.491850 containerd[2702]: time="2025-05-08T00:02:44.491818641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:02:44.491850 containerd[2702]: time="2025-05-08T00:02:44.491830776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:02:44.491922 containerd[2702]: time="2025-05-08T00:02:44.491903907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:02:44.493513 containerd[2702]: time="2025-05-08T00:02:44.493453758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:02:44.493533 containerd[2702]: time="2025-05-08T00:02:44.493514433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:02:44.493552 containerd[2702]: time="2025-05-08T00:02:44.493526328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:02:44.493621 containerd[2702]: time="2025-05-08T00:02:44.493602022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:02:44.494036 containerd[2702]: time="2025-05-08T00:02:44.493681842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:02:44.494061 containerd[2702]: time="2025-05-08T00:02:44.494038686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:02:44.494061 containerd[2702]: time="2025-05-08T00:02:44.494051983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:02:44.494140 containerd[2702]: time="2025-05-08T00:02:44.494122030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:02:44.515998 systemd[1]: Started cri-containerd-5b19e723a0255cbec8d4413687f1eae2232745290e1fcf1c83a945481dec0c87.scope - libcontainer container 5b19e723a0255cbec8d4413687f1eae2232745290e1fcf1c83a945481dec0c87. May 8 00:02:44.517460 systemd[1]: Started cri-containerd-6c2304a02c8f2343fae95ddbf2d04b39edba25c51000f29597d20c8593d315cf.scope - libcontainer container 6c2304a02c8f2343fae95ddbf2d04b39edba25c51000f29597d20c8593d315cf. May 8 00:02:44.518761 systemd[1]: Started cri-containerd-ed7d3f0f5dbba8f9b8504f45da2707a67f58a096a153424a85ed3e8939d8e991.scope - libcontainer container ed7d3f0f5dbba8f9b8504f45da2707a67f58a096a153424a85ed3e8939d8e991. 
May 8 00:02:44.539596 containerd[2702]: time="2025-05-08T00:02:44.539553701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-1f162da554,Uid:58ca97bf6f7786467edefca738146cdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b19e723a0255cbec8d4413687f1eae2232745290e1fcf1c83a945481dec0c87\"" May 8 00:02:44.541890 containerd[2702]: time="2025-05-08T00:02:44.541834021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-1f162da554,Uid:bde43824c71317f55cf7b80f3f29339a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c2304a02c8f2343fae95ddbf2d04b39edba25c51000f29597d20c8593d315cf\"" May 8 00:02:44.542489 containerd[2702]: time="2025-05-08T00:02:44.542469293Z" level=info msg="CreateContainer within sandbox \"5b19e723a0255cbec8d4413687f1eae2232745290e1fcf1c83a945481dec0c87\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:02:44.543559 containerd[2702]: time="2025-05-08T00:02:44.543532417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-1f162da554,Uid:4181a93c6579b43bd9bc9ce5bc98a821,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed7d3f0f5dbba8f9b8504f45da2707a67f58a096a153424a85ed3e8939d8e991\"" May 8 00:02:44.543833 containerd[2702]: time="2025-05-08T00:02:44.543802994Z" level=info msg="CreateContainer within sandbox \"6c2304a02c8f2343fae95ddbf2d04b39edba25c51000f29597d20c8593d315cf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:02:44.545525 containerd[2702]: time="2025-05-08T00:02:44.545501670Z" level=info msg="CreateContainer within sandbox \"ed7d3f0f5dbba8f9b8504f45da2707a67f58a096a153424a85ed3e8939d8e991\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:02:44.549469 containerd[2702]: time="2025-05-08T00:02:44.549442819Z" level=info msg="CreateContainer within sandbox 
\"5b19e723a0255cbec8d4413687f1eae2232745290e1fcf1c83a945481dec0c87\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7057f17ec17e4580ba3016c78e45405056e8b8cffd8241dcf648f4a5178feebe\"" May 8 00:02:44.549512 containerd[2702]: time="2025-05-08T00:02:44.549488316Z" level=info msg="CreateContainer within sandbox \"6c2304a02c8f2343fae95ddbf2d04b39edba25c51000f29597d20c8593d315cf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"26bae1c42b4fb2abf89847b0c0d1ec2a11bdd1bd423a8fa3be0d71fde71e6acc\"" May 8 00:02:44.549922 containerd[2702]: time="2025-05-08T00:02:44.549905155Z" level=info msg="StartContainer for \"26bae1c42b4fb2abf89847b0c0d1ec2a11bdd1bd423a8fa3be0d71fde71e6acc\"" May 8 00:02:44.549981 containerd[2702]: time="2025-05-08T00:02:44.549969035Z" level=info msg="StartContainer for \"7057f17ec17e4580ba3016c78e45405056e8b8cffd8241dcf648f4a5178feebe\"" May 8 00:02:44.550746 containerd[2702]: time="2025-05-08T00:02:44.550725177Z" level=info msg="CreateContainer within sandbox \"ed7d3f0f5dbba8f9b8504f45da2707a67f58a096a153424a85ed3e8939d8e991\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"92b0e22a3d24163065bf6ef32308fe5f9715a63b0cbdf9a11824f6460385dcb8\"" May 8 00:02:44.551026 containerd[2702]: time="2025-05-08T00:02:44.551007208Z" level=info msg="StartContainer for \"92b0e22a3d24163065bf6ef32308fe5f9715a63b0cbdf9a11824f6460385dcb8\"" May 8 00:02:44.562488 kubelet[3881]: W0508 00:02:44.562442 3881 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://145.40.69.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.69.49:6443: connect: connection refused May 8 00:02:44.562546 kubelet[3881]: E0508 00:02:44.562498 3881 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://145.40.69.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.69.49:6443: connect: connection refused May 8 00:02:44.577983 systemd[1]: Started cri-containerd-26bae1c42b4fb2abf89847b0c0d1ec2a11bdd1bd423a8fa3be0d71fde71e6acc.scope - libcontainer container 26bae1c42b4fb2abf89847b0c0d1ec2a11bdd1bd423a8fa3be0d71fde71e6acc. May 8 00:02:44.579116 systemd[1]: Started cri-containerd-7057f17ec17e4580ba3016c78e45405056e8b8cffd8241dcf648f4a5178feebe.scope - libcontainer container 7057f17ec17e4580ba3016c78e45405056e8b8cffd8241dcf648f4a5178feebe. May 8 00:02:44.580228 systemd[1]: Started cri-containerd-92b0e22a3d24163065bf6ef32308fe5f9715a63b0cbdf9a11824f6460385dcb8.scope - libcontainer container 92b0e22a3d24163065bf6ef32308fe5f9715a63b0cbdf9a11824f6460385dcb8. May 8 00:02:44.603489 containerd[2702]: time="2025-05-08T00:02:44.603096932Z" level=info msg="StartContainer for \"7057f17ec17e4580ba3016c78e45405056e8b8cffd8241dcf648f4a5178feebe\" returns successfully" May 8 00:02:44.603880 containerd[2702]: time="2025-05-08T00:02:44.603846826Z" level=info msg="StartContainer for \"26bae1c42b4fb2abf89847b0c0d1ec2a11bdd1bd423a8fa3be0d71fde71e6acc\" returns successfully" May 8 00:02:44.605369 containerd[2702]: time="2025-05-08T00:02:44.605341208Z" level=info msg="StartContainer for \"92b0e22a3d24163065bf6ef32308fe5f9715a63b0cbdf9a11824f6460385dcb8\" returns successfully" May 8 00:02:44.624066 kubelet[3881]: W0508 00:02:44.624019 3881 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://145.40.69.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.69.49:6443: connect: connection refused May 8 00:02:44.624115 kubelet[3881]: E0508 00:02:44.624077 3881 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://145.40.69.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.69.49:6443: connect: connection refused May 8 00:02:45.193631 kubelet[3881]: I0508 00:02:45.193611 3881 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-1f162da554" May 8 00:02:46.145451 kubelet[3881]: E0508 00:02:46.145415 3881 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.1-n-1f162da554\" not found" node="ci-4230.1.1-n-1f162da554" May 8 00:02:46.246181 kubelet[3881]: I0508 00:02:46.246153 3881 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.1-n-1f162da554" May 8 00:02:46.683955 kubelet[3881]: I0508 00:02:46.683931 3881 apiserver.go:52] "Watching apiserver" May 8 00:02:46.688890 kubelet[3881]: I0508 00:02:46.688873 3881 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:02:46.777538 kubelet[3881]: E0508 00:02:46.777496 3881 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-n-1f162da554\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.1.1-n-1f162da554" May 8 00:02:47.277821 kubelet[3881]: W0508 00:02:47.277790 3881 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 00:02:48.332898 systemd[1]: Reload requested from client PID 4309 ('systemctl') (unit session-9.scope)... May 8 00:02:48.332909 systemd[1]: Reloading... May 8 00:02:48.400820 zram_generator::config[4359]: No configuration found. May 8 00:02:48.490593 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 8 00:02:48.592477 systemd[1]: Reloading finished in 259 ms. May 8 00:02:48.612290 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:02:48.629252 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:02:48.629515 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:02:48.629565 systemd[1]: kubelet.service: Consumed 1.220s CPU time, 135.6M memory peak. May 8 00:02:48.640078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:02:48.738907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:02:48.742524 (kubelet)[4417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:02:48.774677 kubelet[4417]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:02:48.774677 kubelet[4417]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:02:48.774677 kubelet[4417]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:02:48.774963 kubelet[4417]: I0508 00:02:48.774717 4417 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:02:48.778324 kubelet[4417]: I0508 00:02:48.778306 4417 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:02:48.778353 kubelet[4417]: I0508 00:02:48.778325 4417 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:02:48.778486 kubelet[4417]: I0508 00:02:48.778479 4417 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:02:48.779713 kubelet[4417]: I0508 00:02:48.779701 4417 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:02:48.780819 kubelet[4417]: I0508 00:02:48.780801 4417 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:02:48.802597 kubelet[4417]: I0508 00:02:48.802572 4417 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:02:48.802779 kubelet[4417]: I0508 00:02:48.802749 4417 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:02:48.802935 kubelet[4417]: I0508 00:02:48.802776 4417 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-1f162da554","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:02:48.803001 kubelet[4417]: I0508 00:02:48.802940 4417 topology_manager.go:138] "Creating topology manager with none policy" May 8 
00:02:48.803001 kubelet[4417]: I0508 00:02:48.802949 4417 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:02:48.803001 kubelet[4417]: I0508 00:02:48.802982 4417 state_mem.go:36] "Initialized new in-memory state store" May 8 00:02:48.803078 kubelet[4417]: I0508 00:02:48.803068 4417 kubelet.go:400] "Attempting to sync node with API server" May 8 00:02:48.803099 kubelet[4417]: I0508 00:02:48.803079 4417 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:02:48.803120 kubelet[4417]: I0508 00:02:48.803108 4417 kubelet.go:312] "Adding apiserver pod source" May 8 00:02:48.803141 kubelet[4417]: I0508 00:02:48.803120 4417 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:02:48.803564 kubelet[4417]: I0508 00:02:48.803545 4417 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:02:48.803711 kubelet[4417]: I0508 00:02:48.803700 4417 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:02:48.804065 kubelet[4417]: I0508 00:02:48.804053 4417 server.go:1264] "Started kubelet" May 8 00:02:48.804148 kubelet[4417]: I0508 00:02:48.804104 4417 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:02:48.804173 kubelet[4417]: I0508 00:02:48.804110 4417 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:02:48.804320 kubelet[4417]: I0508 00:02:48.804307 4417 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:02:48.805028 kubelet[4417]: I0508 00:02:48.805012 4417 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:02:48.805105 kubelet[4417]: I0508 00:02:48.805089 4417 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:02:48.805105 kubelet[4417]: E0508 00:02:48.805094 4417 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-1f162da554\" not found" May 8 00:02:48.805148 kubelet[4417]: I0508 00:02:48.805129 4417 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:02:48.805266 kubelet[4417]: I0508 00:02:48.805255 4417 reconciler.go:26] "Reconciler: start to sync state" May 8 00:02:48.805619 kubelet[4417]: E0508 00:02:48.805604 4417 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:02:48.805659 kubelet[4417]: I0508 00:02:48.805645 4417 factory.go:221] Registration of the systemd container factory successfully May 8 00:02:48.805741 kubelet[4417]: I0508 00:02:48.805728 4417 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:02:48.806122 kubelet[4417]: I0508 00:02:48.806107 4417 server.go:455] "Adding debug handlers to kubelet server" May 8 00:02:48.808540 kubelet[4417]: I0508 00:02:48.808522 4417 factory.go:221] Registration of the containerd container factory successfully May 8 00:02:48.813448 kubelet[4417]: I0508 00:02:48.813418 4417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:02:48.814396 kubelet[4417]: I0508 00:02:48.814385 4417 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:02:48.814422 kubelet[4417]: I0508 00:02:48.814413 4417 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:02:48.814445 kubelet[4417]: I0508 00:02:48.814428 4417 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:02:48.814494 kubelet[4417]: E0508 00:02:48.814473 4417 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:02:48.840507 kubelet[4417]: I0508 00:02:48.840484 4417 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:02:48.840507 kubelet[4417]: I0508 00:02:48.840500 4417 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:02:48.840596 kubelet[4417]: I0508 00:02:48.840520 4417 state_mem.go:36] "Initialized new in-memory state store" May 8 00:02:48.840674 kubelet[4417]: I0508 00:02:48.840657 4417 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:02:48.840714 kubelet[4417]: I0508 00:02:48.840668 4417 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:02:48.840714 kubelet[4417]: I0508 00:02:48.840688 4417 policy_none.go:49] "None policy: Start" May 8 00:02:48.841099 kubelet[4417]: I0508 00:02:48.841086 4417 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:02:48.841121 kubelet[4417]: I0508 00:02:48.841105 4417 state_mem.go:35] "Initializing new in-memory state store" May 8 00:02:48.841275 kubelet[4417]: I0508 00:02:48.841264 4417 state_mem.go:75] "Updated machine memory state" May 8 00:02:48.844446 kubelet[4417]: I0508 00:02:48.844397 4417 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:02:48.844583 kubelet[4417]: I0508 00:02:48.844551 4417 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:02:48.844667 kubelet[4417]: I0508 00:02:48.844657 4417 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:02:48.908927 kubelet[4417]: I0508 00:02:48.908905 4417 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-1f162da554" May 8 00:02:48.913146 kubelet[4417]: I0508 00:02:48.913125 4417 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230.1.1-n-1f162da554" May 8 00:02:48.913194 kubelet[4417]: I0508 00:02:48.913186 4417 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.1-n-1f162da554" May 8 00:02:48.914661 kubelet[4417]: I0508 00:02:48.914633 4417 topology_manager.go:215] "Topology Admit Handler" podUID="4181a93c6579b43bd9bc9ce5bc98a821" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.1-n-1f162da554" May 8 00:02:48.914738 kubelet[4417]: I0508 00:02:48.914726 4417 topology_manager.go:215] "Topology Admit Handler" podUID="bde43824c71317f55cf7b80f3f29339a" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.1-n-1f162da554" May 8 00:02:48.914772 kubelet[4417]: I0508 00:02:48.914761 4417 topology_manager.go:215] "Topology Admit Handler" podUID="58ca97bf6f7786467edefca738146cdb" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.1-n-1f162da554" May 8 00:02:48.917766 kubelet[4417]: W0508 00:02:48.917746 4417 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 00:02:48.917833 kubelet[4417]: W0508 00:02:48.917823 4417 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 00:02:48.917971 kubelet[4417]: W0508 00:02:48.917954 4417 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 00:02:48.918012 kubelet[4417]: E0508 00:02:48.917995 4417 kubelet.go:1928] "Failed creating 
a mirror pod for" err="pods \"kube-scheduler-ci-4230.1.1-n-1f162da554\" already exists" pod="kube-system/kube-scheduler-ci-4230.1.1-n-1f162da554" May 8 00:02:49.106271 kubelet[4417]: I0508 00:02:49.106219 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4181a93c6579b43bd9bc9ce5bc98a821-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-1f162da554\" (UID: \"4181a93c6579b43bd9bc9ce5bc98a821\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-1f162da554" May 8 00:02:49.106271 kubelet[4417]: I0508 00:02:49.106248 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bde43824c71317f55cf7b80f3f29339a-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-1f162da554\" (UID: \"bde43824c71317f55cf7b80f3f29339a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-1f162da554" May 8 00:02:49.106271 kubelet[4417]: I0508 00:02:49.106269 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bde43824c71317f55cf7b80f3f29339a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-1f162da554\" (UID: \"bde43824c71317f55cf7b80f3f29339a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-1f162da554" May 8 00:02:49.106416 kubelet[4417]: I0508 00:02:49.106289 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58ca97bf6f7786467edefca738146cdb-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-1f162da554\" (UID: \"58ca97bf6f7786467edefca738146cdb\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-1f162da554" May 8 00:02:49.106416 kubelet[4417]: I0508 00:02:49.106304 4417 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4181a93c6579b43bd9bc9ce5bc98a821-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-1f162da554\" (UID: \"4181a93c6579b43bd9bc9ce5bc98a821\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-1f162da554" May 8 00:02:49.106416 kubelet[4417]: I0508 00:02:49.106318 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4181a93c6579b43bd9bc9ce5bc98a821-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-1f162da554\" (UID: \"4181a93c6579b43bd9bc9ce5bc98a821\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-1f162da554" May 8 00:02:49.106416 kubelet[4417]: I0508 00:02:49.106332 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bde43824c71317f55cf7b80f3f29339a-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-1f162da554\" (UID: \"bde43824c71317f55cf7b80f3f29339a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-1f162da554" May 8 00:02:49.106416 kubelet[4417]: I0508 00:02:49.106346 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bde43824c71317f55cf7b80f3f29339a-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-1f162da554\" (UID: \"bde43824c71317f55cf7b80f3f29339a\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-1f162da554" May 8 00:02:49.106590 kubelet[4417]: I0508 00:02:49.106363 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bde43824c71317f55cf7b80f3f29339a-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-1f162da554\" (UID: \"bde43824c71317f55cf7b80f3f29339a\") " 
pod="kube-system/kube-controller-manager-ci-4230.1.1-n-1f162da554" May 8 00:02:49.803553 kubelet[4417]: I0508 00:02:49.803529 4417 apiserver.go:52] "Watching apiserver" May 8 00:02:49.805564 kubelet[4417]: I0508 00:02:49.805549 4417 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:02:49.821990 kubelet[4417]: I0508 00:02:49.821955 4417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.1-n-1f162da554" podStartSLOduration=1.821944081 podStartE2EDuration="1.821944081s" podCreationTimestamp="2025-05-08 00:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:02:49.821926269 +0000 UTC m=+1.076523230" watchObservedRunningTime="2025-05-08 00:02:49.821944081 +0000 UTC m=+1.076541002" May 8 00:02:49.822709 kubelet[4417]: W0508 00:02:49.822694 4417 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 00:02:49.822747 kubelet[4417]: E0508 00:02:49.822737 4417 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-n-1f162da554\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-n-1f162da554" May 8 00:02:49.831661 kubelet[4417]: I0508 00:02:49.831610 4417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-1f162da554" podStartSLOduration=1.831592969 podStartE2EDuration="1.831592969s" podCreationTimestamp="2025-05-08 00:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:02:49.827033855 +0000 UTC m=+1.081630776" watchObservedRunningTime="2025-05-08 00:02:49.831592969 +0000 UTC m=+1.086189930" May 8 00:02:49.837705 kubelet[4417]: I0508 
00:02:49.837667 4417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.1-n-1f162da554" podStartSLOduration=2.837653283 podStartE2EDuration="2.837653283s" podCreationTimestamp="2025-05-08 00:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:02:49.831726494 +0000 UTC m=+1.086323375" watchObservedRunningTime="2025-05-08 00:02:49.837653283 +0000 UTC m=+1.092250204" May 8 00:02:53.296835 sudo[2961]: pam_unix(sudo:session): session closed for user root May 8 00:02:53.360728 sshd[2960]: Connection closed by 139.178.68.195 port 55792 May 8 00:02:53.361190 sshd-session[2958]: pam_unix(sshd:session): session closed for user core May 8 00:02:53.364349 systemd[1]: sshd@6-145.40.69.49:22-139.178.68.195:55792.service: Deactivated successfully. May 8 00:02:53.366145 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:02:53.366360 systemd[1]: session-9.scope: Consumed 7.316s CPU time, 267.5M memory peak. May 8 00:02:53.367411 systemd-logind[2680]: Session 9 logged out. Waiting for processes to exit. May 8 00:02:53.367988 systemd-logind[2680]: Removed session 9. May 8 00:03:03.143022 update_engine[2693]: I20250508 00:03:03.142958 2693 update_attempter.cc:509] Updating boot flags... 
May 8 00:03:03.174819 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (4698) May 8 00:03:03.203817 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (4697) May 8 00:03:03.851765 kubelet[4417]: I0508 00:03:03.851731 4417 topology_manager.go:215] "Topology Admit Handler" podUID="8d908ac2-2b2c-45c9-9132-6af439209d45" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-dkhvd" May 8 00:03:03.856820 systemd[1]: Created slice kubepods-besteffort-pod8d908ac2_2b2c_45c9_9132_6af439209d45.slice - libcontainer container kubepods-besteffort-pod8d908ac2_2b2c_45c9_9132_6af439209d45.slice. May 8 00:03:03.918813 kubelet[4417]: I0508 00:03:03.918781 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zggrb\" (UniqueName: \"kubernetes.io/projected/8d908ac2-2b2c-45c9-9132-6af439209d45-kube-api-access-zggrb\") pod \"tigera-operator-797db67f8-dkhvd\" (UID: \"8d908ac2-2b2c-45c9-9132-6af439209d45\") " pod="tigera-operator/tigera-operator-797db67f8-dkhvd" May 8 00:03:03.918905 kubelet[4417]: I0508 00:03:03.918839 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8d908ac2-2b2c-45c9-9132-6af439209d45-var-lib-calico\") pod \"tigera-operator-797db67f8-dkhvd\" (UID: \"8d908ac2-2b2c-45c9-9132-6af439209d45\") " pod="tigera-operator/tigera-operator-797db67f8-dkhvd" May 8 00:03:03.941280 kubelet[4417]: I0508 00:03:03.941251 4417 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:03:03.941582 containerd[2702]: time="2025-05-08T00:03:03.941546743Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 8 00:03:03.941814 kubelet[4417]: I0508 00:03:03.941696 4417 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:03:04.173155 containerd[2702]: time="2025-05-08T00:03:04.173035967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-dkhvd,Uid:8d908ac2-2b2c-45c9-9132-6af439209d45,Namespace:tigera-operator,Attempt:0,}" May 8 00:03:04.188074 containerd[2702]: time="2025-05-08T00:03:04.188012859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:03:04.188074 containerd[2702]: time="2025-05-08T00:03:04.188056943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:03:04.188196 containerd[2702]: time="2025-05-08T00:03:04.188068424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:04.188196 containerd[2702]: time="2025-05-08T00:03:04.188131230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:04.212800 kubelet[4417]: I0508 00:03:04.212767 4417 topology_manager.go:215] "Topology Admit Handler" podUID="fd648ce6-7708-46f9-953d-beeed2b395ad" podNamespace="kube-system" podName="kube-proxy-w95jz" May 8 00:03:04.217989 systemd[1]: Started cri-containerd-68e218d934e93c3cdab2f8d28caf216dc440aedcc53e5dd3c52e5c88153c4aa1.scope - libcontainer container 68e218d934e93c3cdab2f8d28caf216dc440aedcc53e5dd3c52e5c88153c4aa1. May 8 00:03:04.222103 systemd[1]: Created slice kubepods-besteffort-podfd648ce6_7708_46f9_953d_beeed2b395ad.slice - libcontainer container kubepods-besteffort-podfd648ce6_7708_46f9_953d_beeed2b395ad.slice. 
May 8 00:03:04.241096 containerd[2702]: time="2025-05-08T00:03:04.241056658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-dkhvd,Uid:8d908ac2-2b2c-45c9-9132-6af439209d45,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"68e218d934e93c3cdab2f8d28caf216dc440aedcc53e5dd3c52e5c88153c4aa1\"" May 8 00:03:04.242359 containerd[2702]: time="2025-05-08T00:03:04.242339418Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 8 00:03:04.321498 kubelet[4417]: I0508 00:03:04.321460 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd648ce6-7708-46f9-953d-beeed2b395ad-kube-proxy\") pod \"kube-proxy-w95jz\" (UID: \"fd648ce6-7708-46f9-953d-beeed2b395ad\") " pod="kube-system/kube-proxy-w95jz" May 8 00:03:04.321498 kubelet[4417]: I0508 00:03:04.321494 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd648ce6-7708-46f9-953d-beeed2b395ad-xtables-lock\") pod \"kube-proxy-w95jz\" (UID: \"fd648ce6-7708-46f9-953d-beeed2b395ad\") " pod="kube-system/kube-proxy-w95jz" May 8 00:03:04.321780 kubelet[4417]: I0508 00:03:04.321513 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd648ce6-7708-46f9-953d-beeed2b395ad-lib-modules\") pod \"kube-proxy-w95jz\" (UID: \"fd648ce6-7708-46f9-953d-beeed2b395ad\") " pod="kube-system/kube-proxy-w95jz" May 8 00:03:04.321780 kubelet[4417]: I0508 00:03:04.321530 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkhnt\" (UniqueName: \"kubernetes.io/projected/fd648ce6-7708-46f9-953d-beeed2b395ad-kube-api-access-tkhnt\") pod \"kube-proxy-w95jz\" (UID: \"fd648ce6-7708-46f9-953d-beeed2b395ad\") " 
pod="kube-system/kube-proxy-w95jz" May 8 00:03:04.523876 containerd[2702]: time="2025-05-08T00:03:04.523844868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w95jz,Uid:fd648ce6-7708-46f9-953d-beeed2b395ad,Namespace:kube-system,Attempt:0,}" May 8 00:03:04.536297 containerd[2702]: time="2025-05-08T00:03:04.535831278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:03:04.536376 containerd[2702]: time="2025-05-08T00:03:04.536292241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:03:04.536376 containerd[2702]: time="2025-05-08T00:03:04.536306522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:04.536427 containerd[2702]: time="2025-05-08T00:03:04.536383810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:04.559990 systemd[1]: Started cri-containerd-edfc8c68895e05c138f3748df31a156db37d9bb541f656e0c4bdec2a8d849353.scope - libcontainer container edfc8c68895e05c138f3748df31a156db37d9bb541f656e0c4bdec2a8d849353. 
May 8 00:03:04.575795 containerd[2702]: time="2025-05-08T00:03:04.575760881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w95jz,Uid:fd648ce6-7708-46f9-953d-beeed2b395ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"edfc8c68895e05c138f3748df31a156db37d9bb541f656e0c4bdec2a8d849353\"" May 8 00:03:04.577703 containerd[2702]: time="2025-05-08T00:03:04.577671621Z" level=info msg="CreateContainer within sandbox \"edfc8c68895e05c138f3748df31a156db37d9bb541f656e0c4bdec2a8d849353\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:03:04.584254 containerd[2702]: time="2025-05-08T00:03:04.584222838Z" level=info msg="CreateContainer within sandbox \"edfc8c68895e05c138f3748df31a156db37d9bb541f656e0c4bdec2a8d849353\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"673ad176ba406bbe929346063c5a5479f745f24524d8ccc7cd5c2199739c2318\"" May 8 00:03:04.584655 containerd[2702]: time="2025-05-08T00:03:04.584630717Z" level=info msg="StartContainer for \"673ad176ba406bbe929346063c5a5479f745f24524d8ccc7cd5c2199739c2318\"" May 8 00:03:04.609983 systemd[1]: Started cri-containerd-673ad176ba406bbe929346063c5a5479f745f24524d8ccc7cd5c2199739c2318.scope - libcontainer container 673ad176ba406bbe929346063c5a5479f745f24524d8ccc7cd5c2199739c2318. 
May 8 00:03:04.630075 containerd[2702]: time="2025-05-08T00:03:04.630045957Z" level=info msg="StartContainer for \"673ad176ba406bbe929346063c5a5479f745f24524d8ccc7cd5c2199739c2318\" returns successfully" May 8 00:03:04.844868 kubelet[4417]: I0508 00:03:04.844750 4417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w95jz" podStartSLOduration=0.84473763 podStartE2EDuration="844.73763ms" podCreationTimestamp="2025-05-08 00:03:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:03:04.844607137 +0000 UTC m=+16.099204058" watchObservedRunningTime="2025-05-08 00:03:04.84473763 +0000 UTC m=+16.099334551" May 8 00:03:13.358298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134975099.mount: Deactivated successfully. May 8 00:03:13.537728 containerd[2702]: time="2025-05-08T00:03:13.537692285Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:03:13.538034 containerd[2702]: time="2025-05-08T00:03:13.537660083Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 8 00:03:13.538455 containerd[2702]: time="2025-05-08T00:03:13.538436408Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:03:13.540459 containerd[2702]: time="2025-05-08T00:03:13.540442244Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:03:13.541264 containerd[2702]: time="2025-05-08T00:03:13.541240650Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id 
\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 9.298872509s" May 8 00:03:13.541290 containerd[2702]: time="2025-05-08T00:03:13.541270211Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 8 00:03:13.542962 containerd[2702]: time="2025-05-08T00:03:13.542942788Z" level=info msg="CreateContainer within sandbox \"68e218d934e93c3cdab2f8d28caf216dc440aedcc53e5dd3c52e5c88153c4aa1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 8 00:03:13.548095 containerd[2702]: time="2025-05-08T00:03:13.548072363Z" level=info msg="CreateContainer within sandbox \"68e218d934e93c3cdab2f8d28caf216dc440aedcc53e5dd3c52e5c88153c4aa1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"132408ecf206ac3e69ff99794a0aebc8656ad485faf7217d2509f5b06f24dcd3\"" May 8 00:03:13.548349 containerd[2702]: time="2025-05-08T00:03:13.548327138Z" level=info msg="StartContainer for \"132408ecf206ac3e69ff99794a0aebc8656ad485faf7217d2509f5b06f24dcd3\"" May 8 00:03:13.574967 systemd[1]: Started cri-containerd-132408ecf206ac3e69ff99794a0aebc8656ad485faf7217d2509f5b06f24dcd3.scope - libcontainer container 132408ecf206ac3e69ff99794a0aebc8656ad485faf7217d2509f5b06f24dcd3. 
May 8 00:03:13.592253 containerd[2702]: time="2025-05-08T00:03:13.592222067Z" level=info msg="StartContainer for \"132408ecf206ac3e69ff99794a0aebc8656ad485faf7217d2509f5b06f24dcd3\" returns successfully" May 8 00:03:13.856676 kubelet[4417]: I0508 00:03:13.856629 4417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-dkhvd" podStartSLOduration=1.556754761 podStartE2EDuration="10.856615538s" podCreationTimestamp="2025-05-08 00:03:03 +0000 UTC" firstStartedPulling="2025-05-08 00:03:04.242044511 +0000 UTC m=+15.496641432" lastFinishedPulling="2025-05-08 00:03:13.541905328 +0000 UTC m=+24.796502209" observedRunningTime="2025-05-08 00:03:13.856488051 +0000 UTC m=+25.111084972" watchObservedRunningTime="2025-05-08 00:03:13.856615538 +0000 UTC m=+25.111212459" May 8 00:03:17.183332 kubelet[4417]: I0508 00:03:17.183290 4417 topology_manager.go:215] "Topology Admit Handler" podUID="47716153-f8e9-4f32-ba9e-6639bdcfa571" podNamespace="calico-system" podName="calico-typha-5dc8c697cc-vprqp" May 8 00:03:17.189272 systemd[1]: Created slice kubepods-besteffort-pod47716153_f8e9_4f32_ba9e_6639bdcfa571.slice - libcontainer container kubepods-besteffort-pod47716153_f8e9_4f32_ba9e_6639bdcfa571.slice. 
May 8 00:03:17.210581 kubelet[4417]: I0508 00:03:17.210530 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47716153-f8e9-4f32-ba9e-6639bdcfa571-tigera-ca-bundle\") pod \"calico-typha-5dc8c697cc-vprqp\" (UID: \"47716153-f8e9-4f32-ba9e-6639bdcfa571\") " pod="calico-system/calico-typha-5dc8c697cc-vprqp" May 8 00:03:17.210581 kubelet[4417]: I0508 00:03:17.210567 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbcz9\" (UniqueName: \"kubernetes.io/projected/47716153-f8e9-4f32-ba9e-6639bdcfa571-kube-api-access-sbcz9\") pod \"calico-typha-5dc8c697cc-vprqp\" (UID: \"47716153-f8e9-4f32-ba9e-6639bdcfa571\") " pod="calico-system/calico-typha-5dc8c697cc-vprqp" May 8 00:03:17.210581 kubelet[4417]: I0508 00:03:17.210585 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/47716153-f8e9-4f32-ba9e-6639bdcfa571-typha-certs\") pod \"calico-typha-5dc8c697cc-vprqp\" (UID: \"47716153-f8e9-4f32-ba9e-6639bdcfa571\") " pod="calico-system/calico-typha-5dc8c697cc-vprqp" May 8 00:03:17.218894 kubelet[4417]: I0508 00:03:17.218861 4417 topology_manager.go:215] "Topology Admit Handler" podUID="86fff13f-c6f9-41fd-b975-3428bbe9b7de" podNamespace="calico-system" podName="calico-node-zhh4v" May 8 00:03:17.223403 systemd[1]: Created slice kubepods-besteffort-pod86fff13f_c6f9_41fd_b975_3428bbe9b7de.slice - libcontainer container kubepods-besteffort-pod86fff13f_c6f9_41fd_b975_3428bbe9b7de.slice. 
May 8 00:03:17.311124 kubelet[4417]: I0508 00:03:17.311087 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/86fff13f-c6f9-41fd-b975-3428bbe9b7de-policysync\") pod \"calico-node-zhh4v\" (UID: \"86fff13f-c6f9-41fd-b975-3428bbe9b7de\") " pod="calico-system/calico-node-zhh4v" May 8 00:03:17.311323 kubelet[4417]: I0508 00:03:17.311296 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/86fff13f-c6f9-41fd-b975-3428bbe9b7de-node-certs\") pod \"calico-node-zhh4v\" (UID: \"86fff13f-c6f9-41fd-b975-3428bbe9b7de\") " pod="calico-system/calico-node-zhh4v" May 8 00:03:17.311352 kubelet[4417]: I0508 00:03:17.311337 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86fff13f-c6f9-41fd-b975-3428bbe9b7de-xtables-lock\") pod \"calico-node-zhh4v\" (UID: \"86fff13f-c6f9-41fd-b975-3428bbe9b7de\") " pod="calico-system/calico-node-zhh4v" May 8 00:03:17.311385 kubelet[4417]: I0508 00:03:17.311354 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/86fff13f-c6f9-41fd-b975-3428bbe9b7de-var-lib-calico\") pod \"calico-node-zhh4v\" (UID: \"86fff13f-c6f9-41fd-b975-3428bbe9b7de\") " pod="calico-system/calico-node-zhh4v" May 8 00:03:17.311385 kubelet[4417]: I0508 00:03:17.311372 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/86fff13f-c6f9-41fd-b975-3428bbe9b7de-cni-net-dir\") pod \"calico-node-zhh4v\" (UID: \"86fff13f-c6f9-41fd-b975-3428bbe9b7de\") " pod="calico-system/calico-node-zhh4v" May 8 00:03:17.311427 kubelet[4417]: I0508 00:03:17.311388 4417 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/86fff13f-c6f9-41fd-b975-3428bbe9b7de-cni-log-dir\") pod \"calico-node-zhh4v\" (UID: \"86fff13f-c6f9-41fd-b975-3428bbe9b7de\") " pod="calico-system/calico-node-zhh4v" May 8 00:03:17.311427 kubelet[4417]: I0508 00:03:17.311405 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86fff13f-c6f9-41fd-b975-3428bbe9b7de-lib-modules\") pod \"calico-node-zhh4v\" (UID: \"86fff13f-c6f9-41fd-b975-3428bbe9b7de\") " pod="calico-system/calico-node-zhh4v" May 8 00:03:17.311427 kubelet[4417]: I0508 00:03:17.311420 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/86fff13f-c6f9-41fd-b975-3428bbe9b7de-var-run-calico\") pod \"calico-node-zhh4v\" (UID: \"86fff13f-c6f9-41fd-b975-3428bbe9b7de\") " pod="calico-system/calico-node-zhh4v" May 8 00:03:17.311487 kubelet[4417]: I0508 00:03:17.311445 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86fff13f-c6f9-41fd-b975-3428bbe9b7de-tigera-ca-bundle\") pod \"calico-node-zhh4v\" (UID: \"86fff13f-c6f9-41fd-b975-3428bbe9b7de\") " pod="calico-system/calico-node-zhh4v" May 8 00:03:17.311487 kubelet[4417]: I0508 00:03:17.311464 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/86fff13f-c6f9-41fd-b975-3428bbe9b7de-cni-bin-dir\") pod \"calico-node-zhh4v\" (UID: \"86fff13f-c6f9-41fd-b975-3428bbe9b7de\") " pod="calico-system/calico-node-zhh4v" May 8 00:03:17.311529 kubelet[4417]: I0508 00:03:17.311479 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-54b9c\" (UniqueName: \"kubernetes.io/projected/86fff13f-c6f9-41fd-b975-3428bbe9b7de-kube-api-access-54b9c\") pod \"calico-node-zhh4v\" (UID: \"86fff13f-c6f9-41fd-b975-3428bbe9b7de\") " pod="calico-system/calico-node-zhh4v" May 8 00:03:17.311529 kubelet[4417]: I0508 00:03:17.311507 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/86fff13f-c6f9-41fd-b975-3428bbe9b7de-flexvol-driver-host\") pod \"calico-node-zhh4v\" (UID: \"86fff13f-c6f9-41fd-b975-3428bbe9b7de\") " pod="calico-system/calico-node-zhh4v" May 8 00:03:17.325758 kubelet[4417]: I0508 00:03:17.325729 4417 topology_manager.go:215] "Topology Admit Handler" podUID="ecdba017-f46d-4d93-b251-179fdcd1b734" podNamespace="calico-system" podName="csi-node-driver-78qnk" May 8 00:03:17.326003 kubelet[4417]: E0508 00:03:17.325984 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78qnk" podUID="ecdba017-f46d-4d93-b251-179fdcd1b734" May 8 00:03:17.411933 kubelet[4417]: I0508 00:03:17.411889 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ecdba017-f46d-4d93-b251-179fdcd1b734-socket-dir\") pod \"csi-node-driver-78qnk\" (UID: \"ecdba017-f46d-4d93-b251-179fdcd1b734\") " pod="calico-system/csi-node-driver-78qnk" May 8 00:03:17.412033 kubelet[4417]: I0508 00:03:17.411943 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ecdba017-f46d-4d93-b251-179fdcd1b734-registration-dir\") pod \"csi-node-driver-78qnk\" (UID: \"ecdba017-f46d-4d93-b251-179fdcd1b734\") " 
pod="calico-system/csi-node-driver-78qnk" May 8 00:03:17.412033 kubelet[4417]: I0508 00:03:17.412022 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ecdba017-f46d-4d93-b251-179fdcd1b734-varrun\") pod \"csi-node-driver-78qnk\" (UID: \"ecdba017-f46d-4d93-b251-179fdcd1b734\") " pod="calico-system/csi-node-driver-78qnk" May 8 00:03:17.412147 kubelet[4417]: I0508 00:03:17.412066 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ecdba017-f46d-4d93-b251-179fdcd1b734-kubelet-dir\") pod \"csi-node-driver-78qnk\" (UID: \"ecdba017-f46d-4d93-b251-179fdcd1b734\") " pod="calico-system/csi-node-driver-78qnk" May 8 00:03:17.412533 kubelet[4417]: E0508 00:03:17.412507 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.412558 kubelet[4417]: W0508 00:03:17.412531 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.412582 kubelet[4417]: E0508 00:03:17.412562 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:03:17.412798 kubelet[4417]: E0508 00:03:17.412787 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.412798 kubelet[4417]: W0508 00:03:17.412795 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.412866 kubelet[4417]: E0508 00:03:17.412812 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:03:17.412866 kubelet[4417]: I0508 00:03:17.412829 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjfjx\" (UniqueName: \"kubernetes.io/projected/ecdba017-f46d-4d93-b251-179fdcd1b734-kube-api-access-qjfjx\") pod \"csi-node-driver-78qnk\" (UID: \"ecdba017-f46d-4d93-b251-179fdcd1b734\") " pod="calico-system/csi-node-driver-78qnk" May 8 00:03:17.413004 kubelet[4417]: E0508 00:03:17.412991 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.413004 kubelet[4417]: W0508 00:03:17.413002 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.413052 kubelet[4417]: E0508 00:03:17.413013 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:03:17.503160 containerd[2702]: time="2025-05-08T00:03:17.503106600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5dc8c697cc-vprqp,Uid:47716153-f8e9-4f32-ba9e-6639bdcfa571,Namespace:calico-system,Attempt:0,}" May 8 00:03:17.514577 kubelet[4417]: E0508 00:03:17.514549 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.514577 kubelet[4417]: W0508 00:03:17.514563 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.514577 kubelet[4417]: E0508 00:03:17.514577 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:03:17.514829 kubelet[4417]: E0508 00:03:17.514820 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.514829 kubelet[4417]: W0508 00:03:17.514829 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.515439 kubelet[4417]: E0508 00:03:17.514843 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:03:17.515439 kubelet[4417]: E0508 00:03:17.515052 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.515439 kubelet[4417]: W0508 00:03:17.515060 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.515439 kubelet[4417]: E0508 00:03:17.515070 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:03:17.515439 kubelet[4417]: E0508 00:03:17.515286 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.515439 kubelet[4417]: W0508 00:03:17.515295 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.515439 kubelet[4417]: E0508 00:03:17.515305 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:03:17.515606 kubelet[4417]: E0508 00:03:17.515522 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.515606 kubelet[4417]: W0508 00:03:17.515530 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.515606 kubelet[4417]: E0508 00:03:17.515539 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:03:17.515753 kubelet[4417]: E0508 00:03:17.515742 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.515753 kubelet[4417]: W0508 00:03:17.515751 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.515821 kubelet[4417]: E0508 00:03:17.515763 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:03:17.515964 kubelet[4417]: E0508 00:03:17.515955 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.515964 kubelet[4417]: W0508 00:03:17.515963 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.516023 kubelet[4417]: E0508 00:03:17.516008 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:03:17.516156 kubelet[4417]: E0508 00:03:17.516148 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.516156 kubelet[4417]: W0508 00:03:17.516155 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.516203 kubelet[4417]: E0508 00:03:17.516174 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:03:17.516292 kubelet[4417]: E0508 00:03:17.516284 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.516292 kubelet[4417]: W0508 00:03:17.516292 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.516343 kubelet[4417]: E0508 00:03:17.516309 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:03:17.516469 kubelet[4417]: E0508 00:03:17.516461 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.516469 kubelet[4417]: W0508 00:03:17.516468 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.516525 kubelet[4417]: E0508 00:03:17.516487 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:03:17.516630 kubelet[4417]: E0508 00:03:17.516623 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.516666 kubelet[4417]: W0508 00:03:17.516630 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.516666 kubelet[4417]: E0508 00:03:17.516649 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:03:17.516768 kubelet[4417]: E0508 00:03:17.516760 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.516768 kubelet[4417]: W0508 00:03:17.516767 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.516837 kubelet[4417]: E0508 00:03:17.516778 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:03:17.516864 containerd[2702]: time="2025-05-08T00:03:17.516770603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:03:17.516864 containerd[2702]: time="2025-05-08T00:03:17.516823805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:03:17.516864 containerd[2702]: time="2025-05-08T00:03:17.516835606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:17.516930 containerd[2702]: time="2025-05-08T00:03:17.516904929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:17.516968 kubelet[4417]: E0508 00:03:17.516955 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.516993 kubelet[4417]: W0508 00:03:17.516968 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.516993 kubelet[4417]: E0508 00:03:17.516983 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:03:17.517124 kubelet[4417]: E0508 00:03:17.517117 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.517124 kubelet[4417]: W0508 00:03:17.517124 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.517177 kubelet[4417]: E0508 00:03:17.517133 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:03:17.517368 kubelet[4417]: E0508 00:03:17.517359 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.517368 kubelet[4417]: W0508 00:03:17.517368 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.517420 kubelet[4417]: E0508 00:03:17.517379 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:03:17.517871 kubelet[4417]: E0508 00:03:17.517754 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.517871 kubelet[4417]: W0508 00:03:17.517769 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.517871 kubelet[4417]: E0508 00:03:17.517786 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:03:17.517997 kubelet[4417]: E0508 00:03:17.517982 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.518021 kubelet[4417]: W0508 00:03:17.517997 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.518050 kubelet[4417]: E0508 00:03:17.518025 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:03:17.518167 kubelet[4417]: E0508 00:03:17.518158 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.518190 kubelet[4417]: W0508 00:03:17.518167 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.518210 kubelet[4417]: E0508 00:03:17.518196 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:03:17.518303 kubelet[4417]: E0508 00:03:17.518296 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.518326 kubelet[4417]: W0508 00:03:17.518303 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.518326 kubelet[4417]: E0508 00:03:17.518318 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:03:17.518471 kubelet[4417]: E0508 00:03:17.518463 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.518490 kubelet[4417]: W0508 00:03:17.518471 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.518490 kubelet[4417]: E0508 00:03:17.518485 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:03:17.518624 kubelet[4417]: E0508 00:03:17.518616 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.518649 kubelet[4417]: W0508 00:03:17.518623 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.518649 kubelet[4417]: E0508 00:03:17.518635 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:03:17.518786 kubelet[4417]: E0508 00:03:17.518777 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.518814 kubelet[4417]: W0508 00:03:17.518788 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.518814 kubelet[4417]: E0508 00:03:17.518799 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:03:17.518996 kubelet[4417]: E0508 00:03:17.518984 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.519020 kubelet[4417]: W0508 00:03:17.518996 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.519020 kubelet[4417]: E0508 00:03:17.519012 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:03:17.519186 kubelet[4417]: E0508 00:03:17.519178 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.519206 kubelet[4417]: W0508 00:03:17.519195 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.519225 kubelet[4417]: E0508 00:03:17.519206 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:03:17.519428 kubelet[4417]: E0508 00:03:17.519420 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.519451 kubelet[4417]: W0508 00:03:17.519428 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.519451 kubelet[4417]: E0508 00:03:17.519436 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:03:17.525186 containerd[2702]: time="2025-05-08T00:03:17.525161157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zhh4v,Uid:86fff13f-c6f9-41fd-b975-3428bbe9b7de,Namespace:calico-system,Attempt:0,}" May 8 00:03:17.525808 kubelet[4417]: E0508 00:03:17.525792 4417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:03:17.525835 kubelet[4417]: W0508 00:03:17.525811 4417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:03:17.525835 kubelet[4417]: E0508 00:03:17.525823 4417 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:03:17.537528 containerd[2702]: time="2025-05-08T00:03:17.537462856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:03:17.537528 containerd[2702]: time="2025-05-08T00:03:17.537507698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:03:17.537528 containerd[2702]: time="2025-05-08T00:03:17.537518698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:17.537642 containerd[2702]: time="2025-05-08T00:03:17.537588222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:17.548998 systemd[1]: Started cri-containerd-1f3b68b98334277695cff2960067b7331feee7c00070be7c8c992b62e82b7d60.scope - libcontainer container 1f3b68b98334277695cff2960067b7331feee7c00070be7c8c992b62e82b7d60. May 8 00:03:17.551392 systemd[1]: Started cri-containerd-ad9610d78830eb12f30bb58dd9ac2d2bb3b421109715899e41830b6d990305d4.scope - libcontainer container ad9610d78830eb12f30bb58dd9ac2d2bb3b421109715899e41830b6d990305d4. May 8 00:03:17.567225 containerd[2702]: time="2025-05-08T00:03:17.567193654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zhh4v,Uid:86fff13f-c6f9-41fd-b975-3428bbe9b7de,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad9610d78830eb12f30bb58dd9ac2d2bb3b421109715899e41830b6d990305d4\"" May 8 00:03:17.568335 containerd[2702]: time="2025-05-08T00:03:17.568316067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 00:03:17.572357 containerd[2702]: time="2025-05-08T00:03:17.572333455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5dc8c697cc-vprqp,Uid:47716153-f8e9-4f32-ba9e-6639bdcfa571,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f3b68b98334277695cff2960067b7331feee7c00070be7c8c992b62e82b7d60\"" May 8 00:03:17.840740 containerd[2702]: time="2025-05-08T00:03:17.840660873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 8 00:03:17.840740 containerd[2702]: time="2025-05-08T00:03:17.840676394Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:03:17.841430 containerd[2702]: time="2025-05-08T00:03:17.841404668Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:03:17.843257 containerd[2702]: time="2025-05-08T00:03:17.843232754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:03:17.843990 containerd[2702]: time="2025-05-08T00:03:17.843960268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 275.61488ms" May 8 00:03:17.844033 containerd[2702]: time="2025-05-08T00:03:17.843989750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 8 00:03:17.844736 containerd[2702]: time="2025-05-08T00:03:17.844714304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 8 00:03:17.845615 containerd[2702]: time="2025-05-08T00:03:17.845593705Z" level=info msg="CreateContainer within sandbox \"ad9610d78830eb12f30bb58dd9ac2d2bb3b421109715899e41830b6d990305d4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:03:17.851734 containerd[2702]: time="2025-05-08T00:03:17.851703833Z" level=info msg="CreateContainer within sandbox 
\"ad9610d78830eb12f30bb58dd9ac2d2bb3b421109715899e41830b6d990305d4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a8ea64c19c687a83e4b111c68c9e71d2e01d345932c18f29ab818639153452d8\"" May 8 00:03:17.852086 containerd[2702]: time="2025-05-08T00:03:17.852058929Z" level=info msg="StartContainer for \"a8ea64c19c687a83e4b111c68c9e71d2e01d345932c18f29ab818639153452d8\"" May 8 00:03:17.879929 systemd[1]: Started cri-containerd-a8ea64c19c687a83e4b111c68c9e71d2e01d345932c18f29ab818639153452d8.scope - libcontainer container a8ea64c19c687a83e4b111c68c9e71d2e01d345932c18f29ab818639153452d8. May 8 00:03:17.899915 containerd[2702]: time="2025-05-08T00:03:17.899883618Z" level=info msg="StartContainer for \"a8ea64c19c687a83e4b111c68c9e71d2e01d345932c18f29ab818639153452d8\" returns successfully" May 8 00:03:17.912860 systemd[1]: cri-containerd-a8ea64c19c687a83e4b111c68c9e71d2e01d345932c18f29ab818639153452d8.scope: Deactivated successfully. May 8 00:03:17.990188 containerd[2702]: time="2025-05-08T00:03:17.990139142Z" level=info msg="shim disconnected" id=a8ea64c19c687a83e4b111c68c9e71d2e01d345932c18f29ab818639153452d8 namespace=k8s.io May 8 00:03:17.990188 containerd[2702]: time="2025-05-08T00:03:17.990185545Z" level=warning msg="cleaning up after shim disconnected" id=a8ea64c19c687a83e4b111c68c9e71d2e01d345932c18f29ab818639153452d8 namespace=k8s.io May 8 00:03:17.990354 containerd[2702]: time="2025-05-08T00:03:17.990194625Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:03:18.359199 containerd[2702]: time="2025-05-08T00:03:18.359166811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:03:18.359331 containerd[2702]: time="2025-05-08T00:03:18.359194892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 8 00:03:18.359923 containerd[2702]: 
time="2025-05-08T00:03:18.359902524Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:03:18.361613 containerd[2702]: time="2025-05-08T00:03:18.361586799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:03:18.362278 containerd[2702]: time="2025-05-08T00:03:18.362251069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 517.505283ms" May 8 00:03:18.362308 containerd[2702]: time="2025-05-08T00:03:18.362283470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 8 00:03:18.367687 containerd[2702]: time="2025-05-08T00:03:18.367653151Z" level=info msg="CreateContainer within sandbox \"1f3b68b98334277695cff2960067b7331feee7c00070be7c8c992b62e82b7d60\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 00:03:18.372437 containerd[2702]: time="2025-05-08T00:03:18.372408364Z" level=info msg="CreateContainer within sandbox \"1f3b68b98334277695cff2960067b7331feee7c00070be7c8c992b62e82b7d60\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fdc883ea28b99bbf906bb77d0ac1f0f238c187e3dc9c3c40c89300324726924e\"" May 8 00:03:18.372779 containerd[2702]: time="2025-05-08T00:03:18.372752019Z" level=info msg="StartContainer for \"fdc883ea28b99bbf906bb77d0ac1f0f238c187e3dc9c3c40c89300324726924e\"" May 8 00:03:18.403974 systemd[1]: 
Started cri-containerd-fdc883ea28b99bbf906bb77d0ac1f0f238c187e3dc9c3c40c89300324726924e.scope - libcontainer container fdc883ea28b99bbf906bb77d0ac1f0f238c187e3dc9c3c40c89300324726924e. May 8 00:03:18.428446 containerd[2702]: time="2025-05-08T00:03:18.428415112Z" level=info msg="StartContainer for \"fdc883ea28b99bbf906bb77d0ac1f0f238c187e3dc9c3c40c89300324726924e\" returns successfully" May 8 00:03:18.814812 kubelet[4417]: E0508 00:03:18.814775 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78qnk" podUID="ecdba017-f46d-4d93-b251-179fdcd1b734" May 8 00:03:18.859535 containerd[2702]: time="2025-05-08T00:03:18.859508455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:03:18.876401 kubelet[4417]: I0508 00:03:18.876361 4417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5dc8c697cc-vprqp" podStartSLOduration=1.086684689 podStartE2EDuration="1.876346089s" podCreationTimestamp="2025-05-08 00:03:17 +0000 UTC" firstStartedPulling="2025-05-08 00:03:17.573102412 +0000 UTC m=+28.827699333" lastFinishedPulling="2025-05-08 00:03:18.362763812 +0000 UTC m=+29.617360733" observedRunningTime="2025-05-08 00:03:18.876224124 +0000 UTC m=+30.130821045" watchObservedRunningTime="2025-05-08 00:03:18.876346089 +0000 UTC m=+30.130943010" May 8 00:03:19.861130 kubelet[4417]: I0508 00:03:19.861105 4417 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:03:20.159609 containerd[2702]: time="2025-05-08T00:03:20.159529275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:03:20.159862 containerd[2702]: time="2025-05-08T00:03:20.159595037Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 8 00:03:20.160329 containerd[2702]: time="2025-05-08T00:03:20.160276185Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:03:20.162231 containerd[2702]: time="2025-05-08T00:03:20.162172422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:03:20.162966 containerd[2702]: time="2025-05-08T00:03:20.162909372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 1.303364716s" May 8 00:03:20.162966 containerd[2702]: time="2025-05-08T00:03:20.162940893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 8 00:03:20.164611 containerd[2702]: time="2025-05-08T00:03:20.164584560Z" level=info msg="CreateContainer within sandbox \"ad9610d78830eb12f30bb58dd9ac2d2bb3b421109715899e41830b6d990305d4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:03:20.170599 containerd[2702]: time="2025-05-08T00:03:20.170530682Z" level=info msg="CreateContainer within sandbox \"ad9610d78830eb12f30bb58dd9ac2d2bb3b421109715899e41830b6d990305d4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"60b27164670b113ab46ae8f46088805b8b0dc9600eb750a3b2c7736fa93fad52\"" May 8 00:03:20.170905 containerd[2702]: time="2025-05-08T00:03:20.170880457Z" level=info msg="StartContainer for 
\"60b27164670b113ab46ae8f46088805b8b0dc9600eb750a3b2c7736fa93fad52\""
May 8 00:03:20.200916 systemd[1]: Started cri-containerd-60b27164670b113ab46ae8f46088805b8b0dc9600eb750a3b2c7736fa93fad52.scope - libcontainer container 60b27164670b113ab46ae8f46088805b8b0dc9600eb750a3b2c7736fa93fad52.
May 8 00:03:20.221392 containerd[2702]: time="2025-05-08T00:03:20.221359111Z" level=info msg="StartContainer for \"60b27164670b113ab46ae8f46088805b8b0dc9600eb750a3b2c7736fa93fad52\" returns successfully"
May 8 00:03:20.586956 containerd[2702]: time="2025-05-08T00:03:20.586913909Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:03:20.588471 systemd[1]: cri-containerd-60b27164670b113ab46ae8f46088805b8b0dc9600eb750a3b2c7736fa93fad52.scope: Deactivated successfully.
May 8 00:03:20.588771 systemd[1]: cri-containerd-60b27164670b113ab46ae8f46088805b8b0dc9600eb750a3b2c7736fa93fad52.scope: Consumed 897ms CPU time, 179.5M memory peak, 150.3M written to disk.
May 8 00:03:20.602758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60b27164670b113ab46ae8f46088805b8b0dc9600eb750a3b2c7736fa93fad52-rootfs.mount: Deactivated successfully.
May 8 00:03:20.616299 kubelet[4417]: I0508 00:03:20.616274 4417 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 00:03:20.630102 kubelet[4417]: I0508 00:03:20.630071 4417 topology_manager.go:215] "Topology Admit Handler" podUID="babaf8df-3800-4281-aaf6-109330f3cc79" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hf55x" May 8 00:03:20.630261 kubelet[4417]: I0508 00:03:20.630245 4417 topology_manager.go:215] "Topology Admit Handler" podUID="479ec137-6439-484c-93c9-297d6865167f" podNamespace="calico-system" podName="calico-kube-controllers-76f6cb6bb-rlfnw" May 8 00:03:20.630556 kubelet[4417]: I0508 00:03:20.630531 4417 topology_manager.go:215] "Topology Admit Handler" podUID="026d55fd-56c2-4710-8822-2dacd50ff239" podNamespace="kube-system" podName="coredns-7db6d8ff4d-l2qkb" May 8 00:03:20.630821 kubelet[4417]: I0508 00:03:20.630792 4417 topology_manager.go:215] "Topology Admit Handler" podUID="70fa2288-52a8-48ed-9660-5941586aeb55" podNamespace="calico-apiserver" podName="calico-apiserver-589b849bb-s6cgn" May 8 00:03:20.631639 kubelet[4417]: I0508 00:03:20.631618 4417 topology_manager.go:215] "Topology Admit Handler" podUID="805c0cbf-349b-444e-8d83-d615f09bd8e1" podNamespace="calico-apiserver" podName="calico-apiserver-589b849bb-zzzwl" May 8 00:03:20.634683 systemd[1]: Created slice kubepods-burstable-podbabaf8df_3800_4281_aaf6_109330f3cc79.slice - libcontainer container kubepods-burstable-podbabaf8df_3800_4281_aaf6_109330f3cc79.slice. May 8 00:03:20.639045 systemd[1]: Created slice kubepods-besteffort-pod479ec137_6439_484c_93c9_297d6865167f.slice - libcontainer container kubepods-besteffort-pod479ec137_6439_484c_93c9_297d6865167f.slice. May 8 00:03:20.642643 systemd[1]: Created slice kubepods-burstable-pod026d55fd_56c2_4710_8822_2dacd50ff239.slice - libcontainer container kubepods-burstable-pod026d55fd_56c2_4710_8822_2dacd50ff239.slice. 
May 8 00:03:20.646858 systemd[1]: Created slice kubepods-besteffort-pod70fa2288_52a8_48ed_9660_5941586aeb55.slice - libcontainer container kubepods-besteffort-pod70fa2288_52a8_48ed_9660_5941586aeb55.slice. May 8 00:03:20.650144 systemd[1]: Created slice kubepods-besteffort-pod805c0cbf_349b_444e_8d83_d615f09bd8e1.slice - libcontainer container kubepods-besteffort-pod805c0cbf_349b_444e_8d83_d615f09bd8e1.slice. May 8 00:03:20.733122 kubelet[4417]: I0508 00:03:20.733084 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx6kd\" (UniqueName: \"kubernetes.io/projected/026d55fd-56c2-4710-8822-2dacd50ff239-kube-api-access-dx6kd\") pod \"coredns-7db6d8ff4d-l2qkb\" (UID: \"026d55fd-56c2-4710-8822-2dacd50ff239\") " pod="kube-system/coredns-7db6d8ff4d-l2qkb" May 8 00:03:20.733204 kubelet[4417]: I0508 00:03:20.733127 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/026d55fd-56c2-4710-8822-2dacd50ff239-config-volume\") pod \"coredns-7db6d8ff4d-l2qkb\" (UID: \"026d55fd-56c2-4710-8822-2dacd50ff239\") " pod="kube-system/coredns-7db6d8ff4d-l2qkb" May 8 00:03:20.733204 kubelet[4417]: I0508 00:03:20.733148 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/babaf8df-3800-4281-aaf6-109330f3cc79-config-volume\") pod \"coredns-7db6d8ff4d-hf55x\" (UID: \"babaf8df-3800-4281-aaf6-109330f3cc79\") " pod="kube-system/coredns-7db6d8ff4d-hf55x" May 8 00:03:20.733204 kubelet[4417]: I0508 00:03:20.733164 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/805c0cbf-349b-444e-8d83-d615f09bd8e1-calico-apiserver-certs\") pod \"calico-apiserver-589b849bb-zzzwl\" (UID: \"805c0cbf-349b-444e-8d83-d615f09bd8e1\") " 
pod="calico-apiserver/calico-apiserver-589b849bb-zzzwl" May 8 00:03:20.733204 kubelet[4417]: I0508 00:03:20.733183 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/479ec137-6439-484c-93c9-297d6865167f-tigera-ca-bundle\") pod \"calico-kube-controllers-76f6cb6bb-rlfnw\" (UID: \"479ec137-6439-484c-93c9-297d6865167f\") " pod="calico-system/calico-kube-controllers-76f6cb6bb-rlfnw" May 8 00:03:20.733367 kubelet[4417]: I0508 00:03:20.733217 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jznf4\" (UniqueName: \"kubernetes.io/projected/babaf8df-3800-4281-aaf6-109330f3cc79-kube-api-access-jznf4\") pod \"coredns-7db6d8ff4d-hf55x\" (UID: \"babaf8df-3800-4281-aaf6-109330f3cc79\") " pod="kube-system/coredns-7db6d8ff4d-hf55x" May 8 00:03:20.733367 kubelet[4417]: I0508 00:03:20.733236 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwfjv\" (UniqueName: \"kubernetes.io/projected/805c0cbf-349b-444e-8d83-d615f09bd8e1-kube-api-access-pwfjv\") pod \"calico-apiserver-589b849bb-zzzwl\" (UID: \"805c0cbf-349b-444e-8d83-d615f09bd8e1\") " pod="calico-apiserver/calico-apiserver-589b849bb-zzzwl" May 8 00:03:20.733367 kubelet[4417]: I0508 00:03:20.733257 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gh2g\" (UniqueName: \"kubernetes.io/projected/479ec137-6439-484c-93c9-297d6865167f-kube-api-access-6gh2g\") pod \"calico-kube-controllers-76f6cb6bb-rlfnw\" (UID: \"479ec137-6439-484c-93c9-297d6865167f\") " pod="calico-system/calico-kube-controllers-76f6cb6bb-rlfnw" May 8 00:03:20.733367 kubelet[4417]: I0508 00:03:20.733275 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skj2p\" (UniqueName: 
\"kubernetes.io/projected/70fa2288-52a8-48ed-9660-5941586aeb55-kube-api-access-skj2p\") pod \"calico-apiserver-589b849bb-s6cgn\" (UID: \"70fa2288-52a8-48ed-9660-5941586aeb55\") " pod="calico-apiserver/calico-apiserver-589b849bb-s6cgn" May 8 00:03:20.733367 kubelet[4417]: I0508 00:03:20.733309 4417 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/70fa2288-52a8-48ed-9660-5941586aeb55-calico-apiserver-certs\") pod \"calico-apiserver-589b849bb-s6cgn\" (UID: \"70fa2288-52a8-48ed-9660-5941586aeb55\") " pod="calico-apiserver/calico-apiserver-589b849bb-s6cgn" May 8 00:03:20.819025 systemd[1]: Created slice kubepods-besteffort-podecdba017_f46d_4d93_b251_179fdcd1b734.slice - libcontainer container kubepods-besteffort-podecdba017_f46d_4d93_b251_179fdcd1b734.slice. May 8 00:03:20.820680 containerd[2702]: time="2025-05-08T00:03:20.820640581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78qnk,Uid:ecdba017-f46d-4d93-b251-179fdcd1b734,Namespace:calico-system,Attempt:0,}" May 8 00:03:20.825855 containerd[2702]: time="2025-05-08T00:03:20.825813911Z" level=info msg="shim disconnected" id=60b27164670b113ab46ae8f46088805b8b0dc9600eb750a3b2c7736fa93fad52 namespace=k8s.io May 8 00:03:20.825884 containerd[2702]: time="2025-05-08T00:03:20.825854113Z" level=warning msg="cleaning up after shim disconnected" id=60b27164670b113ab46ae8f46088805b8b0dc9600eb750a3b2c7736fa93fad52 namespace=k8s.io May 8 00:03:20.825884 containerd[2702]: time="2025-05-08T00:03:20.825861753Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:03:20.864679 containerd[2702]: time="2025-05-08T00:03:20.864564929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 00:03:20.899766 containerd[2702]: time="2025-05-08T00:03:20.899714759Z" level=error msg="Failed to destroy network for sandbox 
\"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:20.900339 containerd[2702]: time="2025-05-08T00:03:20.900313744Z" level=error msg="encountered an error cleaning up failed sandbox \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:20.900403 containerd[2702]: time="2025-05-08T00:03:20.900386106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78qnk,Uid:ecdba017-f46d-4d93-b251-179fdcd1b734,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:20.900628 kubelet[4417]: E0508 00:03:20.900582 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:20.900880 kubelet[4417]: E0508 00:03:20.900659 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-78qnk" May 8 00:03:20.900880 kubelet[4417]: E0508 00:03:20.900682 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-78qnk" May 8 00:03:20.900880 kubelet[4417]: E0508 00:03:20.900722 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-78qnk_calico-system(ecdba017-f46d-4d93-b251-179fdcd1b734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-78qnk_calico-system(ecdba017-f46d-4d93-b251-179fdcd1b734)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-78qnk" podUID="ecdba017-f46d-4d93-b251-179fdcd1b734" May 8 00:03:20.937421 containerd[2702]: time="2025-05-08T00:03:20.937391093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hf55x,Uid:babaf8df-3800-4281-aaf6-109330f3cc79,Namespace:kube-system,Attempt:0,}" May 8 00:03:20.941875 containerd[2702]: time="2025-05-08T00:03:20.941850394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76f6cb6bb-rlfnw,Uid:479ec137-6439-484c-93c9-297d6865167f,Namespace:calico-system,Attempt:0,}" May 8 00:03:20.945326 containerd[2702]: 
time="2025-05-08T00:03:20.945303935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l2qkb,Uid:026d55fd-56c2-4710-8822-2dacd50ff239,Namespace:kube-system,Attempt:0,}" May 8 00:03:20.948757 containerd[2702]: time="2025-05-08T00:03:20.948728874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b849bb-s6cgn,Uid:70fa2288-52a8-48ed-9660-5941586aeb55,Namespace:calico-apiserver,Attempt:0,}" May 8 00:03:20.952216 containerd[2702]: time="2025-05-08T00:03:20.952190335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b849bb-zzzwl,Uid:805c0cbf-349b-444e-8d83-d615f09bd8e1,Namespace:calico-apiserver,Attempt:0,}" May 8 00:03:20.981900 containerd[2702]: time="2025-05-08T00:03:20.981851942Z" level=error msg="Failed to destroy network for sandbox \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:20.982197 containerd[2702]: time="2025-05-08T00:03:20.982173235Z" level=error msg="encountered an error cleaning up failed sandbox \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:20.982249 containerd[2702]: time="2025-05-08T00:03:20.982232077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hf55x,Uid:babaf8df-3800-4281-aaf6-109330f3cc79,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:20.982466 kubelet[4417]: E0508 00:03:20.982428 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:20.982501 kubelet[4417]: E0508 00:03:20.982488 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hf55x" May 8 00:03:20.982533 kubelet[4417]: E0508 00:03:20.982508 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hf55x" May 8 00:03:20.982571 kubelet[4417]: E0508 00:03:20.982546 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hf55x_kube-system(babaf8df-3800-4281-aaf6-109330f3cc79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hf55x_kube-system(babaf8df-3800-4281-aaf6-109330f3cc79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hf55x" podUID="babaf8df-3800-4281-aaf6-109330f3cc79" May 8 00:03:20.990087 containerd[2702]: time="2025-05-08T00:03:20.990045035Z" level=error msg="Failed to destroy network for sandbox \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:20.990408 containerd[2702]: time="2025-05-08T00:03:20.990385809Z" level=error msg="encountered an error cleaning up failed sandbox \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:20.990459 containerd[2702]: time="2025-05-08T00:03:20.990442492Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76f6cb6bb-rlfnw,Uid:479ec137-6439-484c-93c9-297d6865167f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:20.990633 kubelet[4417]: E0508 00:03:20.990603 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:20.990668 kubelet[4417]: E0508 00:03:20.990653 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76f6cb6bb-rlfnw" May 8 00:03:20.990695 kubelet[4417]: E0508 00:03:20.990672 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76f6cb6bb-rlfnw" May 8 00:03:20.990748 kubelet[4417]: E0508 00:03:20.990727 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76f6cb6bb-rlfnw_calico-system(479ec137-6439-484c-93c9-297d6865167f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76f6cb6bb-rlfnw_calico-system(479ec137-6439-484c-93c9-297d6865167f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76f6cb6bb-rlfnw" podUID="479ec137-6439-484c-93c9-297d6865167f" May 8 00:03:21.011759 containerd[2702]: time="2025-05-08T00:03:21.011713979Z" level=error msg="Failed to destroy network for sandbox \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:21.012077 containerd[2702]: time="2025-05-08T00:03:21.012055472Z" level=error msg="encountered an error cleaning up failed sandbox \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:21.012129 containerd[2702]: time="2025-05-08T00:03:21.012111914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b849bb-zzzwl,Uid:805c0cbf-349b-444e-8d83-d615f09bd8e1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:21.012180 containerd[2702]: time="2025-05-08T00:03:21.012057992Z" level=error msg="Failed to destroy network for sandbox \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:21.012290 kubelet[4417]: E0508 00:03:21.012262 4417 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:21.012325 kubelet[4417]: E0508 00:03:21.012307 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589b849bb-zzzwl" May 8 00:03:21.012351 kubelet[4417]: E0508 00:03:21.012325 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589b849bb-zzzwl" May 8 00:03:21.012393 containerd[2702]: time="2025-05-08T00:03:21.012362604Z" level=error msg="Failed to destroy network for sandbox \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:21.012421 kubelet[4417]: E0508 00:03:21.012376 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-589b849bb-zzzwl_calico-apiserver(805c0cbf-349b-444e-8d83-d615f09bd8e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-589b849bb-zzzwl_calico-apiserver(805c0cbf-349b-444e-8d83-d615f09bd8e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589b849bb-zzzwl" podUID="805c0cbf-349b-444e-8d83-d615f09bd8e1" May 8 00:03:21.012461 containerd[2702]: time="2025-05-08T00:03:21.012435847Z" level=error msg="encountered an error cleaning up failed sandbox \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:21.012504 containerd[2702]: time="2025-05-08T00:03:21.012488689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l2qkb,Uid:026d55fd-56c2-4710-8822-2dacd50ff239,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:21.012628 kubelet[4417]: E0508 00:03:21.012606 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:21.012657 kubelet[4417]: E0508 00:03:21.012641 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l2qkb" May 8 00:03:21.012683 kubelet[4417]: E0508 00:03:21.012662 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l2qkb" May 8 00:03:21.012706 containerd[2702]: time="2025-05-08T00:03:21.012655456Z" level=error msg="encountered an error cleaning up failed sandbox \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:21.012727 kubelet[4417]: E0508 00:03:21.012692 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-l2qkb_kube-system(026d55fd-56c2-4710-8822-2dacd50ff239)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-l2qkb_kube-system(026d55fd-56c2-4710-8822-2dacd50ff239)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-l2qkb" podUID="026d55fd-56c2-4710-8822-2dacd50ff239" May 8 00:03:21.012764 containerd[2702]: time="2025-05-08T00:03:21.012700097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b849bb-s6cgn,Uid:70fa2288-52a8-48ed-9660-5941586aeb55,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:21.012844 kubelet[4417]: E0508 00:03:21.012819 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:21.012875 kubelet[4417]: E0508 00:03:21.012859 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589b849bb-s6cgn" May 8 00:03:21.012900 kubelet[4417]: E0508 00:03:21.012876 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589b849bb-s6cgn" May 8 00:03:21.012926 kubelet[4417]: E0508 00:03:21.012908 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-589b849bb-s6cgn_calico-apiserver(70fa2288-52a8-48ed-9660-5941586aeb55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-589b849bb-s6cgn_calico-apiserver(70fa2288-52a8-48ed-9660-5941586aeb55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589b849bb-s6cgn" podUID="70fa2288-52a8-48ed-9660-5941586aeb55" May 8 00:03:21.320254 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173-shm.mount: Deactivated successfully. 
May 8 00:03:21.866515 kubelet[4417]: I0508 00:03:21.866494 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d"
May 8 00:03:21.867061 containerd[2702]: time="2025-05-08T00:03:21.867030006Z" level=info msg="StopPodSandbox for \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\""
May 8 00:03:21.867244 containerd[2702]: time="2025-05-08T00:03:21.867197693Z" level=info msg="Ensure that sandbox ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d in task-service has been cleanup successfully"
May 8 00:03:21.867276 kubelet[4417]: I0508 00:03:21.867183 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a"
May 8 00:03:21.867403 containerd[2702]: time="2025-05-08T00:03:21.867388980Z" level=info msg="TearDown network for sandbox \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\" successfully"
May 8 00:03:21.867427 containerd[2702]: time="2025-05-08T00:03:21.867403261Z" level=info msg="StopPodSandbox for \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\" returns successfully"
May 8 00:03:21.867618 containerd[2702]: time="2025-05-08T00:03:21.867596708Z" level=info msg="StopPodSandbox for \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\""
May 8 00:03:21.867793 containerd[2702]: time="2025-05-08T00:03:21.867772115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76f6cb6bb-rlfnw,Uid:479ec137-6439-484c-93c9-297d6865167f,Namespace:calico-system,Attempt:1,}"
May 8 00:03:21.867895 containerd[2702]: time="2025-05-08T00:03:21.867879919Z" level=info msg="Ensure that sandbox 0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a in task-service has been cleanup successfully"
May 8 00:03:21.868057 containerd[2702]: time="2025-05-08T00:03:21.868043166Z" level=info msg="TearDown network for sandbox \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\" successfully"
May 8 00:03:21.868083 containerd[2702]: time="2025-05-08T00:03:21.868058126Z" level=info msg="StopPodSandbox for \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\" returns successfully"
May 8 00:03:21.868151 kubelet[4417]: I0508 00:03:21.868138 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173"
May 8 00:03:21.868358 containerd[2702]: time="2025-05-08T00:03:21.868339977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hf55x,Uid:babaf8df-3800-4281-aaf6-109330f3cc79,Namespace:kube-system,Attempt:1,}"
May 8 00:03:21.868520 containerd[2702]: time="2025-05-08T00:03:21.868488223Z" level=info msg="StopPodSandbox for \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\""
May 8 00:03:21.868682 containerd[2702]: time="2025-05-08T00:03:21.868668830Z" level=info msg="Ensure that sandbox d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173 in task-service has been cleanup successfully"
May 8 00:03:21.868924 kubelet[4417]: I0508 00:03:21.868911 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5"
May 8 00:03:21.868927 systemd[1]: run-netns-cni\x2d4e7f96ca\x2dc092\x2d2e35\x2d16c5\x2d0fd5e89915fc.mount: Deactivated successfully.
May 8 00:03:21.868999 containerd[2702]: time="2025-05-08T00:03:21.868921800Z" level=info msg="TearDown network for sandbox \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\" successfully"
May 8 00:03:21.868999 containerd[2702]: time="2025-05-08T00:03:21.868937120Z" level=info msg="StopPodSandbox for \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\" returns successfully"
May 8 00:03:21.869306 containerd[2702]: time="2025-05-08T00:03:21.869289854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78qnk,Uid:ecdba017-f46d-4d93-b251-179fdcd1b734,Namespace:calico-system,Attempt:1,}"
May 8 00:03:21.869329 containerd[2702]: time="2025-05-08T00:03:21.869295934Z" level=info msg="StopPodSandbox for \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\""
May 8 00:03:21.869493 containerd[2702]: time="2025-05-08T00:03:21.869479742Z" level=info msg="Ensure that sandbox 61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5 in task-service has been cleanup successfully"
May 8 00:03:21.869645 containerd[2702]: time="2025-05-08T00:03:21.869631347Z" level=info msg="TearDown network for sandbox \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\" successfully"
May 8 00:03:21.869671 containerd[2702]: time="2025-05-08T00:03:21.869645348Z" level=info msg="StopPodSandbox for \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\" returns successfully"
May 8 00:03:21.869695 kubelet[4417]: I0508 00:03:21.869668 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245"
May 8 00:03:21.869989 containerd[2702]: time="2025-05-08T00:03:21.869975241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b849bb-zzzwl,Uid:805c0cbf-349b-444e-8d83-d615f09bd8e1,Namespace:calico-apiserver,Attempt:1,}"
May 8 00:03:21.870196 containerd[2702]: time="2025-05-08T00:03:21.869996762Z" level=info msg="StopPodSandbox for \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\""
May 8 00:03:21.870322 containerd[2702]: time="2025-05-08T00:03:21.870307894Z" level=info msg="Ensure that sandbox 620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245 in task-service has been cleanup successfully"
May 8 00:03:21.870415 kubelet[4417]: I0508 00:03:21.870401 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e"
May 8 00:03:21.870588 containerd[2702]: time="2025-05-08T00:03:21.870571544Z" level=info msg="TearDown network for sandbox \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\" successfully"
May 8 00:03:21.870612 containerd[2702]: time="2025-05-08T00:03:21.870588345Z" level=info msg="StopPodSandbox for \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\" returns successfully"
May 8 00:03:21.870904 systemd[1]: run-netns-cni\x2d1923af2e\x2ddd68\x2da407\x2de492\x2d265ad8a2823e.mount: Deactivated successfully.
May 8 00:03:21.870981 systemd[1]: run-netns-cni\x2ded89a4c9\x2dcb07\x2d2cf0\x2d39f9\x2d2a213357d0ca.mount: Deactivated successfully.
May 8 00:03:21.871023 containerd[2702]: time="2025-05-08T00:03:21.871000521Z" level=info msg="StopPodSandbox for \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\""
May 8 00:03:21.871089 containerd[2702]: time="2025-05-08T00:03:21.871064763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b849bb-s6cgn,Uid:70fa2288-52a8-48ed-9660-5941586aeb55,Namespace:calico-apiserver,Attempt:1,}"
May 8 00:03:21.871031 systemd[1]: run-netns-cni\x2d34e6e998\x2d2c31\x2d723b\x2d504d\x2d7b7a03211872.mount: Deactivated successfully.
May 8 00:03:21.871157 containerd[2702]: time="2025-05-08T00:03:21.871142486Z" level=info msg="Ensure that sandbox 203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e in task-service has been cleanup successfully"
May 8 00:03:21.871313 containerd[2702]: time="2025-05-08T00:03:21.871299372Z" level=info msg="TearDown network for sandbox \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\" successfully"
May 8 00:03:21.871333 containerd[2702]: time="2025-05-08T00:03:21.871313373Z" level=info msg="StopPodSandbox for \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\" returns successfully"
May 8 00:03:21.871658 containerd[2702]: time="2025-05-08T00:03:21.871640625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l2qkb,Uid:026d55fd-56c2-4710-8822-2dacd50ff239,Namespace:kube-system,Attempt:1,}"
May 8 00:03:21.873999 systemd[1]: run-netns-cni\x2d8c41d2d3\x2d0f46\x2d0e2b\x2d2e48\x2d34c5db56c99e.mount: Deactivated successfully.
May 8 00:03:21.874078 systemd[1]: run-netns-cni\x2d2b95fdc5\x2df3fc\x2d77a7\x2dadc3\x2d739f74893faa.mount: Deactivated successfully.
May 8 00:03:21.918418 containerd[2702]: time="2025-05-08T00:03:21.918367841Z" level=error msg="Failed to destroy network for sandbox \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.918696 containerd[2702]: time="2025-05-08T00:03:21.918662292Z" level=error msg="Failed to destroy network for sandbox \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.918803 containerd[2702]: time="2025-05-08T00:03:21.918780577Z" level=error msg="encountered an error cleaning up failed sandbox \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.918867 containerd[2702]: time="2025-05-08T00:03:21.918851540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b849bb-zzzwl,Uid:805c0cbf-349b-444e-8d83-d615f09bd8e1,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.918935 containerd[2702]: time="2025-05-08T00:03:21.918864940Z" level=error msg="Failed to destroy network for sandbox \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.918990 containerd[2702]: time="2025-05-08T00:03:21.918969584Z" level=error msg="encountered an error cleaning up failed sandbox \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.919038 containerd[2702]: time="2025-05-08T00:03:21.919021666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b849bb-s6cgn,Uid:70fa2288-52a8-48ed-9660-5941586aeb55,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.919072 kubelet[4417]: E0508 00:03:21.919044 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.919310 kubelet[4417]: E0508 00:03:21.919101 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589b849bb-zzzwl"
May 8 00:03:21.919310 kubelet[4417]: E0508 00:03:21.919122 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589b849bb-zzzwl"
May 8 00:03:21.919310 kubelet[4417]: E0508 00:03:21.919154 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.919388 containerd[2702]: time="2025-05-08T00:03:21.919217194Z" level=error msg="encountered an error cleaning up failed sandbox \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.919388 containerd[2702]: time="2025-05-08T00:03:21.919241835Z" level=error msg="Failed to destroy network for sandbox \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.919388 containerd[2702]: time="2025-05-08T00:03:21.919259035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hf55x,Uid:babaf8df-3800-4281-aaf6-109330f3cc79,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.919449 kubelet[4417]: E0508 00:03:21.919163 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-589b849bb-zzzwl_calico-apiserver(805c0cbf-349b-444e-8d83-d615f09bd8e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-589b849bb-zzzwl_calico-apiserver(805c0cbf-349b-444e-8d83-d615f09bd8e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589b849bb-zzzwl" podUID="805c0cbf-349b-444e-8d83-d615f09bd8e1"
May 8 00:03:21.919449 kubelet[4417]: E0508 00:03:21.919199 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589b849bb-s6cgn"
May 8 00:03:21.919449 kubelet[4417]: E0508 00:03:21.919217 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589b849bb-s6cgn"
May 8 00:03:21.919535 containerd[2702]: time="2025-05-08T00:03:21.919404441Z" level=error msg="Failed to destroy network for sandbox \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.919559 kubelet[4417]: E0508 00:03:21.919248 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-589b849bb-s6cgn_calico-apiserver(70fa2288-52a8-48ed-9660-5941586aeb55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-589b849bb-s6cgn_calico-apiserver(70fa2288-52a8-48ed-9660-5941586aeb55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589b849bb-s6cgn" podUID="70fa2288-52a8-48ed-9660-5941586aeb55"
May 8 00:03:21.919559 kubelet[4417]: E0508 00:03:21.919357 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.919559 kubelet[4417]: E0508 00:03:21.919398 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hf55x"
May 8 00:03:21.919631 kubelet[4417]: E0508 00:03:21.919414 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hf55x"
May 8 00:03:21.919631 kubelet[4417]: E0508 00:03:21.919445 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hf55x_kube-system(babaf8df-3800-4281-aaf6-109330f3cc79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hf55x_kube-system(babaf8df-3800-4281-aaf6-109330f3cc79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hf55x" podUID="babaf8df-3800-4281-aaf6-109330f3cc79"
May 8 00:03:21.919687 containerd[2702]: time="2025-05-08T00:03:21.919556127Z" level=error msg="encountered an error cleaning up failed sandbox \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.919687 containerd[2702]: time="2025-05-08T00:03:21.919603809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76f6cb6bb-rlfnw,Uid:479ec137-6439-484c-93c9-297d6865167f,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.919687 containerd[2702]: time="2025-05-08T00:03:21.919662771Z" level=error msg="encountered an error cleaning up failed sandbox \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.919759 containerd[2702]: time="2025-05-08T00:03:21.919706293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78qnk,Uid:ecdba017-f46d-4d93-b251-179fdcd1b734,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.919784 kubelet[4417]: E0508 00:03:21.919719 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.919784 kubelet[4417]: E0508 00:03:21.919751 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76f6cb6bb-rlfnw"
May 8 00:03:21.919784 kubelet[4417]: E0508 00:03:21.919767 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76f6cb6bb-rlfnw"
May 8 00:03:21.919855 kubelet[4417]: E0508 00:03:21.919800 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76f6cb6bb-rlfnw_calico-system(479ec137-6439-484c-93c9-297d6865167f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76f6cb6bb-rlfnw_calico-system(479ec137-6439-484c-93c9-297d6865167f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76f6cb6bb-rlfnw" podUID="479ec137-6439-484c-93c9-297d6865167f"
May 8 00:03:21.919855 kubelet[4417]: E0508 00:03:21.919800 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.919914 kubelet[4417]: E0508 00:03:21.919854 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-78qnk"
May 8 00:03:21.919914 kubelet[4417]: E0508 00:03:21.919870 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-78qnk"
May 8 00:03:21.919914 kubelet[4417]: E0508 00:03:21.919901 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-78qnk_calico-system(ecdba017-f46d-4d93-b251-179fdcd1b734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-78qnk_calico-system(ecdba017-f46d-4d93-b251-179fdcd1b734)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-78qnk" podUID="ecdba017-f46d-4d93-b251-179fdcd1b734"
May 8 00:03:21.924288 containerd[2702]: time="2025-05-08T00:03:21.924252229Z" level=error msg="Failed to destroy network for sandbox \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.924765 containerd[2702]: time="2025-05-08T00:03:21.924742248Z" level=error msg="encountered an error cleaning up failed sandbox \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.924810 containerd[2702]: time="2025-05-08T00:03:21.924789250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l2qkb,Uid:026d55fd-56c2-4710-8822-2dacd50ff239,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.924927 kubelet[4417]: E0508 00:03:21.924907 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 8 00:03:21.924953 kubelet[4417]: E0508 00:03:21.924941 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l2qkb"
May 8 00:03:21.924979 kubelet[4417]: E0508 00:03:21.924958 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l2qkb"
May 8 00:03:21.925012 kubelet[4417]: E0508 00:03:21.924988 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-l2qkb_kube-system(026d55fd-56c2-4710-8822-2dacd50ff239)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-l2qkb_kube-system(026d55fd-56c2-4710-8822-2dacd50ff239)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-l2qkb" podUID="026d55fd-56c2-4710-8822-2dacd50ff239"
May 8 00:03:22.316486 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7-shm.mount: Deactivated successfully.
May 8 00:03:22.681319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount581735012.mount: Deactivated successfully.
May 8 00:03:22.699365 containerd[2702]: time="2025-05-08T00:03:22.699329289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:22.699468 containerd[2702]: time="2025-05-08T00:03:22.699349610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893"
May 8 00:03:22.700089 containerd[2702]: time="2025-05-08T00:03:22.700070277Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:22.701724 containerd[2702]: time="2025-05-08T00:03:22.701693857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:22.702396 containerd[2702]: time="2025-05-08T00:03:22.702371322Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 1.837765672s"
May 8 00:03:22.702429 containerd[2702]: time="2025-05-08T00:03:22.702400123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\""
May 8 00:03:22.708518 containerd[2702]: time="2025-05-08T00:03:22.708485189Z" level=info msg="CreateContainer within sandbox \"ad9610d78830eb12f30bb58dd9ac2d2bb3b421109715899e41830b6d990305d4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 8 00:03:22.734605 containerd[2702]: time="2025-05-08T00:03:22.734565037Z" level=info msg="CreateContainer within sandbox \"ad9610d78830eb12f30bb58dd9ac2d2bb3b421109715899e41830b6d990305d4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3261cb6f585643cf6e69ff932023c7c7dad3001132c636379d9dc43644aff41b\""
May 8 00:03:22.734991 containerd[2702]: time="2025-05-08T00:03:22.734965052Z" level=info msg="StartContainer for \"3261cb6f585643cf6e69ff932023c7c7dad3001132c636379d9dc43644aff41b\""
May 8 00:03:22.767979 systemd[1]: Started cri-containerd-3261cb6f585643cf6e69ff932023c7c7dad3001132c636379d9dc43644aff41b.scope - libcontainer container 3261cb6f585643cf6e69ff932023c7c7dad3001132c636379d9dc43644aff41b.
May 8 00:03:22.790028 containerd[2702]: time="2025-05-08T00:03:22.789997494Z" level=info msg="StartContainer for \"3261cb6f585643cf6e69ff932023c7c7dad3001132c636379d9dc43644aff41b\" returns successfully"
May 8 00:03:22.874749 kubelet[4417]: I0508 00:03:22.874695 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7"
May 8 00:03:22.875478 containerd[2702]: time="2025-05-08T00:03:22.875456226Z" level=info msg="StopPodSandbox for \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\""
May 8 00:03:22.875683 containerd[2702]: time="2025-05-08T00:03:22.875641913Z" level=info msg="Ensure that sandbox fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7 in task-service has been cleanup successfully"
May 8 00:03:22.875913 containerd[2702]: time="2025-05-08T00:03:22.875896082Z" level=info msg="TearDown network for sandbox \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\" successfully"
May 8 00:03:22.875936 containerd[2702]: time="2025-05-08T00:03:22.875913243Z" level=info msg="StopPodSandbox for \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\" returns successfully"
May 8 00:03:22.877005 containerd[2702]: time="2025-05-08T00:03:22.876980003Z" level=info msg="StopPodSandbox for \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\""
May 8 00:03:22.877089 containerd[2702]: time="2025-05-08T00:03:22.877075326Z" level=info msg="TearDown network for sandbox \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\" successfully"
May 8 00:03:22.877109 containerd[2702]: time="2025-05-08T00:03:22.877090447Z" level=info msg="StopPodSandbox for \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\" returns successfully"
May 8 00:03:22.877157 kubelet[4417]: I0508 00:03:22.877142 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2"
May 8 00:03:22.877435 containerd[2702]: time="2025-05-08T00:03:22.877411539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76f6cb6bb-rlfnw,Uid:479ec137-6439-484c-93c9-297d6865167f,Namespace:calico-system,Attempt:2,}"
May 8 00:03:22.877533 containerd[2702]: time="2025-05-08T00:03:22.877508542Z" level=info msg="StopPodSandbox for \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\""
May 8 00:03:22.877769 containerd[2702]: time="2025-05-08T00:03:22.877743791Z" level=info msg="Ensure that sandbox 7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2 in task-service has been cleanup successfully"
May 8 00:03:22.878071 containerd[2702]: time="2025-05-08T00:03:22.878057363Z" level=info msg="TearDown network for sandbox \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\" successfully"
May 8 00:03:22.878092 containerd[2702]: time="2025-05-08T00:03:22.878072483Z" level=info msg="StopPodSandbox for \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\" returns successfully"
May 8 00:03:22.878516 containerd[2702]: time="2025-05-08T00:03:22.878498539Z" level=info msg="StopPodSandbox for \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\""
May 8 00:03:22.878585 containerd[2702]: time="2025-05-08T00:03:22.878574782Z" level=info msg="TearDown network for sandbox \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\" successfully"
May 8 00:03:22.878609 containerd[2702]: time="2025-05-08T00:03:22.878585702Z" level=info msg="StopPodSandbox for \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\" returns successfully"
May 8 00:03:22.878676 kubelet[4417]: I0508 00:03:22.878664 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e"
May 8 00:03:22.878911 containerd[2702]: time="2025-05-08T00:03:22.878894434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hf55x,Uid:babaf8df-3800-4281-aaf6-109330f3cc79,Namespace:kube-system,Attempt:2,}"
May 8 00:03:22.879061 containerd[2702]: time="2025-05-08T00:03:22.879044319Z" level=info msg="StopPodSandbox for \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\""
May 8 00:03:22.879212 containerd[2702]: time="2025-05-08T00:03:22.879198005Z" level=info msg="Ensure that sandbox 3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e in task-service has been cleanup successfully"
May 8 00:03:22.879428 containerd[2702]: time="2025-05-08T00:03:22.879410373Z" level=info msg="TearDown network for sandbox \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\" successfully"
May 8 00:03:22.879447 containerd[2702]: time="2025-05-08T00:03:22.879428893Z" level=info msg="StopPodSandbox for
\"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\" returns successfully" May 8 00:03:22.879647 containerd[2702]: time="2025-05-08T00:03:22.879627541Z" level=info msg="StopPodSandbox for \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\"" May 8 00:03:22.879686 kubelet[4417]: I0508 00:03:22.879668 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852" May 8 00:03:22.879727 containerd[2702]: time="2025-05-08T00:03:22.879714544Z" level=info msg="TearDown network for sandbox \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\" successfully" May 8 00:03:22.879796 containerd[2702]: time="2025-05-08T00:03:22.879727025Z" level=info msg="StopPodSandbox for \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\" returns successfully" May 8 00:03:22.880722 containerd[2702]: time="2025-05-08T00:03:22.880701621Z" level=info msg="StopPodSandbox for \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\"" May 8 00:03:22.880870 containerd[2702]: time="2025-05-08T00:03:22.880856626Z" level=info msg="Ensure that sandbox 95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852 in task-service has been cleanup successfully" May 8 00:03:22.881265 containerd[2702]: time="2025-05-08T00:03:22.881134557Z" level=info msg="TearDown network for sandbox \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\" successfully" May 8 00:03:22.881265 containerd[2702]: time="2025-05-08T00:03:22.881152117Z" level=info msg="StopPodSandbox for \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\" returns successfully" May 8 00:03:22.881265 containerd[2702]: time="2025-05-08T00:03:22.881175078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78qnk,Uid:ecdba017-f46d-4d93-b251-179fdcd1b734,Namespace:calico-system,Attempt:2,}" May 8 00:03:22.881589 containerd[2702]: 
time="2025-05-08T00:03:22.881569613Z" level=info msg="StopPodSandbox for \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\"" May 8 00:03:22.881663 containerd[2702]: time="2025-05-08T00:03:22.881651896Z" level=info msg="TearDown network for sandbox \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\" successfully" May 8 00:03:22.881691 containerd[2702]: time="2025-05-08T00:03:22.881663816Z" level=info msg="StopPodSandbox for \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\" returns successfully" May 8 00:03:22.882409 containerd[2702]: time="2025-05-08T00:03:22.882390603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b849bb-zzzwl,Uid:805c0cbf-349b-444e-8d83-d615f09bd8e1,Namespace:calico-apiserver,Attempt:2,}" May 8 00:03:22.882473 kubelet[4417]: I0508 00:03:22.882456 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6" May 8 00:03:22.883036 containerd[2702]: time="2025-05-08T00:03:22.883008146Z" level=info msg="StopPodSandbox for \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\"" May 8 00:03:22.883176 containerd[2702]: time="2025-05-08T00:03:22.883160952Z" level=info msg="Ensure that sandbox 308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6 in task-service has been cleanup successfully" May 8 00:03:22.883399 containerd[2702]: time="2025-05-08T00:03:22.883379280Z" level=info msg="TearDown network for sandbox \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\" successfully" May 8 00:03:22.883432 containerd[2702]: time="2025-05-08T00:03:22.883397281Z" level=info msg="StopPodSandbox for \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\" returns successfully" May 8 00:03:22.883741 containerd[2702]: time="2025-05-08T00:03:22.883718573Z" level=info msg="StopPodSandbox for 
\"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\"" May 8 00:03:22.883810 containerd[2702]: time="2025-05-08T00:03:22.883795696Z" level=info msg="TearDown network for sandbox \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\" successfully" May 8 00:03:22.883838 containerd[2702]: time="2025-05-08T00:03:22.883812536Z" level=info msg="StopPodSandbox for \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\" returns successfully" May 8 00:03:22.884163 kubelet[4417]: I0508 00:03:22.884145 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556" May 8 00:03:22.884188 kubelet[4417]: I0508 00:03:22.884139 4417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zhh4v" podStartSLOduration=0.749125895 podStartE2EDuration="5.884128028s" podCreationTimestamp="2025-05-08 00:03:17 +0000 UTC" firstStartedPulling="2025-05-08 00:03:17.568097896 +0000 UTC m=+28.822694817" lastFinishedPulling="2025-05-08 00:03:22.703100029 +0000 UTC m=+33.957696950" observedRunningTime="2025-05-08 00:03:22.884035184 +0000 UTC m=+34.138632105" watchObservedRunningTime="2025-05-08 00:03:22.884128028 +0000 UTC m=+34.138724949" May 8 00:03:22.884246 containerd[2702]: time="2025-05-08T00:03:22.884185390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b849bb-s6cgn,Uid:70fa2288-52a8-48ed-9660-5941586aeb55,Namespace:calico-apiserver,Attempt:2,}" May 8 00:03:22.884596 containerd[2702]: time="2025-05-08T00:03:22.884577725Z" level=info msg="StopPodSandbox for \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\"" May 8 00:03:22.884744 containerd[2702]: time="2025-05-08T00:03:22.884728170Z" level=info msg="Ensure that sandbox d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556 in task-service has been cleanup successfully" May 8 00:03:22.884926 
containerd[2702]: time="2025-05-08T00:03:22.884906417Z" level=info msg="TearDown network for sandbox \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\" successfully" May 8 00:03:22.884926 containerd[2702]: time="2025-05-08T00:03:22.884923457Z" level=info msg="StopPodSandbox for \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\" returns successfully" May 8 00:03:22.885183 containerd[2702]: time="2025-05-08T00:03:22.885164066Z" level=info msg="StopPodSandbox for \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\"" May 8 00:03:22.885246 containerd[2702]: time="2025-05-08T00:03:22.885235469Z" level=info msg="TearDown network for sandbox \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\" successfully" May 8 00:03:22.885267 containerd[2702]: time="2025-05-08T00:03:22.885246469Z" level=info msg="StopPodSandbox for \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\" returns successfully" May 8 00:03:22.885661 containerd[2702]: time="2025-05-08T00:03:22.885638124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l2qkb,Uid:026d55fd-56c2-4710-8822-2dacd50ff239,Namespace:kube-system,Attempt:2,}" May 8 00:03:22.922895 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:03:22.922998 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 8 00:03:22.924201 containerd[2702]: time="2025-05-08T00:03:22.924152393Z" level=error msg="Failed to destroy network for sandbox \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.926109 containerd[2702]: time="2025-05-08T00:03:22.926075745Z" level=error msg="Failed to destroy network for sandbox \"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.926424 containerd[2702]: time="2025-05-08T00:03:22.926399437Z" level=error msg="Failed to destroy network for sandbox \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.926901 containerd[2702]: time="2025-05-08T00:03:22.926876534Z" level=error msg="Failed to destroy network for sandbox \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.932813 containerd[2702]: time="2025-05-08T00:03:22.932737632Z" level=error msg="encountered an error cleaning up failed sandbox \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 8 00:03:22.932813 containerd[2702]: time="2025-05-08T00:03:22.932777513Z" level=error msg="encountered an error cleaning up failed sandbox \"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.932898 containerd[2702]: time="2025-05-08T00:03:22.932813355Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76f6cb6bb-rlfnw,Uid:479ec137-6439-484c-93c9-297d6865167f,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.932898 containerd[2702]: time="2025-05-08T00:03:22.932836836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hf55x,Uid:babaf8df-3800-4281-aaf6-109330f3cc79,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.932898 containerd[2702]: time="2025-05-08T00:03:22.932797514Z" level=error msg="encountered an error cleaning up failed sandbox \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 8 00:03:22.932991 containerd[2702]: time="2025-05-08T00:03:22.932921719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78qnk,Uid:ecdba017-f46d-4d93-b251-179fdcd1b734,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.933015 containerd[2702]: time="2025-05-08T00:03:22.932756473Z" level=error msg="encountered an error cleaning up failed sandbox \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.933041 containerd[2702]: time="2025-05-08T00:03:22.933025563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b849bb-zzzwl,Uid:805c0cbf-349b-444e-8d83-d615f09bd8e1,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.933073 kubelet[4417]: E0508 00:03:22.933001 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 8 00:03:22.933073 kubelet[4417]: E0508 00:03:22.933053 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hf55x" May 8 00:03:22.933073 kubelet[4417]: E0508 00:03:22.933049 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.933073 kubelet[4417]: E0508 00:03:22.933071 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hf55x" May 8 00:03:22.933366 kubelet[4417]: E0508 00:03:22.933094 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-78qnk" May 8 00:03:22.933366 kubelet[4417]: E0508 00:03:22.933111 
4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-78qnk" May 8 00:03:22.933366 kubelet[4417]: E0508 00:03:22.933107 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hf55x_kube-system(babaf8df-3800-4281-aaf6-109330f3cc79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hf55x_kube-system(babaf8df-3800-4281-aaf6-109330f3cc79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hf55x" podUID="babaf8df-3800-4281-aaf6-109330f3cc79" May 8 00:03:22.933443 kubelet[4417]: E0508 00:03:22.933001 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.933443 kubelet[4417]: E0508 00:03:22.933141 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-78qnk_calico-system(ecdba017-f46d-4d93-b251-179fdcd1b734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-78qnk_calico-system(ecdba017-f46d-4d93-b251-179fdcd1b734)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-78qnk" podUID="ecdba017-f46d-4d93-b251-179fdcd1b734" May 8 00:03:22.933443 kubelet[4417]: E0508 00:03:22.933147 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.933529 kubelet[4417]: E0508 00:03:22.933174 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76f6cb6bb-rlfnw" May 8 00:03:22.933529 kubelet[4417]: E0508 00:03:22.933186 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589b849bb-zzzwl" May 8 00:03:22.933529 kubelet[4417]: 
E0508 00:03:22.933192 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76f6cb6bb-rlfnw" May 8 00:03:22.933529 kubelet[4417]: E0508 00:03:22.933202 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589b849bb-zzzwl" May 8 00:03:22.933612 kubelet[4417]: E0508 00:03:22.933224 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76f6cb6bb-rlfnw_calico-system(479ec137-6439-484c-93c9-297d6865167f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76f6cb6bb-rlfnw_calico-system(479ec137-6439-484c-93c9-297d6865167f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76f6cb6bb-rlfnw" podUID="479ec137-6439-484c-93c9-297d6865167f" May 8 00:03:22.933612 kubelet[4417]: E0508 00:03:22.933230 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-589b849bb-zzzwl_calico-apiserver(805c0cbf-349b-444e-8d83-d615f09bd8e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-589b849bb-zzzwl_calico-apiserver(805c0cbf-349b-444e-8d83-d615f09bd8e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589b849bb-zzzwl" podUID="805c0cbf-349b-444e-8d83-d615f09bd8e1" May 8 00:03:22.958390 containerd[2702]: time="2025-05-08T00:03:22.958332662Z" level=error msg="Failed to destroy network for sandbox \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.958690 containerd[2702]: time="2025-05-08T00:03:22.958667754Z" level=error msg="encountered an error cleaning up failed sandbox \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.958741 containerd[2702]: time="2025-05-08T00:03:22.958726076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l2qkb,Uid:026d55fd-56c2-4710-8822-2dacd50ff239,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 8 00:03:22.958911 containerd[2702]: time="2025-05-08T00:03:22.958885442Z" level=error msg="Failed to destroy network for sandbox \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.958940 kubelet[4417]: E0508 00:03:22.958901 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.958988 kubelet[4417]: E0508 00:03:22.958962 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l2qkb" May 8 00:03:22.959015 kubelet[4417]: E0508 00:03:22.958985 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l2qkb" May 8 00:03:22.959047 kubelet[4417]: E0508 00:03:22.959022 4417 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-l2qkb_kube-system(026d55fd-56c2-4710-8822-2dacd50ff239)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-l2qkb_kube-system(026d55fd-56c2-4710-8822-2dacd50ff239)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-l2qkb" podUID="026d55fd-56c2-4710-8822-2dacd50ff239" May 8 00:03:22.959172 containerd[2702]: time="2025-05-08T00:03:22.959153932Z" level=error msg="encountered an error cleaning up failed sandbox \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.959214 containerd[2702]: time="2025-05-08T00:03:22.959197294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b849bb-s6cgn,Uid:70fa2288-52a8-48ed-9660-5941586aeb55,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.959358 kubelet[4417]: E0508 00:03:22.959332 4417 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:03:22.959391 kubelet[4417]: E0508 00:03:22.959376 4417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589b849bb-s6cgn" May 8 00:03:22.959413 kubelet[4417]: E0508 00:03:22.959394 4417 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589b849bb-s6cgn" May 8 00:03:22.959453 kubelet[4417]: E0508 00:03:22.959426 4417 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-589b849bb-s6cgn_calico-apiserver(70fa2288-52a8-48ed-9660-5941586aeb55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-589b849bb-s6cgn_calico-apiserver(70fa2288-52a8-48ed-9660-5941586aeb55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589b849bb-s6cgn" podUID="70fa2288-52a8-48ed-9660-5941586aeb55" May 8 00:03:23.213729 
kubelet[4417]: I0508 00:03:23.213646 4417 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:03:23.317234 systemd[1]: run-netns-cni\x2dc633b49b\x2d5293\x2db4be\x2dc14a\x2d5fa399c69688.mount: Deactivated successfully. May 8 00:03:23.317315 systemd[1]: run-netns-cni\x2d11d9021f\x2d3e67\x2d1fad\x2ded8c\x2d0c2b371160e1.mount: Deactivated successfully. May 8 00:03:23.317360 systemd[1]: run-netns-cni\x2d2cc80226\x2dc5eb\x2dde7b\x2d9c0f\x2dcdea7f111c3f.mount: Deactivated successfully. May 8 00:03:23.317404 systemd[1]: run-netns-cni\x2d8dc1560e\x2d2cb1\x2d51a5\x2d432c\x2deb5749c32b2d.mount: Deactivated successfully. May 8 00:03:23.317446 systemd[1]: run-netns-cni\x2dba8ef243\x2d3d99\x2d4a34\x2d7cd3\x2d0923e7be8e5e.mount: Deactivated successfully. May 8 00:03:23.317486 systemd[1]: run-netns-cni\x2dc54dae67\x2d3fc4\x2d2c93\x2d8e25\x2dcc73f2ec844c.mount: Deactivated successfully. May 8 00:03:23.886790 kubelet[4417]: I0508 00:03:23.886767 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b" May 8 00:03:23.887197 containerd[2702]: time="2025-05-08T00:03:23.887172974Z" level=info msg="StopPodSandbox for \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\"" May 8 00:03:23.887428 containerd[2702]: time="2025-05-08T00:03:23.887330740Z" level=info msg="Ensure that sandbox e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b in task-service has been cleanup successfully" May 8 00:03:23.887520 containerd[2702]: time="2025-05-08T00:03:23.887506106Z" level=info msg="TearDown network for sandbox \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\" successfully" May 8 00:03:23.887547 containerd[2702]: time="2025-05-08T00:03:23.887520706Z" level=info msg="StopPodSandbox for \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\" returns successfully" May 8 00:03:23.887715 containerd[2702]: 
time="2025-05-08T00:03:23.887696152Z" level=info msg="StopPodSandbox for \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\"" May 8 00:03:23.887787 containerd[2702]: time="2025-05-08T00:03:23.887775995Z" level=info msg="TearDown network for sandbox \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\" successfully" May 8 00:03:23.887815 containerd[2702]: time="2025-05-08T00:03:23.887787396Z" level=info msg="StopPodSandbox for \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\" returns successfully" May 8 00:03:23.887897 kubelet[4417]: I0508 00:03:23.887882 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836" May 8 00:03:23.887992 containerd[2702]: time="2025-05-08T00:03:23.887977922Z" level=info msg="StopPodSandbox for \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\"" May 8 00:03:23.888054 containerd[2702]: time="2025-05-08T00:03:23.888043765Z" level=info msg="TearDown network for sandbox \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\" successfully" May 8 00:03:23.888075 containerd[2702]: time="2025-05-08T00:03:23.888053965Z" level=info msg="StopPodSandbox for \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\" returns successfully" May 8 00:03:23.888234 containerd[2702]: time="2025-05-08T00:03:23.888219891Z" level=info msg="StopPodSandbox for \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\"" May 8 00:03:23.888381 containerd[2702]: time="2025-05-08T00:03:23.888369296Z" level=info msg="Ensure that sandbox 2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836 in task-service has been cleanup successfully" May 8 00:03:23.888521 containerd[2702]: time="2025-05-08T00:03:23.888509541Z" level=info msg="TearDown network for sandbox \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\" successfully" May 8 00:03:23.888545 
containerd[2702]: time="2025-05-08T00:03:23.888521182Z" level=info msg="StopPodSandbox for \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\" returns successfully" May 8 00:03:23.888564 containerd[2702]: time="2025-05-08T00:03:23.888536342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78qnk,Uid:ecdba017-f46d-4d93-b251-179fdcd1b734,Namespace:calico-system,Attempt:3,}" May 8 00:03:23.888714 containerd[2702]: time="2025-05-08T00:03:23.888699028Z" level=info msg="StopPodSandbox for \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\"" May 8 00:03:23.888793 containerd[2702]: time="2025-05-08T00:03:23.888781751Z" level=info msg="TearDown network for sandbox \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\" successfully" May 8 00:03:23.888816 containerd[2702]: time="2025-05-08T00:03:23.888792991Z" level=info msg="StopPodSandbox for \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\" returns successfully" May 8 00:03:23.888976 systemd[1]: run-netns-cni\x2dbbf4a60e\x2d3f04\x2dd1cd\x2d0237\x2ded5b36eff69e.mount: Deactivated successfully. 
May 8 00:03:23.889127 containerd[2702]: time="2025-05-08T00:03:23.889041680Z" level=info msg="StopPodSandbox for \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\"" May 8 00:03:23.889127 containerd[2702]: time="2025-05-08T00:03:23.889103082Z" level=info msg="TearDown network for sandbox \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\" successfully" May 8 00:03:23.889127 containerd[2702]: time="2025-05-08T00:03:23.889112243Z" level=info msg="StopPodSandbox for \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\" returns successfully" May 8 00:03:23.889184 kubelet[4417]: I0508 00:03:23.889134 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349" May 8 00:03:23.889456 containerd[2702]: time="2025-05-08T00:03:23.889439814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b849bb-zzzwl,Uid:805c0cbf-349b-444e-8d83-d615f09bd8e1,Namespace:calico-apiserver,Attempt:3,}" May 8 00:03:23.889480 containerd[2702]: time="2025-05-08T00:03:23.889461975Z" level=info msg="StopPodSandbox for \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\"" May 8 00:03:23.889600 containerd[2702]: time="2025-05-08T00:03:23.889587780Z" level=info msg="Ensure that sandbox 8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349 in task-service has been cleanup successfully" May 8 00:03:23.889814 containerd[2702]: time="2025-05-08T00:03:23.889796747Z" level=info msg="TearDown network for sandbox \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\" successfully" May 8 00:03:23.889844 containerd[2702]: time="2025-05-08T00:03:23.889814228Z" level=info msg="StopPodSandbox for \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\" returns successfully" May 8 00:03:23.890100 containerd[2702]: time="2025-05-08T00:03:23.890085997Z" level=info msg="StopPodSandbox for 
\"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\"" May 8 00:03:23.890170 containerd[2702]: time="2025-05-08T00:03:23.890159840Z" level=info msg="TearDown network for sandbox \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\" successfully" May 8 00:03:23.890192 containerd[2702]: time="2025-05-08T00:03:23.890170240Z" level=info msg="StopPodSandbox for \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\" returns successfully" May 8 00:03:23.890350 kubelet[4417]: I0508 00:03:23.890338 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9" May 8 00:03:23.890489 containerd[2702]: time="2025-05-08T00:03:23.890474251Z" level=info msg="StopPodSandbox for \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\"" May 8 00:03:23.890560 containerd[2702]: time="2025-05-08T00:03:23.890550054Z" level=info msg="TearDown network for sandbox \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\" successfully" May 8 00:03:23.890580 containerd[2702]: time="2025-05-08T00:03:23.890560014Z" level=info msg="StopPodSandbox for \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\" returns successfully" May 8 00:03:23.890675 containerd[2702]: time="2025-05-08T00:03:23.890659618Z" level=info msg="StopPodSandbox for \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\"" May 8 00:03:23.890810 containerd[2702]: time="2025-05-08T00:03:23.890794942Z" level=info msg="Ensure that sandbox 4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9 in task-service has been cleanup successfully" May 8 00:03:23.890904 systemd[1]: run-netns-cni\x2d219269e4\x2dd6bd\x2dd87d\x2dd56c\x2d8d6ed71f627b.mount: Deactivated successfully. 
May 8 00:03:23.890949 containerd[2702]: time="2025-05-08T00:03:23.890932187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b849bb-s6cgn,Uid:70fa2288-52a8-48ed-9660-5941586aeb55,Namespace:calico-apiserver,Attempt:3,}" May 8 00:03:23.890980 systemd[1]: run-netns-cni\x2dd179f44b\x2d4d02\x2db85e\x2dee09\x2d61cbff11b3ec.mount: Deactivated successfully. May 8 00:03:23.891119 containerd[2702]: time="2025-05-08T00:03:23.891104153Z" level=info msg="TearDown network for sandbox \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\" successfully" May 8 00:03:23.891145 containerd[2702]: time="2025-05-08T00:03:23.891119274Z" level=info msg="StopPodSandbox for \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\" returns successfully" May 8 00:03:23.891379 containerd[2702]: time="2025-05-08T00:03:23.891363843Z" level=info msg="StopPodSandbox for \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\"" May 8 00:03:23.891440 containerd[2702]: time="2025-05-08T00:03:23.891430405Z" level=info msg="TearDown network for sandbox \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\" successfully" May 8 00:03:23.891460 containerd[2702]: time="2025-05-08T00:03:23.891439925Z" level=info msg="StopPodSandbox for \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\" returns successfully" May 8 00:03:23.891533 kubelet[4417]: I0508 00:03:23.891517 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5" May 8 00:03:23.891876 containerd[2702]: time="2025-05-08T00:03:23.891851100Z" level=info msg="StopPodSandbox for \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\"" May 8 00:03:23.891943 containerd[2702]: time="2025-05-08T00:03:23.891933223Z" level=info msg="TearDown network for sandbox \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\" successfully" May 8 
00:03:23.891967 containerd[2702]: time="2025-05-08T00:03:23.891944143Z" level=info msg="StopPodSandbox for \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\" returns successfully" May 8 00:03:23.892084 containerd[2702]: time="2025-05-08T00:03:23.892071508Z" level=info msg="StopPodSandbox for \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\"" May 8 00:03:23.892198 containerd[2702]: time="2025-05-08T00:03:23.892185752Z" level=info msg="Ensure that sandbox dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5 in task-service has been cleanup successfully" May 8 00:03:23.892250 containerd[2702]: time="2025-05-08T00:03:23.892231273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l2qkb,Uid:026d55fd-56c2-4710-8822-2dacd50ff239,Namespace:kube-system,Attempt:3,}" May 8 00:03:23.892335 containerd[2702]: time="2025-05-08T00:03:23.892322757Z" level=info msg="TearDown network for sandbox \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\" successfully" May 8 00:03:23.892399 containerd[2702]: time="2025-05-08T00:03:23.892335557Z" level=info msg="StopPodSandbox for \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\" returns successfully" May 8 00:03:23.892530 containerd[2702]: time="2025-05-08T00:03:23.892515243Z" level=info msg="StopPodSandbox for \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\"" May 8 00:03:23.892597 containerd[2702]: time="2025-05-08T00:03:23.892585206Z" level=info msg="TearDown network for sandbox \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\" successfully" May 8 00:03:23.892629 containerd[2702]: time="2025-05-08T00:03:23.892596206Z" level=info msg="StopPodSandbox for \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\" returns successfully" May 8 00:03:23.892879 containerd[2702]: time="2025-05-08T00:03:23.892766212Z" level=info msg="StopPodSandbox for 
\"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\"" May 8 00:03:23.892879 containerd[2702]: time="2025-05-08T00:03:23.892853015Z" level=info msg="TearDown network for sandbox \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\" successfully" May 8 00:03:23.892879 containerd[2702]: time="2025-05-08T00:03:23.892863576Z" level=info msg="StopPodSandbox for \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\" returns successfully" May 8 00:03:23.892986 systemd[1]: run-netns-cni\x2d0da76699\x2d6576\x2d75e9\x2d9eb1\x2d81fabb4746ba.mount: Deactivated successfully. May 8 00:03:23.893357 containerd[2702]: time="2025-05-08T00:03:23.893158026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76f6cb6bb-rlfnw,Uid:479ec137-6439-484c-93c9-297d6865167f,Namespace:calico-system,Attempt:3,}" May 8 00:03:23.893435 kubelet[4417]: I0508 00:03:23.893347 4417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7" May 8 00:03:23.893435 kubelet[4417]: I0508 00:03:23.893377 4417 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:03:23.893849 containerd[2702]: time="2025-05-08T00:03:23.893828570Z" level=info msg="StopPodSandbox for \"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\"" May 8 00:03:23.893959 containerd[2702]: time="2025-05-08T00:03:23.893945294Z" level=info msg="Ensure that sandbox 01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7 in task-service has been cleanup successfully" May 8 00:03:23.894104 containerd[2702]: time="2025-05-08T00:03:23.894091259Z" level=info msg="TearDown network for sandbox \"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\" successfully" May 8 00:03:23.894134 containerd[2702]: time="2025-05-08T00:03:23.894103420Z" level=info msg="StopPodSandbox for 
\"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\" returns successfully" May 8 00:03:23.894346 containerd[2702]: time="2025-05-08T00:03:23.894326628Z" level=info msg="StopPodSandbox for \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\"" May 8 00:03:23.894415 containerd[2702]: time="2025-05-08T00:03:23.894403951Z" level=info msg="TearDown network for sandbox \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\" successfully" May 8 00:03:23.894437 containerd[2702]: time="2025-05-08T00:03:23.894414951Z" level=info msg="StopPodSandbox for \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\" returns successfully" May 8 00:03:23.894751 containerd[2702]: time="2025-05-08T00:03:23.894640799Z" level=info msg="StopPodSandbox for \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\"" May 8 00:03:23.894817 containerd[2702]: time="2025-05-08T00:03:23.894797124Z" level=info msg="TearDown network for sandbox \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\" successfully" May 8 00:03:23.894841 containerd[2702]: time="2025-05-08T00:03:23.894817445Z" level=info msg="StopPodSandbox for \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\" returns successfully" May 8 00:03:23.895168 containerd[2702]: time="2025-05-08T00:03:23.895145497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hf55x,Uid:babaf8df-3800-4281-aaf6-109330f3cc79,Namespace:kube-system,Attempt:3,}" May 8 00:03:23.895826 systemd[1]: run-netns-cni\x2dcf82c8c6\x2d8569\x2db473\x2dd14a\x2d963aeaf42511.mount: Deactivated successfully. 
May 8 00:03:23.995994 systemd-networkd[2602]: cali7cac28f71ba: Link UP May 8 00:03:23.996405 systemd-networkd[2602]: cali7cac28f71ba: Gained carrier May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.917 [INFO][6708] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.928 [INFO][6708] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--1f162da554-k8s-calico--kube--controllers--76f6cb6bb--rlfnw-eth0 calico-kube-controllers-76f6cb6bb- calico-system 479ec137-6439-484c-93c9-297d6865167f 667 0 2025-05-08 00:03:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76f6cb6bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4230.1.1-n-1f162da554 calico-kube-controllers-76f6cb6bb-rlfnw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7cac28f71ba [] []}} ContainerID="21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" Namespace="calico-system" Pod="calico-kube-controllers-76f6cb6bb-rlfnw" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--kube--controllers--76f6cb6bb--rlfnw-" May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.928 [INFO][6708] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" Namespace="calico-system" Pod="calico-kube-controllers-76f6cb6bb-rlfnw" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--kube--controllers--76f6cb6bb--rlfnw-eth0" May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.962 [INFO][6836] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" 
HandleID="k8s-pod-network.21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" Workload="ci--4230.1.1--n--1f162da554-k8s-calico--kube--controllers--76f6cb6bb--rlfnw-eth0" May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.972 [INFO][6836] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" HandleID="k8s-pod-network.21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" Workload="ci--4230.1.1--n--1f162da554-k8s-calico--kube--controllers--76f6cb6bb--rlfnw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000906d50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230.1.1-n-1f162da554", "pod":"calico-kube-controllers-76f6cb6bb-rlfnw", "timestamp":"2025-05-08 00:03:23.962746896 +0000 UTC"}, Hostname:"ci-4230.1.1-n-1f162da554", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.972 [INFO][6836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.972 [INFO][6836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.972 [INFO][6836] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-1f162da554' May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.973 [INFO][6836] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.976 [INFO][6836] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.979 [INFO][6836] ipam/ipam.go 489: Trying affinity for 192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.980 [INFO][6836] ipam/ipam.go 155: Attempting to load block cidr=192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.982 [INFO][6836] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.982 [INFO][6836] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.78.128/26 handle="k8s-pod-network.21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.983 [INFO][6836] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.985 [INFO][6836] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.78.128/26 handle="k8s-pod-network.21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.989 [INFO][6836] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.78.129/26] block=192.168.78.128/26 handle="k8s-pod-network.21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.989 [INFO][6836] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.78.129/26] handle="k8s-pod-network.21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.989 [INFO][6836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:03:24.002640 containerd[2702]: 2025-05-08 00:03:23.989 [INFO][6836] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.129/26] IPv6=[] ContainerID="21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" HandleID="k8s-pod-network.21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" Workload="ci--4230.1.1--n--1f162da554-k8s-calico--kube--controllers--76f6cb6bb--rlfnw-eth0" May 8 00:03:24.003138 containerd[2702]: 2025-05-08 00:03:23.990 [INFO][6708] cni-plugin/k8s.go 386: Populated endpoint ContainerID="21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" Namespace="calico-system" Pod="calico-kube-controllers-76f6cb6bb-rlfnw" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--kube--controllers--76f6cb6bb--rlfnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--1f162da554-k8s-calico--kube--controllers--76f6cb6bb--rlfnw-eth0", GenerateName:"calico-kube-controllers-76f6cb6bb-", Namespace:"calico-system", SelfLink:"", UID:"479ec137-6439-484c-93c9-297d6865167f", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"76f6cb6bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-1f162da554", ContainerID:"", Pod:"calico-kube-controllers-76f6cb6bb-rlfnw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.78.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7cac28f71ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:03:24.003138 containerd[2702]: 2025-05-08 00:03:23.991 [INFO][6708] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.78.129/32] ContainerID="21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" Namespace="calico-system" Pod="calico-kube-controllers-76f6cb6bb-rlfnw" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--kube--controllers--76f6cb6bb--rlfnw-eth0" May 8 00:03:24.003138 containerd[2702]: 2025-05-08 00:03:23.991 [INFO][6708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cac28f71ba ContainerID="21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" Namespace="calico-system" Pod="calico-kube-controllers-76f6cb6bb-rlfnw" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--kube--controllers--76f6cb6bb--rlfnw-eth0" May 8 00:03:24.003138 containerd[2702]: 2025-05-08 00:03:23.996 [INFO][6708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" Namespace="calico-system" Pod="calico-kube-controllers-76f6cb6bb-rlfnw" 
WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--kube--controllers--76f6cb6bb--rlfnw-eth0" May 8 00:03:24.003138 containerd[2702]: 2025-05-08 00:03:23.996 [INFO][6708] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" Namespace="calico-system" Pod="calico-kube-controllers-76f6cb6bb-rlfnw" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--kube--controllers--76f6cb6bb--rlfnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--1f162da554-k8s-calico--kube--controllers--76f6cb6bb--rlfnw-eth0", GenerateName:"calico-kube-controllers-76f6cb6bb-", Namespace:"calico-system", SelfLink:"", UID:"479ec137-6439-484c-93c9-297d6865167f", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76f6cb6bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-1f162da554", ContainerID:"21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae", Pod:"calico-kube-controllers-76f6cb6bb-rlfnw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.78.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7cac28f71ba", 
MAC:"c6:f6:94:2c:dd:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:03:24.003138 containerd[2702]: 2025-05-08 00:03:24.001 [INFO][6708] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae" Namespace="calico-system" Pod="calico-kube-controllers-76f6cb6bb-rlfnw" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--kube--controllers--76f6cb6bb--rlfnw-eth0" May 8 00:03:24.010502 systemd-networkd[2602]: cali1138fc67294: Link UP May 8 00:03:24.010633 systemd-networkd[2602]: cali1138fc67294: Gained carrier May 8 00:03:24.016480 containerd[2702]: time="2025-05-08T00:03:24.016419018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:03:24.016480 containerd[2702]: time="2025-05-08T00:03:24.016472500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:03:24.016534 containerd[2702]: time="2025-05-08T00:03:24.016482700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:24.016594 containerd[2702]: time="2025-05-08T00:03:24.016556903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:23.914 [INFO][6674] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:23.928 [INFO][6674] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--zzzwl-eth0 calico-apiserver-589b849bb- calico-apiserver 805c0cbf-349b-444e-8d83-d615f09bd8e1 669 0 2025-05-08 00:03:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:589b849bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230.1.1-n-1f162da554 calico-apiserver-589b849bb-zzzwl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1138fc67294 [] []}} ContainerID="729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" Namespace="calico-apiserver" Pod="calico-apiserver-589b849bb-zzzwl" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--zzzwl-" May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:23.928 [INFO][6674] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" Namespace="calico-apiserver" Pod="calico-apiserver-589b849bb-zzzwl" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--zzzwl-eth0" May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:23.962 [INFO][6827] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" HandleID="k8s-pod-network.729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" 
Workload="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--zzzwl-eth0" May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:23.972 [INFO][6827] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" HandleID="k8s-pod-network.729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" Workload="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--zzzwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005260e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230.1.1-n-1f162da554", "pod":"calico-apiserver-589b849bb-zzzwl", "timestamp":"2025-05-08 00:03:23.962742176 +0000 UTC"}, Hostname:"ci-4230.1.1-n-1f162da554", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:23.972 [INFO][6827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:23.989 [INFO][6827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:23.989 [INFO][6827] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-1f162da554' May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:23.990 [INFO][6827] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:23.993 [INFO][6827] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:23.997 [INFO][6827] ipam/ipam.go 489: Trying affinity for 192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:23.998 [INFO][6827] ipam/ipam.go 155: Attempting to load block cidr=192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:24.000 [INFO][6827] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:24.000 [INFO][6827] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.78.128/26 handle="k8s-pod-network.729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:24.001 [INFO][6827] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:24.003 [INFO][6827] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.78.128/26 handle="k8s-pod-network.729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:24.006 [INFO][6827] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.78.130/26] block=192.168.78.128/26 handle="k8s-pod-network.729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:24.006 [INFO][6827] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.78.130/26] handle="k8s-pod-network.729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:24.007 [INFO][6827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:03:24.017560 containerd[2702]: 2025-05-08 00:03:24.007 [INFO][6827] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.130/26] IPv6=[] ContainerID="729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" HandleID="k8s-pod-network.729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" Workload="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--zzzwl-eth0" May 8 00:03:24.017947 containerd[2702]: 2025-05-08 00:03:24.008 [INFO][6674] cni-plugin/k8s.go 386: Populated endpoint ContainerID="729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" Namespace="calico-apiserver" Pod="calico-apiserver-589b849bb-zzzwl" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--zzzwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--zzzwl-eth0", GenerateName:"calico-apiserver-589b849bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"805c0cbf-349b-444e-8d83-d615f09bd8e1", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"589b849bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-1f162da554", ContainerID:"", Pod:"calico-apiserver-589b849bb-zzzwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1138fc67294", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:03:24.017947 containerd[2702]: 2025-05-08 00:03:24.009 [INFO][6674] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.78.130/32] ContainerID="729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" Namespace="calico-apiserver" Pod="calico-apiserver-589b849bb-zzzwl" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--zzzwl-eth0" May 8 00:03:24.017947 containerd[2702]: 2025-05-08 00:03:24.009 [INFO][6674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1138fc67294 ContainerID="729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" Namespace="calico-apiserver" Pod="calico-apiserver-589b849bb-zzzwl" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--zzzwl-eth0" May 8 00:03:24.017947 containerd[2702]: 2025-05-08 00:03:24.010 [INFO][6674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" Namespace="calico-apiserver" Pod="calico-apiserver-589b849bb-zzzwl" 
WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--zzzwl-eth0" May 8 00:03:24.017947 containerd[2702]: 2025-05-08 00:03:24.011 [INFO][6674] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" Namespace="calico-apiserver" Pod="calico-apiserver-589b849bb-zzzwl" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--zzzwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--zzzwl-eth0", GenerateName:"calico-apiserver-589b849bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"805c0cbf-349b-444e-8d83-d615f09bd8e1", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589b849bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-1f162da554", ContainerID:"729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b", Pod:"calico-apiserver-589b849bb-zzzwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1138fc67294", MAC:"0e:86:34:ec:b4:06", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:03:24.017947 containerd[2702]: 2025-05-08 00:03:24.016 [INFO][6674] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b" Namespace="calico-apiserver" Pod="calico-apiserver-589b849bb-zzzwl" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--zzzwl-eth0" May 8 00:03:24.028963 systemd-networkd[2602]: cali15f53522083: Link UP May 8 00:03:24.029131 systemd-networkd[2602]: cali15f53522083: Gained carrier May 8 00:03:24.032551 containerd[2702]: time="2025-05-08T00:03:24.032486564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:03:24.032551 containerd[2702]: time="2025-05-08T00:03:24.032536846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:03:24.032620 containerd[2702]: time="2025-05-08T00:03:24.032548646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:24.032641 containerd[2702]: time="2025-05-08T00:03:24.032622808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:23.911 [INFO][6662] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:23.928 [INFO][6662] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--1f162da554-k8s-csi--node--driver--78qnk-eth0 csi-node-driver- calico-system ecdba017-f46d-4d93-b251-179fdcd1b734 609 0 2025-05-08 00:03:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4230.1.1-n-1f162da554 csi-node-driver-78qnk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali15f53522083 [] []}} ContainerID="a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" Namespace="calico-system" Pod="csi-node-driver-78qnk" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-csi--node--driver--78qnk-" May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:23.928 [INFO][6662] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" Namespace="calico-system" Pod="csi-node-driver-78qnk" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-csi--node--driver--78qnk-eth0" May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:23.962 [INFO][6831] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" HandleID="k8s-pod-network.a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" Workload="ci--4230.1.1--n--1f162da554-k8s-csi--node--driver--78qnk-eth0" May 8 00:03:24.035759 containerd[2702]: 2025-05-08 
00:03:23.972 [INFO][6831] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" HandleID="k8s-pod-network.a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" Workload="ci--4230.1.1--n--1f162da554-k8s-csi--node--driver--78qnk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000372d30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230.1.1-n-1f162da554", "pod":"csi-node-driver-78qnk", "timestamp":"2025-05-08 00:03:23.962742936 +0000 UTC"}, Hostname:"ci-4230.1.1-n-1f162da554", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:23.972 [INFO][6831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:24.007 [INFO][6831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:24.007 [INFO][6831] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-1f162da554' May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:24.008 [INFO][6831] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:24.011 [INFO][6831] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:24.014 [INFO][6831] ipam/ipam.go 489: Trying affinity for 192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:24.015 [INFO][6831] ipam/ipam.go 155: Attempting to load block cidr=192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:24.017 [INFO][6831] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:24.017 [INFO][6831] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.78.128/26 handle="k8s-pod-network.a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:24.018 [INFO][6831] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79 May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:24.021 [INFO][6831] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.78.128/26 handle="k8s-pod-network.a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:24.025 [INFO][6831] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.78.131/26] block=192.168.78.128/26 handle="k8s-pod-network.a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:24.025 [INFO][6831] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.78.131/26] handle="k8s-pod-network.a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:24.025 [INFO][6831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:03:24.035759 containerd[2702]: 2025-05-08 00:03:24.025 [INFO][6831] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.131/26] IPv6=[] ContainerID="a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" HandleID="k8s-pod-network.a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" Workload="ci--4230.1.1--n--1f162da554-k8s-csi--node--driver--78qnk-eth0" May 8 00:03:24.036156 containerd[2702]: 2025-05-08 00:03:24.026 [INFO][6662] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" Namespace="calico-system" Pod="csi-node-driver-78qnk" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-csi--node--driver--78qnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--1f162da554-k8s-csi--node--driver--78qnk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ecdba017-f46d-4d93-b251-179fdcd1b734", ResourceVersion:"609", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-1f162da554", ContainerID:"", Pod:"csi-node-driver-78qnk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.78.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali15f53522083", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:03:24.036156 containerd[2702]: 2025-05-08 00:03:24.026 [INFO][6662] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.78.131/32] ContainerID="a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" Namespace="calico-system" Pod="csi-node-driver-78qnk" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-csi--node--driver--78qnk-eth0" May 8 00:03:24.036156 containerd[2702]: 2025-05-08 00:03:24.027 [INFO][6662] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15f53522083 ContainerID="a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" Namespace="calico-system" Pod="csi-node-driver-78qnk" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-csi--node--driver--78qnk-eth0" May 8 00:03:24.036156 containerd[2702]: 2025-05-08 00:03:24.029 [INFO][6662] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" Namespace="calico-system" Pod="csi-node-driver-78qnk" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-csi--node--driver--78qnk-eth0" May 8 00:03:24.036156 containerd[2702]: 2025-05-08 00:03:24.029 [INFO][6662] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" Namespace="calico-system" Pod="csi-node-driver-78qnk" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-csi--node--driver--78qnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--1f162da554-k8s-csi--node--driver--78qnk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ecdba017-f46d-4d93-b251-179fdcd1b734", ResourceVersion:"609", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-1f162da554", ContainerID:"a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79", Pod:"csi-node-driver-78qnk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.78.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali15f53522083", MAC:"aa:c4:f1:86:02:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:03:24.036156 containerd[2702]: 2025-05-08 00:03:24.034 [INFO][6662] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79" Namespace="calico-system" Pod="csi-node-driver-78qnk" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-csi--node--driver--78qnk-eth0" May 8 00:03:24.047666 systemd-networkd[2602]: calif4109dff42f: Link UP May 8 00:03:24.047924 systemd-networkd[2602]: calif4109dff42f: Gained carrier May 8 00:03:24.049406 containerd[2702]: time="2025-05-08T00:03:24.049315695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:03:24.049406 containerd[2702]: time="2025-05-08T00:03:24.049368817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:03:24.049406 containerd[2702]: time="2025-05-08T00:03:24.049379298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:24.049496 containerd[2702]: time="2025-05-08T00:03:24.049451420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:24.049991 systemd[1]: Started cri-containerd-21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae.scope - libcontainer container 21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae. May 8 00:03:24.052975 systemd[1]: Started cri-containerd-729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b.scope - libcontainer container 729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b. 
May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:23.918 [INFO][6717] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:23.928 [INFO][6717] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--hf55x-eth0 coredns-7db6d8ff4d- kube-system babaf8df-3800-4281-aaf6-109330f3cc79 662 0 2025-05-08 00:03:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230.1.1-n-1f162da554 coredns-7db6d8ff4d-hf55x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif4109dff42f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hf55x" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--hf55x-" May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:23.928 [INFO][6717] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hf55x" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--hf55x-eth0" May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:23.962 [INFO][6838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" HandleID="k8s-pod-network.090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" Workload="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--hf55x-eth0" May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:23.976 [INFO][6838] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" HandleID="k8s-pod-network.090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" Workload="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--hf55x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400071a160), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230.1.1-n-1f162da554", "pod":"coredns-7db6d8ff4d-hf55x", "timestamp":"2025-05-08 00:03:23.962743696 +0000 UTC"}, Hostname:"ci-4230.1.1-n-1f162da554", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:23.976 [INFO][6838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:24.025 [INFO][6838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:24.025 [INFO][6838] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-1f162da554' May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:24.026 [INFO][6838] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:24.030 [INFO][6838] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:24.034 [INFO][6838] ipam/ipam.go 489: Trying affinity for 192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:24.035 [INFO][6838] ipam/ipam.go 155: Attempting to load block cidr=192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:24.037 [INFO][6838] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:24.037 [INFO][6838] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.78.128/26 handle="k8s-pod-network.090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:24.038 [INFO][6838] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340 May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:24.040 [INFO][6838] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.78.128/26 handle="k8s-pod-network.090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:24.044 [INFO][6838] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.78.132/26] block=192.168.78.128/26 handle="k8s-pod-network.090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:24.044 [INFO][6838] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.78.132/26] handle="k8s-pod-network.090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:24.044 [INFO][6838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:03:24.054458 containerd[2702]: 2025-05-08 00:03:24.045 [INFO][6838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.132/26] IPv6=[] ContainerID="090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" HandleID="k8s-pod-network.090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" Workload="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--hf55x-eth0" May 8 00:03:24.054868 containerd[2702]: 2025-05-08 00:03:24.046 [INFO][6717] cni-plugin/k8s.go 386: Populated endpoint ContainerID="090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hf55x" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--hf55x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--hf55x-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"babaf8df-3800-4281-aaf6-109330f3cc79", ResourceVersion:"662", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-1f162da554", ContainerID:"", Pod:"coredns-7db6d8ff4d-hf55x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4109dff42f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:03:24.054868 containerd[2702]: 2025-05-08 00:03:24.046 [INFO][6717] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.78.132/32] ContainerID="090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hf55x" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--hf55x-eth0" May 8 00:03:24.054868 containerd[2702]: 2025-05-08 00:03:24.046 [INFO][6717] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4109dff42f ContainerID="090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hf55x" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--hf55x-eth0" May 8 00:03:24.054868 containerd[2702]: 2025-05-08 00:03:24.047 [INFO][6717] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hf55x" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--hf55x-eth0" May 8 00:03:24.054868 containerd[2702]: 2025-05-08 00:03:24.048 [INFO][6717] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hf55x" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--hf55x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--hf55x-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"babaf8df-3800-4281-aaf6-109330f3cc79", ResourceVersion:"662", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-1f162da554", ContainerID:"090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340", Pod:"coredns-7db6d8ff4d-hf55x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4109dff42f", MAC:"da:c7:6a:4c:f2:77", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:03:24.054868 containerd[2702]: 2025-05-08 00:03:24.053 [INFO][6717] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hf55x" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--hf55x-eth0" May 8 00:03:24.058238 systemd[1]: Started cri-containerd-a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79.scope - libcontainer container a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79. May 8 00:03:24.068845 systemd-networkd[2602]: calicc0eca7f89f: Link UP May 8 00:03:24.069331 containerd[2702]: time="2025-05-08T00:03:24.069202371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:03:24.069331 containerd[2702]: time="2025-05-08T00:03:24.069260333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:03:24.069331 containerd[2702]: time="2025-05-08T00:03:24.069275293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:24.069407 systemd-networkd[2602]: calicc0eca7f89f: Gained carrier May 8 00:03:24.069436 containerd[2702]: time="2025-05-08T00:03:24.069346496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:24.075184 containerd[2702]: time="2025-05-08T00:03:24.075131812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78qnk,Uid:ecdba017-f46d-4d93-b251-179fdcd1b734,Namespace:calico-system,Attempt:3,} returns sandbox id \"a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79\"" May 8 00:03:24.075357 containerd[2702]: time="2025-05-08T00:03:24.075342139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76f6cb6bb-rlfnw,Uid:479ec137-6439-484c-93c9-297d6865167f,Namespace:calico-system,Attempt:3,} returns sandbox id \"21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae\"" May 8 00:03:24.076228 containerd[2702]: time="2025-05-08T00:03:24.076205849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b849bb-zzzwl,Uid:805c0cbf-349b-444e-8d83-d615f09bd8e1,Namespace:calico-apiserver,Attempt:3,} returns sandbox id \"729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b\"" May 8 00:03:24.077043 containerd[2702]: time="2025-05-08T00:03:24.077026036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:23.915 [INFO][6690] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:23.929 [INFO][6690] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--l2qkb-eth0 coredns-7db6d8ff4d- kube-system 026d55fd-56c2-4710-8822-2dacd50ff239 665 0 2025-05-08 00:03:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230.1.1-n-1f162da554 coredns-7db6d8ff4d-l2qkb eth0 coredns [] [] [kns.kube-system 
ksa.kube-system.coredns] calicc0eca7f89f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l2qkb" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--l2qkb-" May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:23.929 [INFO][6690] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l2qkb" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--l2qkb-eth0" May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:23.962 [INFO][6834] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" HandleID="k8s-pod-network.ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" Workload="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--l2qkb-eth0" May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:23.976 [INFO][6834] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" HandleID="k8s-pod-network.ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" Workload="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--l2qkb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000460c70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230.1.1-n-1f162da554", "pod":"coredns-7db6d8ff4d-l2qkb", "timestamp":"2025-05-08 00:03:23.962745656 +0000 UTC"}, Hostname:"ci-4230.1.1-n-1f162da554", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:23.976 [INFO][6834] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:24.045 [INFO][6834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:24.045 [INFO][6834] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-1f162da554' May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:24.046 [INFO][6834] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:24.049 [INFO][6834] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:24.052 [INFO][6834] ipam/ipam.go 489: Trying affinity for 192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:24.054 [INFO][6834] ipam/ipam.go 155: Attempting to load block cidr=192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:24.056 [INFO][6834] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:24.057 [INFO][6834] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.78.128/26 handle="k8s-pod-network.ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:24.058 [INFO][6834] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260 May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:24.060 [INFO][6834] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.78.128/26 
handle="k8s-pod-network.ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:24.066 [INFO][6834] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.78.133/26] block=192.168.78.128/26 handle="k8s-pod-network.ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:24.066 [INFO][6834] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.78.133/26] handle="k8s-pod-network.ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:24.066 [INFO][6834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:03:24.077407 containerd[2702]: 2025-05-08 00:03:24.066 [INFO][6834] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.133/26] IPv6=[] ContainerID="ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" HandleID="k8s-pod-network.ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" Workload="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--l2qkb-eth0" May 8 00:03:24.077799 containerd[2702]: 2025-05-08 00:03:24.067 [INFO][6690] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l2qkb" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--l2qkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--l2qkb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"026d55fd-56c2-4710-8822-2dacd50ff239", ResourceVersion:"665", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 3, 3, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-1f162da554", ContainerID:"", Pod:"coredns-7db6d8ff4d-l2qkb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc0eca7f89f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:03:24.077799 containerd[2702]: 2025-05-08 00:03:24.067 [INFO][6690] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.78.133/32] ContainerID="ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l2qkb" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--l2qkb-eth0" May 8 00:03:24.077799 containerd[2702]: 2025-05-08 00:03:24.067 [INFO][6690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc0eca7f89f ContainerID="ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l2qkb" 
WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--l2qkb-eth0" May 8 00:03:24.077799 containerd[2702]: 2025-05-08 00:03:24.069 [INFO][6690] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l2qkb" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--l2qkb-eth0" May 8 00:03:24.077799 containerd[2702]: 2025-05-08 00:03:24.070 [INFO][6690] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l2qkb" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--l2qkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--l2qkb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"026d55fd-56c2-4710-8822-2dacd50ff239", ResourceVersion:"665", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-1f162da554", ContainerID:"ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260", Pod:"coredns-7db6d8ff4d-l2qkb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.133/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc0eca7f89f", MAC:"7a:90:2b:51:a2:ef", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:03:24.077799 containerd[2702]: 2025-05-08 00:03:24.076 [INFO][6690] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l2qkb" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-coredns--7db6d8ff4d--l2qkb-eth0" May 8 00:03:24.079264 systemd[1]: Started cri-containerd-090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340.scope - libcontainer container 090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340. May 8 00:03:24.099014 containerd[2702]: time="2025-05-08T00:03:24.098910020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:03:24.099060 containerd[2702]: time="2025-05-08T00:03:24.099012823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:03:24.099060 containerd[2702]: time="2025-05-08T00:03:24.099023784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:24.099112 containerd[2702]: time="2025-05-08T00:03:24.099096466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:24.100044 systemd-networkd[2602]: cali18dc97d9a3b: Link UP May 8 00:03:24.100242 systemd-networkd[2602]: cali18dc97d9a3b: Gained carrier May 8 00:03:24.103341 containerd[2702]: time="2025-05-08T00:03:24.103315409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hf55x,Uid:babaf8df-3800-4281-aaf6-109330f3cc79,Namespace:kube-system,Attempt:3,} returns sandbox id \"090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340\"" May 8 00:03:24.105428 containerd[2702]: time="2025-05-08T00:03:24.105408080Z" level=info msg="CreateContainer within sandbox \"090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:23.915 [INFO][6679] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:23.928 [INFO][6679] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--s6cgn-eth0 calico-apiserver-589b849bb- calico-apiserver 70fa2288-52a8-48ed-9660-5941586aeb55 666 0 2025-05-08 00:03:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:589b849bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230.1.1-n-1f162da554 calico-apiserver-589b849bb-s6cgn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali18dc97d9a3b [] []}} ContainerID="5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" Namespace="calico-apiserver" Pod="calico-apiserver-589b849bb-s6cgn" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--s6cgn-" May 8 00:03:24.107010 
containerd[2702]: 2025-05-08 00:03:23.928 [INFO][6679] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" Namespace="calico-apiserver" Pod="calico-apiserver-589b849bb-s6cgn" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--s6cgn-eth0" May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:23.962 [INFO][6829] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" HandleID="k8s-pod-network.5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" Workload="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--s6cgn-eth0" May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:23.976 [INFO][6829] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" HandleID="k8s-pod-network.5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" Workload="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--s6cgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000313120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230.1.1-n-1f162da554", "pod":"calico-apiserver-589b849bb-s6cgn", "timestamp":"2025-05-08 00:03:23.962748336 +0000 UTC"}, Hostname:"ci-4230.1.1-n-1f162da554", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:23.976 [INFO][6829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:24.066 [INFO][6829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:24.066 [INFO][6829] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-1f162da554' May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:24.067 [INFO][6829] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:24.071 [INFO][6829] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:24.075 [INFO][6829] ipam/ipam.go 489: Trying affinity for 192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:24.076 [INFO][6829] ipam/ipam.go 155: Attempting to load block cidr=192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:24.078 [INFO][6829] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.78.128/26 host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:24.078 [INFO][6829] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.78.128/26 handle="k8s-pod-network.5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:24.090 [INFO][6829] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4 May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:24.093 [INFO][6829] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.78.128/26 handle="k8s-pod-network.5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:24.097 [INFO][6829] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.78.134/26] block=192.168.78.128/26 handle="k8s-pod-network.5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:24.097 [INFO][6829] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.78.134/26] handle="k8s-pod-network.5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" host="ci-4230.1.1-n-1f162da554" May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:24.097 [INFO][6829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:03:24.107010 containerd[2702]: 2025-05-08 00:03:24.097 [INFO][6829] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.134/26] IPv6=[] ContainerID="5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" HandleID="k8s-pod-network.5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" Workload="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--s6cgn-eth0" May 8 00:03:24.107409 containerd[2702]: 2025-05-08 00:03:24.098 [INFO][6679] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" Namespace="calico-apiserver" Pod="calico-apiserver-589b849bb-s6cgn" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--s6cgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--s6cgn-eth0", GenerateName:"calico-apiserver-589b849bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"70fa2288-52a8-48ed-9660-5941586aeb55", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"589b849bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-1f162da554", ContainerID:"", Pod:"calico-apiserver-589b849bb-s6cgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18dc97d9a3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:03:24.107409 containerd[2702]: 2025-05-08 00:03:24.098 [INFO][6679] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.78.134/32] ContainerID="5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" Namespace="calico-apiserver" Pod="calico-apiserver-589b849bb-s6cgn" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--s6cgn-eth0" May 8 00:03:24.107409 containerd[2702]: 2025-05-08 00:03:24.099 [INFO][6679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18dc97d9a3b ContainerID="5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" Namespace="calico-apiserver" Pod="calico-apiserver-589b849bb-s6cgn" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--s6cgn-eth0" May 8 00:03:24.107409 containerd[2702]: 2025-05-08 00:03:24.100 [INFO][6679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" Namespace="calico-apiserver" Pod="calico-apiserver-589b849bb-s6cgn" 
WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--s6cgn-eth0" May 8 00:03:24.107409 containerd[2702]: 2025-05-08 00:03:24.100 [INFO][6679] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" Namespace="calico-apiserver" Pod="calico-apiserver-589b849bb-s6cgn" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--s6cgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--s6cgn-eth0", GenerateName:"calico-apiserver-589b849bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"70fa2288-52a8-48ed-9660-5941586aeb55", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589b849bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-1f162da554", ContainerID:"5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4", Pod:"calico-apiserver-589b849bb-s6cgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18dc97d9a3b", MAC:"86:4a:c9:a8:77:90", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:03:24.107409 containerd[2702]: 2025-05-08 00:03:24.105 [INFO][6679] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4" Namespace="calico-apiserver" Pod="calico-apiserver-589b849bb-s6cgn" WorkloadEndpoint="ci--4230.1.1--n--1f162da554-k8s-calico--apiserver--589b849bb--s6cgn-eth0" May 8 00:03:24.111715 containerd[2702]: time="2025-05-08T00:03:24.111685134Z" level=info msg="CreateContainer within sandbox \"090012cbd3556fd730825de21d3e63d1867a00eeb32b7244fd713fc66d774340\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"37d74daa1e10de58226f8687d46f5f8d675307c82d84909f83b86ab7b729662b\"" May 8 00:03:24.112038 containerd[2702]: time="2025-05-08T00:03:24.112017545Z" level=info msg="StartContainer for \"37d74daa1e10de58226f8687d46f5f8d675307c82d84909f83b86ab7b729662b\"" May 8 00:03:24.121661 containerd[2702]: time="2025-05-08T00:03:24.121462466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:03:24.121661 containerd[2702]: time="2025-05-08T00:03:24.121524348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:03:24.121661 containerd[2702]: time="2025-05-08T00:03:24.121535948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:03:24.121661 containerd[2702]: time="2025-05-08T00:03:24.121604470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:03:24.125991 systemd[1]: Started cri-containerd-ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260.scope - libcontainer container ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260.
May 8 00:03:24.128241 systemd[1]: Started cri-containerd-37d74daa1e10de58226f8687d46f5f8d675307c82d84909f83b86ab7b729662b.scope - libcontainer container 37d74daa1e10de58226f8687d46f5f8d675307c82d84909f83b86ab7b729662b.
May 8 00:03:24.131783 systemd[1]: Started cri-containerd-5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4.scope - libcontainer container 5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4.
May 8 00:03:24.148798 containerd[2702]: time="2025-05-08T00:03:24.148761833Z" level=info msg="StartContainer for \"37d74daa1e10de58226f8687d46f5f8d675307c82d84909f83b86ab7b729662b\" returns successfully"
May 8 00:03:24.151549 containerd[2702]: time="2025-05-08T00:03:24.151523767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l2qkb,Uid:026d55fd-56c2-4710-8822-2dacd50ff239,Namespace:kube-system,Attempt:3,} returns sandbox id \"ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260\""
May 8 00:03:24.153491 containerd[2702]: time="2025-05-08T00:03:24.153472313Z" level=info msg="CreateContainer within sandbox \"ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 8 00:03:24.157270 containerd[2702]: time="2025-05-08T00:03:24.157249241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b849bb-s6cgn,Uid:70fa2288-52a8-48ed-9660-5941586aeb55,Namespace:calico-apiserver,Attempt:3,} returns sandbox id \"5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4\""
May 8 00:03:24.159928 containerd[2702]: time="2025-05-08T00:03:24.159904851Z" level=info msg="CreateContainer within sandbox \"ec8a00d1b11b7ee9a24754e01e0ec9be5cdf3b08b55b2e146adab121effc2260\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"35e87ef5dc2f5658303612ac8d21c4085173692545febcabbc1bf5416d30caaf\""
May 8 00:03:24.160217 containerd[2702]: time="2025-05-08T00:03:24.160196221Z" level=info msg="StartContainer for \"35e87ef5dc2f5658303612ac8d21c4085173692545febcabbc1bf5416d30caaf\""
May 8 00:03:24.194066 systemd[1]: Started cri-containerd-35e87ef5dc2f5658303612ac8d21c4085173692545febcabbc1bf5416d30caaf.scope - libcontainer container 35e87ef5dc2f5658303612ac8d21c4085173692545febcabbc1bf5416d30caaf.
May 8 00:03:24.203815 kernel: bpftool[7508]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
May 8 00:03:24.224264 containerd[2702]: time="2025-05-08T00:03:24.224226196Z" level=info msg="StartContainer for \"35e87ef5dc2f5658303612ac8d21c4085173692545febcabbc1bf5416d30caaf\" returns successfully"
May 8 00:03:24.324820 systemd[1]: run-netns-cni\x2d073e762f\x2df28f\x2d280d\x2d9e80\x2de75d028b6bbc.mount: Deactivated successfully.
May 8 00:03:24.366585 systemd-networkd[2602]: vxlan.calico: Link UP
May 8 00:03:24.366589 systemd-networkd[2602]: vxlan.calico: Gained carrier
May 8 00:03:24.435226 containerd[2702]: time="2025-05-08T00:03:24.435137679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:24.435323 containerd[2702]: time="2025-05-08T00:03:24.435149519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935"
May 8 00:03:24.435827 containerd[2702]: time="2025-05-08T00:03:24.435794701Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:24.437794 containerd[2702]: time="2025-05-08T00:03:24.437767488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:24.438464 containerd[2702]: time="2025-05-08T00:03:24.438442271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 361.387793ms"
May 8 00:03:24.438499 containerd[2702]: time="2025-05-08T00:03:24.438469392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\""
May 8 00:03:24.439428 containerd[2702]: time="2025-05-08T00:03:24.439407064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\""
May 8 00:03:24.440355 containerd[2702]: time="2025-05-08T00:03:24.440333815Z" level=info msg="CreateContainer within sandbox \"a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 8 00:03:24.447122 containerd[2702]: time="2025-05-08T00:03:24.447099125Z" level=info msg="CreateContainer within sandbox \"a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8f7a72550eb69d7a954744bc27c0fa838bd46463285cfb57cc2b761aeec0fbd4\""
May 8 00:03:24.447399 containerd[2702]: time="2025-05-08T00:03:24.447385294Z" level=info msg="StartContainer for \"8f7a72550eb69d7a954744bc27c0fa838bd46463285cfb57cc2b761aeec0fbd4\""
May 8 00:03:24.474933 systemd[1]: Started cri-containerd-8f7a72550eb69d7a954744bc27c0fa838bd46463285cfb57cc2b761aeec0fbd4.scope - libcontainer container 8f7a72550eb69d7a954744bc27c0fa838bd46463285cfb57cc2b761aeec0fbd4.
May 8 00:03:24.497056 containerd[2702]: time="2025-05-08T00:03:24.497027820Z" level=info msg="StartContainer for \"8f7a72550eb69d7a954744bc27c0fa838bd46463285cfb57cc2b761aeec0fbd4\" returns successfully"
May 8 00:03:24.908867 kubelet[4417]: I0508 00:03:24.908813 4417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-l2qkb" podStartSLOduration=21.908789885 podStartE2EDuration="21.908789885s" podCreationTimestamp="2025-05-08 00:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:03:24.90835219 +0000 UTC m=+36.162949111" watchObservedRunningTime="2025-05-08 00:03:24.908789885 +0000 UTC m=+36.163386806"
May 8 00:03:24.925006 kubelet[4417]: I0508 00:03:24.924955 4417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hf55x" podStartSLOduration=21.924940353 podStartE2EDuration="21.924940353s" podCreationTimestamp="2025-05-08 00:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:03:24.92485483 +0000 UTC m=+36.179451751" watchObservedRunningTime="2025-05-08 00:03:24.924940353 +0000 UTC m=+36.179537274"
May 8 00:03:25.042925 containerd[2702]: time="2025-05-08T00:03:25.042883380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:25.043189 containerd[2702]: time="2025-05-08T00:03:25.042928741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116"
May 8 00:03:25.043644 containerd[2702]: time="2025-05-08T00:03:25.043620804Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:25.045290 containerd[2702]: time="2025-05-08T00:03:25.045266457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:25.046156 containerd[2702]: time="2025-05-08T00:03:25.046141046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 606.706982ms"
May 8 00:03:25.046178 containerd[2702]: time="2025-05-08T00:03:25.046163127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\""
May 8 00:03:25.046931 containerd[2702]: time="2025-05-08T00:03:25.046909871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 8 00:03:25.051743 containerd[2702]: time="2025-05-08T00:03:25.051721507Z" level=info msg="CreateContainer within sandbox \"21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 8 00:03:25.056651 containerd[2702]: time="2025-05-08T00:03:25.056617547Z" level=info msg="CreateContainer within sandbox \"21c3e8e728ec1316b5e650f53b1bf3dfaebffe0a31d7e66fab3d23bb085b71ae\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"38c9c3505362c5660b05416d0b0d4dc4fc0cf1d9a1a8db489d0d16ab9a47cf67\""
May 8 00:03:25.056936 containerd[2702]: time="2025-05-08T00:03:25.056905876Z" level=info msg="StartContainer for \"38c9c3505362c5660b05416d0b0d4dc4fc0cf1d9a1a8db489d0d16ab9a47cf67\""
May 8 00:03:25.082922 systemd[1]: Started cri-containerd-38c9c3505362c5660b05416d0b0d4dc4fc0cf1d9a1a8db489d0d16ab9a47cf67.scope - libcontainer container 38c9c3505362c5660b05416d0b0d4dc4fc0cf1d9a1a8db489d0d16ab9a47cf67.
May 8 00:03:25.108068 containerd[2702]: time="2025-05-08T00:03:25.108030579Z" level=info msg="StartContainer for \"38c9c3505362c5660b05416d0b0d4dc4fc0cf1d9a1a8db489d0d16ab9a47cf67\" returns successfully"
May 8 00:03:25.436002 systemd-networkd[2602]: cali7cac28f71ba: Gained IPv6LL
May 8 00:03:25.436257 systemd-networkd[2602]: calif4109dff42f: Gained IPv6LL
May 8 00:03:25.499885 systemd-networkd[2602]: cali15f53522083: Gained IPv6LL
May 8 00:03:25.696185 containerd[2702]: time="2025-05-08T00:03:25.696086230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603"
May 8 00:03:25.696185 containerd[2702]: time="2025-05-08T00:03:25.696084590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:25.696951 containerd[2702]: time="2025-05-08T00:03:25.696927417Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:25.698836 containerd[2702]: time="2025-05-08T00:03:25.698789958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:25.699599 containerd[2702]: time="2025-05-08T00:03:25.699541102Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 652.59715ms"
May 8 00:03:25.699599 containerd[2702]: time="2025-05-08T00:03:25.699566183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\""
May 8 00:03:25.700438 containerd[2702]: time="2025-05-08T00:03:25.700419091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 8 00:03:25.701526 containerd[2702]: time="2025-05-08T00:03:25.701506446Z" level=info msg="CreateContainer within sandbox \"729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 8 00:03:25.706475 containerd[2702]: time="2025-05-08T00:03:25.706448407Z" level=info msg="CreateContainer within sandbox \"729d9501a3e6a37496d3f1a6fabab2125ad7bb4dda125027b22bb59ba6334b8b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1b82bdaaeb7ffa70dd3931aa566e3068be726ad1bed90866c8de03a90ba7fb25\""
May 8 00:03:25.706769 containerd[2702]: time="2025-05-08T00:03:25.706747457Z" level=info msg="StartContainer for \"1b82bdaaeb7ffa70dd3931aa566e3068be726ad1bed90866c8de03a90ba7fb25\""
May 8 00:03:25.731996 systemd[1]: Started cri-containerd-1b82bdaaeb7ffa70dd3931aa566e3068be726ad1bed90866c8de03a90ba7fb25.scope - libcontainer container 1b82bdaaeb7ffa70dd3931aa566e3068be726ad1bed90866c8de03a90ba7fb25.
May 8 00:03:25.756930 systemd-networkd[2602]: cali1138fc67294: Gained IPv6LL
May 8 00:03:25.757098 containerd[2702]: time="2025-05-08T00:03:25.757054573Z" level=info msg="StartContainer for \"1b82bdaaeb7ffa70dd3931aa566e3068be726ad1bed90866c8de03a90ba7fb25\" returns successfully"
May 8 00:03:25.780505 containerd[2702]: time="2025-05-08T00:03:25.780436534Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:25.780505 containerd[2702]: time="2025-05-08T00:03:25.780470655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
May 8 00:03:25.783065 containerd[2702]: time="2025-05-08T00:03:25.783041059Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 82.594087ms"
May 8 00:03:25.783065 containerd[2702]: time="2025-05-08T00:03:25.783066780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\""
May 8 00:03:25.783747 containerd[2702]: time="2025-05-08T00:03:25.783724161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 8 00:03:25.785198 containerd[2702]: time="2025-05-08T00:03:25.785179528Z" level=info msg="CreateContainer within sandbox \"5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 8 00:03:25.789960 containerd[2702]: time="2025-05-08T00:03:25.789931083Z" level=info msg="CreateContainer within sandbox \"5480e0d7c13b1ade1a3a7ba63e9ac1d14b3f41dee745503caacf4a574463c9a4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f71863aa596cff4d2a5d85499c2b03779ee566ae53ae39db7df5fae06cbf86ab\""
May 8 00:03:25.790294 containerd[2702]: time="2025-05-08T00:03:25.790270134Z" level=info msg="StartContainer for \"f71863aa596cff4d2a5d85499c2b03779ee566ae53ae39db7df5fae06cbf86ab\""
May 8 00:03:25.826011 systemd[1]: Started cri-containerd-f71863aa596cff4d2a5d85499c2b03779ee566ae53ae39db7df5fae06cbf86ab.scope - libcontainer container f71863aa596cff4d2a5d85499c2b03779ee566ae53ae39db7df5fae06cbf86ab.
May 8 00:03:25.858915 containerd[2702]: time="2025-05-08T00:03:25.858881046Z" level=info msg="StartContainer for \"f71863aa596cff4d2a5d85499c2b03779ee566ae53ae39db7df5fae06cbf86ab\" returns successfully"
May 8 00:03:25.916005 kubelet[4417]: I0508 00:03:25.915956 4417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-589b849bb-s6cgn" podStartSLOduration=7.290493856 podStartE2EDuration="8.915942582s" podCreationTimestamp="2025-05-08 00:03:17 +0000 UTC" firstStartedPulling="2025-05-08 00:03:24.158146671 +0000 UTC m=+35.412743592" lastFinishedPulling="2025-05-08 00:03:25.783595397 +0000 UTC m=+37.038192318" observedRunningTime="2025-05-08 00:03:25.915621972 +0000 UTC m=+37.170218973" watchObservedRunningTime="2025-05-08 00:03:25.915942582 +0000 UTC m=+37.170539503"
May 8 00:03:25.930534 kubelet[4417]: I0508 00:03:25.930496 4417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76f6cb6bb-rlfnw" podStartSLOduration=7.960478456 podStartE2EDuration="8.930481935s" podCreationTimestamp="2025-05-08 00:03:17 +0000 UTC" firstStartedPulling="2025-05-08 00:03:24.076761187 +0000 UTC m=+35.331358068" lastFinishedPulling="2025-05-08 00:03:25.046764626 +0000 UTC m=+36.301361547" observedRunningTime="2025-05-08 00:03:25.930146404 +0000 UTC m=+37.184743325" watchObservedRunningTime="2025-05-08 00:03:25.930481935 +0000 UTC m=+37.185078856"
May 8 00:03:25.937273 kubelet[4417]: I0508 00:03:25.937231 4417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-589b849bb-zzzwl" podStartSLOduration=7.313986146 podStartE2EDuration="8.937219035s" podCreationTimestamp="2025-05-08 00:03:17 +0000 UTC" firstStartedPulling="2025-05-08 00:03:24.077039957 +0000 UTC m=+35.331636878" lastFinishedPulling="2025-05-08 00:03:25.700272886 +0000 UTC m=+36.954869767" observedRunningTime="2025-05-08 00:03:25.937036589 +0000 UTC m=+37.191633550" watchObservedRunningTime="2025-05-08 00:03:25.937219035 +0000 UTC m=+37.191815956"
May 8 00:03:25.948899 systemd-networkd[2602]: cali18dc97d9a3b: Gained IPv6LL
May 8 00:03:26.011910 systemd-networkd[2602]: calicc0eca7f89f: Gained IPv6LL
May 8 00:03:26.182679 containerd[2702]: time="2025-05-08T00:03:26.182641335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:26.183029 containerd[2702]: time="2025-05-08T00:03:26.182703897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299"
May 8 00:03:26.183439 containerd[2702]: time="2025-05-08T00:03:26.183419599Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:26.185202 containerd[2702]: time="2025-05-08T00:03:26.185181934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:26.185900 containerd[2702]: time="2025-05-08T00:03:26.185877076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 402.121714ms"
May 8 00:03:26.185926 containerd[2702]: time="2025-05-08T00:03:26.185906037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\""
May 8 00:03:26.187709 containerd[2702]: time="2025-05-08T00:03:26.187687892Z" level=info msg="CreateContainer within sandbox \"a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 8 00:03:26.193771 containerd[2702]: time="2025-05-08T00:03:26.193744721Z" level=info msg="CreateContainer within sandbox \"a529131b31933030926939e8ab7ab175ab6cba48aae89d7109f9474f4c65ee79\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d87c0099953b6e4c2c65b213754ba4a3b1ff5282846a2fa06e96b0ce0831a26c\""
May 8 00:03:26.194107 containerd[2702]: time="2025-05-08T00:03:26.194085892Z" level=info msg="StartContainer for \"d87c0099953b6e4c2c65b213754ba4a3b1ff5282846a2fa06e96b0ce0831a26c\""
May 8 00:03:26.218922 systemd[1]: Started cri-containerd-d87c0099953b6e4c2c65b213754ba4a3b1ff5282846a2fa06e96b0ce0831a26c.scope - libcontainer container d87c0099953b6e4c2c65b213754ba4a3b1ff5282846a2fa06e96b0ce0831a26c.
May 8 00:03:26.239775 containerd[2702]: time="2025-05-08T00:03:26.239741236Z" level=info msg="StartContainer for \"d87c0099953b6e4c2c65b213754ba4a3b1ff5282846a2fa06e96b0ce0831a26c\" returns successfully"
May 8 00:03:26.267909 systemd-networkd[2602]: vxlan.calico: Gained IPv6LL
May 8 00:03:26.865521 kubelet[4417]: I0508 00:03:26.865489 4417 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 8 00:03:26.865521 kubelet[4417]: I0508 00:03:26.865516 4417 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 8 00:03:26.914653 kubelet[4417]: I0508 00:03:26.914637 4417 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:03:26.914717 kubelet[4417]: I0508 00:03:26.914666 4417 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:03:26.923267 kubelet[4417]: I0508 00:03:26.923206 4417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-78qnk" podStartSLOduration=7.813470207 podStartE2EDuration="9.923189034s" podCreationTimestamp="2025-05-08 00:03:17 +0000 UTC" firstStartedPulling="2025-05-08 00:03:24.076747227 +0000 UTC m=+35.331344148" lastFinishedPulling="2025-05-08 00:03:26.186466094 +0000 UTC m=+37.441062975" observedRunningTime="2025-05-08 00:03:26.922792822 +0000 UTC m=+38.177389743" watchObservedRunningTime="2025-05-08 00:03:26.923189034 +0000 UTC m=+38.177785955"
May 8 00:03:36.542850 kubelet[4417]: I0508 00:03:36.542791 4417 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:03:40.908615 systemd[1]: Started sshd@7-145.40.69.49:22-87.240.46.154:37350.service - OpenSSH per-connection server daemon (87.240.46.154:37350).
May 8 00:03:41.732917 kubelet[4417]: I0508 00:03:41.732866 4417 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:03:41.968941 sshd[8254]: Invalid user pgsql from 87.240.46.154 port 37350
May 8 00:03:42.221410 sshd-session[8258]: pam_faillock(sshd:auth): User unknown
May 8 00:03:42.226752 sshd[8254]: Postponed keyboard-interactive for invalid user pgsql from 87.240.46.154 port 37350 ssh2 [preauth]
May 8 00:03:42.463601 sshd-session[8258]: pam_unix(sshd:auth): check pass; user unknown
May 8 00:03:42.463621 sshd-session[8258]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=87.240.46.154
May 8 00:03:42.463991 sshd-session[8258]: pam_faillock(sshd:auth): User unknown
May 8 00:03:44.415936 sshd[8254]: PAM: Permission denied for illegal user pgsql from 87.240.46.154
May 8 00:03:44.416283 sshd[8254]: Failed keyboard-interactive/pam for invalid user pgsql from 87.240.46.154 port 37350 ssh2
May 8 00:03:44.725710 sshd[8254]: Connection closed by invalid user pgsql 87.240.46.154 port 37350 [preauth]
May 8 00:03:44.727708 systemd[1]: sshd@7-145.40.69.49:22-87.240.46.154:37350.service: Deactivated successfully.
May 8 00:03:48.807253 containerd[2702]: time="2025-05-08T00:03:48.807137365Z" level=info msg="StopPodSandbox for \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\""
May 8 00:03:48.807253 containerd[2702]: time="2025-05-08T00:03:48.807246406Z" level=info msg="TearDown network for sandbox \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\" successfully"
May 8 00:03:48.807643 containerd[2702]: time="2025-05-08T00:03:48.807257566Z" level=info msg="StopPodSandbox for \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\" returns successfully"
May 8 00:03:48.807643 containerd[2702]: time="2025-05-08T00:03:48.807617972Z" level=info msg="RemovePodSandbox for \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\""
May 8 00:03:48.807696 containerd[2702]: time="2025-05-08T00:03:48.807651973Z" level=info msg="Forcibly stopping sandbox \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\""
May 8 00:03:48.807734 containerd[2702]: time="2025-05-08T00:03:48.807720454Z" level=info msg="TearDown network for sandbox \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\" successfully"
May 8 00:03:48.809224 containerd[2702]: time="2025-05-08T00:03:48.809203078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:03:48.809262 containerd[2702]: time="2025-05-08T00:03:48.809252038Z" level=info msg="RemovePodSandbox \"d8db5656ea5e531764db87dbcad62b7d50008ec357bb1837f18d72d9995a6173\" returns successfully"
May 8 00:03:48.809502 containerd[2702]: time="2025-05-08T00:03:48.809486882Z" level=info msg="StopPodSandbox for \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\""
May 8 00:03:48.809564 containerd[2702]: time="2025-05-08T00:03:48.809553363Z" level=info msg="TearDown network for sandbox \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\" successfully"
May 8 00:03:48.809589 containerd[2702]: time="2025-05-08T00:03:48.809563643Z" level=info msg="StopPodSandbox for \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\" returns successfully"
May 8 00:03:48.809840 containerd[2702]: time="2025-05-08T00:03:48.809824607Z" level=info msg="RemovePodSandbox for \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\""
May 8 00:03:48.809865 containerd[2702]: time="2025-05-08T00:03:48.809845528Z" level=info msg="Forcibly stopping sandbox \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\""
May 8 00:03:48.809912 containerd[2702]: time="2025-05-08T00:03:48.809901809Z" level=info msg="TearDown network for sandbox \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\" successfully"
May 8 00:03:48.811181 containerd[2702]: time="2025-05-08T00:03:48.811159629Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:03:48.811218 containerd[2702]: time="2025-05-08T00:03:48.811199149Z" level=info msg="RemovePodSandbox \"3d495312edc1e4b7047949fad951b5f848ce2ec320c3e55ca78db1556721f11e\" returns successfully"
May 8 00:03:48.811432 containerd[2702]: time="2025-05-08T00:03:48.811417433Z" level=info msg="StopPodSandbox for \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\""
May 8 00:03:48.811491 containerd[2702]: time="2025-05-08T00:03:48.811481314Z" level=info msg="TearDown network for sandbox \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\" successfully"
May 8 00:03:48.811515 containerd[2702]: time="2025-05-08T00:03:48.811491194Z" level=info msg="StopPodSandbox for \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\" returns successfully"
May 8 00:03:48.811692 containerd[2702]: time="2025-05-08T00:03:48.811673757Z" level=info msg="RemovePodSandbox for \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\""
May 8 00:03:48.811714 containerd[2702]: time="2025-05-08T00:03:48.811699877Z" level=info msg="Forcibly stopping sandbox \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\""
May 8 00:03:48.811772 containerd[2702]: time="2025-05-08T00:03:48.811762478Z" level=info msg="TearDown network for sandbox \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\" successfully"
May 8 00:03:48.813035 containerd[2702]: time="2025-05-08T00:03:48.813013378Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:03:48.813082 containerd[2702]: time="2025-05-08T00:03:48.813057859Z" level=info msg="RemovePodSandbox \"e4c5bda77bfd2577ceb5dd9c35cee50b3e7324bb47641e37571eefacfaf0571b\" returns successfully"
May 8 00:03:48.813297 containerd[2702]: time="2025-05-08T00:03:48.813278423Z" level=info msg="StopPodSandbox for \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\""
May 8 00:03:48.813365 containerd[2702]: time="2025-05-08T00:03:48.813354664Z" level=info msg="TearDown network for sandbox \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\" successfully"
May 8 00:03:48.813387 containerd[2702]: time="2025-05-08T00:03:48.813365424Z" level=info msg="StopPodSandbox for \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\" returns successfully"
May 8 00:03:48.813547 containerd[2702]: time="2025-05-08T00:03:48.813533907Z" level=info msg="RemovePodSandbox for \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\""
May 8 00:03:48.813572 containerd[2702]: time="2025-05-08T00:03:48.813553347Z" level=info msg="Forcibly stopping sandbox \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\""
May 8 00:03:48.813627 containerd[2702]: time="2025-05-08T00:03:48.813616628Z" level=info msg="TearDown network for sandbox \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\" successfully"
May 8 00:03:48.814899 containerd[2702]: time="2025-05-08T00:03:48.814878648Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:03:48.814933 containerd[2702]: time="2025-05-08T00:03:48.814921209Z" level=info msg="RemovePodSandbox \"ea89867d9667df584b88c39fa576b94e6cf08a78335428fc8da0fd087596607d\" returns successfully"
May 8 00:03:48.815151 containerd[2702]: time="2025-05-08T00:03:48.815135132Z" level=info msg="StopPodSandbox for \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\""
May 8 00:03:48.815210 containerd[2702]: time="2025-05-08T00:03:48.815199213Z" level=info msg="TearDown network for sandbox \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\" successfully"
May 8 00:03:48.815235 containerd[2702]: time="2025-05-08T00:03:48.815209533Z" level=info msg="StopPodSandbox for \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\" returns successfully"
May 8 00:03:48.815413 containerd[2702]: time="2025-05-08T00:03:48.815398696Z" level=info msg="RemovePodSandbox for \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\""
May 8 00:03:48.815435 containerd[2702]: time="2025-05-08T00:03:48.815417577Z" level=info msg="Forcibly stopping sandbox \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\""
May 8 00:03:48.815485 containerd[2702]: time="2025-05-08T00:03:48.815475938Z" level=info msg="TearDown network for sandbox \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\" successfully"
May 8 00:03:48.816698 containerd[2702]: time="2025-05-08T00:03:48.816676477Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:03:48.816731 containerd[2702]: time="2025-05-08T00:03:48.816717877Z" level=info msg="RemovePodSandbox \"fddc6bda409a7acee382dc2ec3b8103a6d687d1bdfb87c4ea0fac220127f37a7\" returns successfully"
May 8 00:03:48.816943 containerd[2702]: time="2025-05-08T00:03:48.816925681Z" level=info msg="StopPodSandbox for \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\""
May 8 00:03:48.817010 containerd[2702]: time="2025-05-08T00:03:48.816998722Z" level=info msg="TearDown network for sandbox \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\" successfully"
May 8 00:03:48.817037 containerd[2702]: time="2025-05-08T00:03:48.817010442Z" level=info msg="StopPodSandbox for \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\" returns successfully"
May 8 00:03:48.817601 containerd[2702]: time="2025-05-08T00:03:48.817243326Z" level=info msg="RemovePodSandbox for \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\""
May 8 00:03:48.817601 containerd[2702]: time="2025-05-08T00:03:48.817275206Z" level=info msg="Forcibly stopping sandbox \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\""
May 8 00:03:48.817601 containerd[2702]: time="2025-05-08T00:03:48.817339047Z" level=info msg="TearDown network for sandbox \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\" successfully"
May 8 00:03:48.818579 containerd[2702]: time="2025-05-08T00:03:48.818556187Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:03:48.818613 containerd[2702]: time="2025-05-08T00:03:48.818599427Z" level=info msg="RemovePodSandbox \"dcab63d45473a06c9cf59fb8ad69e4de522bf15d7fbf38602c35f7c41aba74b5\" returns successfully"
May 8 00:03:48.818855 containerd[2702]: time="2025-05-08T00:03:48.818839391Z" level=info msg="StopPodSandbox for \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\""
May 8 00:03:48.818921 containerd[2702]: time="2025-05-08T00:03:48.818910312Z" level=info msg="TearDown network for sandbox \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\" successfully"
May 8 00:03:48.818946 containerd[2702]: time="2025-05-08T00:03:48.818920433Z" level=info msg="StopPodSandbox for \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\" returns successfully"
May 8 00:03:48.819158 containerd[2702]: time="2025-05-08T00:03:48.819140196Z" level=info msg="RemovePodSandbox for \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\""
May 8 00:03:48.819204 containerd[2702]: time="2025-05-08T00:03:48.819161076Z" level=info msg="Forcibly stopping sandbox \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\""
May 8 00:03:48.819228 containerd[2702]: time="2025-05-08T00:03:48.819217397Z" level=info msg="TearDown network for sandbox \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\" successfully"
May 8 00:03:48.820491 containerd[2702]: time="2025-05-08T00:03:48.820465697Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:03:48.820523 containerd[2702]: time="2025-05-08T00:03:48.820512498Z" level=info msg="RemovePodSandbox \"620028f9dd5e4dff71ba31cf57e9c41a45d57b1b1a2e62518f49739dd9008245\" returns successfully"
May 8 00:03:48.820763 containerd[2702]: time="2025-05-08T00:03:48.820742622Z" level=info msg="StopPodSandbox for \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\""
May 8 00:03:48.820844 containerd[2702]: time="2025-05-08T00:03:48.820832463Z" level=info msg="TearDown network for sandbox \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\" successfully"
May 8 00:03:48.820877 containerd[2702]: time="2025-05-08T00:03:48.820844303Z" level=info msg="StopPodSandbox for \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\" returns successfully"
May 8 00:03:48.821087 containerd[2702]: time="2025-05-08T00:03:48.821071667Z" level=info msg="RemovePodSandbox for \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\""
May 8 00:03:48.821125 containerd[2702]: time="2025-05-08T00:03:48.821093187Z" level=info msg="Forcibly stopping sandbox \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\""
May 8 00:03:48.821160 containerd[2702]: time="2025-05-08T00:03:48.821149548Z" level=info msg="TearDown network for sandbox \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\" successfully"
May 8 00:03:48.822417 containerd[2702]: time="2025-05-08T00:03:48.822397928Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:03:48.822448 containerd[2702]: time="2025-05-08T00:03:48.822438889Z" level=info msg="RemovePodSandbox \"308cefa1affd477c33c2178b491341b04faddeee325d1e8c80ce651c6cae36e6\" returns successfully"
May 8 00:03:48.822694 containerd[2702]: time="2025-05-08T00:03:48.822678612Z" level=info msg="StopPodSandbox for \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\""
May 8 00:03:48.822762 containerd[2702]: time="2025-05-08T00:03:48.822751374Z" level=info msg="TearDown network for sandbox \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\" successfully"
May 8 00:03:48.822784 containerd[2702]: time="2025-05-08T00:03:48.822761574Z" level=info msg="StopPodSandbox for \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\" returns successfully"
May 8 00:03:48.822973 containerd[2702]: time="2025-05-08T00:03:48.822956177Z" level=info msg="RemovePodSandbox for \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\""
May 8 00:03:48.823017 containerd[2702]: time="2025-05-08T00:03:48.822977097Z" level=info msg="Forcibly stopping sandbox \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\""
May 8 00:03:48.823047 containerd[2702]: time="2025-05-08T00:03:48.823034418Z" level=info msg="TearDown network for sandbox \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\" successfully"
May 8 00:03:48.824248 containerd[2702]: time="2025-05-08T00:03:48.824227637Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:03:48.824286 containerd[2702]: time="2025-05-08T00:03:48.824272918Z" level=info msg="RemovePodSandbox \"8316d93dc161acf2c943521540d4a6889a2dc1affce88805d34e831f8974e349\" returns successfully" May 8 00:03:48.824520 containerd[2702]: time="2025-05-08T00:03:48.824504242Z" level=info msg="StopPodSandbox for \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\"" May 8 00:03:48.824600 containerd[2702]: time="2025-05-08T00:03:48.824589003Z" level=info msg="TearDown network for sandbox \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\" successfully" May 8 00:03:48.824625 containerd[2702]: time="2025-05-08T00:03:48.824599683Z" level=info msg="StopPodSandbox for \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\" returns successfully" May 8 00:03:48.824827 containerd[2702]: time="2025-05-08T00:03:48.824812486Z" level=info msg="RemovePodSandbox for \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\"" May 8 00:03:48.824865 containerd[2702]: time="2025-05-08T00:03:48.824833087Z" level=info msg="Forcibly stopping sandbox \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\"" May 8 00:03:48.824907 containerd[2702]: time="2025-05-08T00:03:48.824896288Z" level=info msg="TearDown network for sandbox \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\" successfully" May 8 00:03:48.826120 containerd[2702]: time="2025-05-08T00:03:48.826099267Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:03:48.826156 containerd[2702]: time="2025-05-08T00:03:48.826146748Z" level=info msg="RemovePodSandbox \"0286357e8ee664578efc93e6958c44b4cd90bcc1bd1fd9fd4e71cd419852d78a\" returns successfully" May 8 00:03:48.826349 containerd[2702]: time="2025-05-08T00:03:48.826332951Z" level=info msg="StopPodSandbox for \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\"" May 8 00:03:48.826417 containerd[2702]: time="2025-05-08T00:03:48.826405032Z" level=info msg="TearDown network for sandbox \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\" successfully" May 8 00:03:48.826440 containerd[2702]: time="2025-05-08T00:03:48.826416312Z" level=info msg="StopPodSandbox for \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\" returns successfully" May 8 00:03:48.826598 containerd[2702]: time="2025-05-08T00:03:48.826584035Z" level=info msg="RemovePodSandbox for \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\"" May 8 00:03:48.826628 containerd[2702]: time="2025-05-08T00:03:48.826603195Z" level=info msg="Forcibly stopping sandbox \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\"" May 8 00:03:48.826689 containerd[2702]: time="2025-05-08T00:03:48.826678076Z" level=info msg="TearDown network for sandbox \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\" successfully" May 8 00:03:48.827904 containerd[2702]: time="2025-05-08T00:03:48.827882775Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:03:48.827953 containerd[2702]: time="2025-05-08T00:03:48.827925096Z" level=info msg="RemovePodSandbox \"7a07854e2751a41c24beba08491c46960262ab93a1825f6f2e365740a3b51de2\" returns successfully" May 8 00:03:48.828167 containerd[2702]: time="2025-05-08T00:03:48.828149500Z" level=info msg="StopPodSandbox for \"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\"" May 8 00:03:48.828233 containerd[2702]: time="2025-05-08T00:03:48.828220381Z" level=info msg="TearDown network for sandbox \"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\" successfully" May 8 00:03:48.828254 containerd[2702]: time="2025-05-08T00:03:48.828232461Z" level=info msg="StopPodSandbox for \"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\" returns successfully" May 8 00:03:48.828440 containerd[2702]: time="2025-05-08T00:03:48.828425904Z" level=info msg="RemovePodSandbox for \"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\"" May 8 00:03:48.828474 containerd[2702]: time="2025-05-08T00:03:48.828444384Z" level=info msg="Forcibly stopping sandbox \"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\"" May 8 00:03:48.828507 containerd[2702]: time="2025-05-08T00:03:48.828497425Z" level=info msg="TearDown network for sandbox \"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\" successfully" May 8 00:03:48.829822 containerd[2702]: time="2025-05-08T00:03:48.829776006Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:03:48.829822 containerd[2702]: time="2025-05-08T00:03:48.829824246Z" level=info msg="RemovePodSandbox \"01f940cd04893e438d6a59a5fdfde834ea3fd6e38dcc4632687846ef5486a6f7\" returns successfully" May 8 00:03:48.830101 containerd[2702]: time="2025-05-08T00:03:48.830085131Z" level=info msg="StopPodSandbox for \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\"" May 8 00:03:48.830171 containerd[2702]: time="2025-05-08T00:03:48.830159012Z" level=info msg="TearDown network for sandbox \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\" successfully" May 8 00:03:48.830171 containerd[2702]: time="2025-05-08T00:03:48.830169412Z" level=info msg="StopPodSandbox for \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\" returns successfully" May 8 00:03:48.830365 containerd[2702]: time="2025-05-08T00:03:48.830349135Z" level=info msg="RemovePodSandbox for \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\"" May 8 00:03:48.830394 containerd[2702]: time="2025-05-08T00:03:48.830371415Z" level=info msg="Forcibly stopping sandbox \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\"" May 8 00:03:48.830455 containerd[2702]: time="2025-05-08T00:03:48.830443656Z" level=info msg="TearDown network for sandbox \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\" successfully" May 8 00:03:48.831696 containerd[2702]: time="2025-05-08T00:03:48.831672676Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:03:48.831736 containerd[2702]: time="2025-05-08T00:03:48.831717877Z" level=info msg="RemovePodSandbox \"61e90d998f193e125748818c937ef9e382a82c926511f6e59fa46beb622580d5\" returns successfully" May 8 00:03:48.831969 containerd[2702]: time="2025-05-08T00:03:48.831954800Z" level=info msg="StopPodSandbox for \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\"" May 8 00:03:48.832042 containerd[2702]: time="2025-05-08T00:03:48.832030442Z" level=info msg="TearDown network for sandbox \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\" successfully" May 8 00:03:48.832067 containerd[2702]: time="2025-05-08T00:03:48.832041602Z" level=info msg="StopPodSandbox for \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\" returns successfully" May 8 00:03:48.832262 containerd[2702]: time="2025-05-08T00:03:48.832248245Z" level=info msg="RemovePodSandbox for \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\"" May 8 00:03:48.832292 containerd[2702]: time="2025-05-08T00:03:48.832266845Z" level=info msg="Forcibly stopping sandbox \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\"" May 8 00:03:48.832332 containerd[2702]: time="2025-05-08T00:03:48.832321726Z" level=info msg="TearDown network for sandbox \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\" successfully" May 8 00:03:48.833595 containerd[2702]: time="2025-05-08T00:03:48.833573266Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:03:48.833631 containerd[2702]: time="2025-05-08T00:03:48.833620667Z" level=info msg="RemovePodSandbox \"95604299eefc9882140aa58b0c424648dfda7875cb52916b5251e92699e7c852\" returns successfully" May 8 00:03:48.833888 containerd[2702]: time="2025-05-08T00:03:48.833872391Z" level=info msg="StopPodSandbox for \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\"" May 8 00:03:48.833950 containerd[2702]: time="2025-05-08T00:03:48.833940232Z" level=info msg="TearDown network for sandbox \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\" successfully" May 8 00:03:48.833978 containerd[2702]: time="2025-05-08T00:03:48.833949992Z" level=info msg="StopPodSandbox for \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\" returns successfully" May 8 00:03:48.834135 containerd[2702]: time="2025-05-08T00:03:48.834120835Z" level=info msg="RemovePodSandbox for \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\"" May 8 00:03:48.834155 containerd[2702]: time="2025-05-08T00:03:48.834140315Z" level=info msg="Forcibly stopping sandbox \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\"" May 8 00:03:48.834216 containerd[2702]: time="2025-05-08T00:03:48.834205236Z" level=info msg="TearDown network for sandbox \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\" successfully" May 8 00:03:48.835542 containerd[2702]: time="2025-05-08T00:03:48.835515577Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:03:48.835572 containerd[2702]: time="2025-05-08T00:03:48.835559178Z" level=info msg="RemovePodSandbox \"2fb8e73d5f4ef8064634faeb7c9b5b914533ef6b0a71faaa0bae82880bb29836\" returns successfully" May 8 00:03:48.835785 containerd[2702]: time="2025-05-08T00:03:48.835770341Z" level=info msg="StopPodSandbox for \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\"" May 8 00:03:48.835857 containerd[2702]: time="2025-05-08T00:03:48.835843382Z" level=info msg="TearDown network for sandbox \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\" successfully" May 8 00:03:48.835857 containerd[2702]: time="2025-05-08T00:03:48.835854303Z" level=info msg="StopPodSandbox for \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\" returns successfully" May 8 00:03:48.836059 containerd[2702]: time="2025-05-08T00:03:48.836040986Z" level=info msg="RemovePodSandbox for \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\"" May 8 00:03:48.836090 containerd[2702]: time="2025-05-08T00:03:48.836064746Z" level=info msg="Forcibly stopping sandbox \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\"" May 8 00:03:48.836138 containerd[2702]: time="2025-05-08T00:03:48.836127267Z" level=info msg="TearDown network for sandbox \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\" successfully" May 8 00:03:48.850429 containerd[2702]: time="2025-05-08T00:03:48.850406495Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:03:48.850466 containerd[2702]: time="2025-05-08T00:03:48.850454295Z" level=info msg="RemovePodSandbox \"203dea32cab135b8e456669b0868c6078d0c5fa4d3942caed3f770d0f5292e7e\" returns successfully" May 8 00:03:48.850723 containerd[2702]: time="2025-05-08T00:03:48.850704459Z" level=info msg="StopPodSandbox for \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\"" May 8 00:03:48.850794 containerd[2702]: time="2025-05-08T00:03:48.850783661Z" level=info msg="TearDown network for sandbox \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\" successfully" May 8 00:03:48.850832 containerd[2702]: time="2025-05-08T00:03:48.850793661Z" level=info msg="StopPodSandbox for \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\" returns successfully" May 8 00:03:48.850977 containerd[2702]: time="2025-05-08T00:03:48.850962184Z" level=info msg="RemovePodSandbox for \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\"" May 8 00:03:48.851014 containerd[2702]: time="2025-05-08T00:03:48.850980704Z" level=info msg="Forcibly stopping sandbox \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\"" May 8 00:03:48.851043 containerd[2702]: time="2025-05-08T00:03:48.851030305Z" level=info msg="TearDown network for sandbox \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\" successfully" May 8 00:03:48.852311 containerd[2702]: time="2025-05-08T00:03:48.852291405Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:03:48.852347 containerd[2702]: time="2025-05-08T00:03:48.852337485Z" level=info msg="RemovePodSandbox \"d6766eee6915a89294ebcee84c9a0d5f739df341124c9c8fb287eacbf4984556\" returns successfully" May 8 00:03:48.852548 containerd[2702]: time="2025-05-08T00:03:48.852531809Z" level=info msg="StopPodSandbox for \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\"" May 8 00:03:48.852622 containerd[2702]: time="2025-05-08T00:03:48.852611570Z" level=info msg="TearDown network for sandbox \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\" successfully" May 8 00:03:48.852646 containerd[2702]: time="2025-05-08T00:03:48.852621530Z" level=info msg="StopPodSandbox for \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\" returns successfully" May 8 00:03:48.852838 containerd[2702]: time="2025-05-08T00:03:48.852821853Z" level=info msg="RemovePodSandbox for \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\"" May 8 00:03:48.852867 containerd[2702]: time="2025-05-08T00:03:48.852842294Z" level=info msg="Forcibly stopping sandbox \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\"" May 8 00:03:48.852913 containerd[2702]: time="2025-05-08T00:03:48.852902614Z" level=info msg="TearDown network for sandbox \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\" successfully" May 8 00:03:48.854187 containerd[2702]: time="2025-05-08T00:03:48.854165835Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:03:48.854220 containerd[2702]: time="2025-05-08T00:03:48.854210315Z" level=info msg="RemovePodSandbox \"4471b8c457bfcb1a11d660bd731b9eb8f2a5e6360d0b19dcc1b9838c855f05f9\" returns successfully" May 8 00:03:55.099662 systemd[1]: Started sshd@8-145.40.69.49:22-119.84.241.94:42474.service - OpenSSH per-connection server daemon (119.84.241.94:42474). May 8 00:03:59.331129 sshd-session[8299]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=119.84.241.94 user=bin May 8 00:04:01.147424 sshd[8296]: PAM: Permission denied for bin from 119.84.241.94 May 8 00:04:01.721878 sshd[8296]: Connection closed by authenticating user bin 119.84.241.94 port 42474 [preauth] May 8 00:04:01.723875 systemd[1]: sshd@8-145.40.69.49:22-119.84.241.94:42474.service: Deactivated successfully. May 8 00:04:17.822823 systemd[1]: Started sshd@9-145.40.69.49:22-85.208.84.5:26186.service - OpenSSH per-connection server daemon (85.208.84.5:26186). May 8 00:04:18.673980 sshd[8363]: Invalid user user1 from 85.208.84.5 port 26186 May 8 00:04:18.840897 sshd[8363]: Connection closed by invalid user user1 85.208.84.5 port 26186 [preauth] May 8 00:04:18.842945 systemd[1]: sshd@9-145.40.69.49:22-85.208.84.5:26186.service: Deactivated successfully. May 8 00:04:20.630858 kubelet[4417]: I0508 00:04:20.630801 4417 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:05:35.966683 systemd[1]: Started sshd@10-145.40.69.49:22-195.178.110.50:15340.service - OpenSSH per-connection server daemon (195.178.110.50:15340). May 8 00:05:38.004711 systemd[1]: Started sshd@11-145.40.69.49:22-207.219.222.15:43741.service - OpenSSH per-connection server daemon (207.219.222.15:43741). 
May 8 00:05:39.594710 sshd[8605]: Invalid user httpd from 207.219.222.15 port 43741 May 8 00:05:40.193931 sshd-session[8607]: pam_faillock(sshd:auth): User unknown May 8 00:05:40.198381 sshd[8605]: Postponed keyboard-interactive for invalid user httpd from 207.219.222.15 port 43741 ssh2 [preauth] May 8 00:05:40.734129 sshd-session[8607]: pam_unix(sshd:auth): check pass; user unknown May 8 00:05:40.734156 sshd-session[8607]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=207.219.222.15 May 8 00:05:40.734417 sshd-session[8607]: pam_faillock(sshd:auth): User unknown May 8 00:05:42.550925 sshd[8605]: PAM: Permission denied for illegal user httpd from 207.219.222.15 May 8 00:05:42.551313 sshd[8605]: Failed keyboard-interactive/pam for invalid user httpd from 207.219.222.15 port 43741 ssh2 May 8 00:05:43.048400 sshd[8605]: Connection closed by invalid user httpd 207.219.222.15 port 43741 [preauth] May 8 00:05:43.050449 systemd[1]: sshd@11-145.40.69.49:22-207.219.222.15:43741.service: Deactivated successfully. May 8 00:05:54.282608 sshd[8576]: Invalid user S from 195.178.110.50 port 15340 May 8 00:05:55.821895 sshd[8576]: Connection closed by invalid user S 195.178.110.50 port 15340 [preauth] May 8 00:05:55.823959 systemd[1]: sshd@10-145.40.69.49:22-195.178.110.50:15340.service: Deactivated successfully. May 8 00:05:55.919647 systemd[1]: Started sshd@12-145.40.69.49:22-195.178.110.50:21404.service - OpenSSH per-connection server daemon (195.178.110.50:21404). May 8 00:06:14.046829 sshd[8635]: Invalid user D from 195.178.110.50 port 21404 May 8 00:06:15.556645 sshd[8635]: Connection closed by invalid user D 195.178.110.50 port 21404 [preauth] May 8 00:06:15.558637 systemd[1]: sshd@12-145.40.69.49:22-195.178.110.50:21404.service: Deactivated successfully. May 8 00:06:15.652648 systemd[1]: Started sshd@13-145.40.69.49:22-195.178.110.50:17498.service - OpenSSH per-connection server daemon (195.178.110.50:17498). 
May 8 00:06:31.803086 sshd[8698]: Invalid user 5 from 195.178.110.50 port 17498 May 8 00:06:33.690863 sshd[8698]: Connection closed by invalid user 5 195.178.110.50 port 17498 [preauth] May 8 00:06:33.692835 systemd[1]: sshd@13-145.40.69.49:22-195.178.110.50:17498.service: Deactivated successfully. May 8 00:06:33.785585 systemd[1]: Started sshd@14-145.40.69.49:22-195.178.110.50:11232.service - OpenSSH per-connection server daemon (195.178.110.50:11232). May 8 00:06:48.899443 sshd[8731]: Invalid user & from 195.178.110.50 port 11232 May 8 00:06:50.577649 sshd[8731]: Connection closed by invalid user & 195.178.110.50 port 11232 [preauth] May 8 00:06:50.579667 systemd[1]: sshd@14-145.40.69.49:22-195.178.110.50:11232.service: Deactivated successfully. May 8 00:06:50.670608 systemd[1]: Started sshd@15-145.40.69.49:22-195.178.110.50:3590.service - OpenSSH per-connection server daemon (195.178.110.50:3590). May 8 00:07:06.959011 sshd[8784]: Invalid user u from 195.178.110.50 port 3590 May 8 00:07:08.393006 sshd[8784]: Connection closed by invalid user u 195.178.110.50 port 3590 [preauth] May 8 00:07:08.395075 systemd[1]: sshd@15-145.40.69.49:22-195.178.110.50:3590.service: Deactivated successfully. May 8 00:10:33.775812 systemd[1]: Started sshd@16-145.40.69.49:22-47.236.250.29:50364.service - OpenSSH per-connection server daemon (47.236.250.29:50364). May 8 00:10:34.733992 sshd[9340]: Invalid user from 47.236.250.29 port 50364 May 8 00:10:41.765926 sshd[9340]: Connection closed by invalid user 47.236.250.29 port 50364 [preauth] May 8 00:10:41.767906 systemd[1]: sshd@16-145.40.69.49:22-47.236.250.29:50364.service: Deactivated successfully. 
May 8 00:11:41.233755 update_engine[2693]: I20250508 00:11:41.233695 2693 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 8 00:11:41.233755 update_engine[2693]: I20250508 00:11:41.233748 2693 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 8 00:11:41.234229 update_engine[2693]: I20250508 00:11:41.234005 2693 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 8 00:11:41.234463 update_engine[2693]: I20250508 00:11:41.234323 2693 omaha_request_params.cc:62] Current group set to beta May 8 00:11:41.234463 update_engine[2693]: I20250508 00:11:41.234409 2693 update_attempter.cc:499] Already updated boot flags. Skipping. May 8 00:11:41.234463 update_engine[2693]: I20250508 00:11:41.234418 2693 update_attempter.cc:643] Scheduling an action processor start. May 8 00:11:41.234463 update_engine[2693]: I20250508 00:11:41.234431 2693 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 8 00:11:41.234463 update_engine[2693]: I20250508 00:11:41.234459 2693 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 8 00:11:41.234581 update_engine[2693]: I20250508 00:11:41.234505 2693 omaha_request_action.cc:271] Posting an Omaha request to disabled May 8 00:11:41.234581 update_engine[2693]: I20250508 00:11:41.234513 2693 omaha_request_action.cc:272] Request: May 8 00:11:41.234581 update_engine[2693]: May 8 00:11:41.234581 update_engine[2693]: May 8 00:11:41.234581 update_engine[2693]: May 8 00:11:41.234581 update_engine[2693]: May 8 00:11:41.234581 update_engine[2693]: May 8 00:11:41.234581 update_engine[2693]: May 8 00:11:41.234581 update_engine[2693]: May 8 00:11:41.234581 update_engine[2693]: May 8 00:11:41.234581 update_engine[2693]: I20250508 00:11:41.234519 2693 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:11:41.235087 locksmithd[2726]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 8 00:11:41.235496 update_engine[2693]: I20250508 00:11:41.235477 2693 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:11:41.235779 update_engine[2693]: I20250508 00:11:41.235759 2693 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 00:11:41.236241 update_engine[2693]: E20250508 00:11:41.236220 2693 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:11:41.236283 update_engine[2693]: I20250508 00:11:41.236271 2693 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 8 00:11:41.901669 systemd[1]: Started sshd@17-145.40.69.49:22-50.171.64.170:48350.service - OpenSSH per-connection server daemon (50.171.64.170:48350). May 8 00:11:42.034499 systemd[1]: Started sshd@18-145.40.69.49:22-139.178.68.195:60416.service - OpenSSH per-connection server daemon (139.178.68.195:60416). May 8 00:11:42.445769 sshd[9529]: Accepted publickey for core from 139.178.68.195 port 60416 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko May 8 00:11:42.446956 sshd-session[9529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:11:42.450617 systemd-logind[2680]: New session 10 of user core. May 8 00:11:42.459917 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:11:42.797906 sshd[9532]: Connection closed by 139.178.68.195 port 60416 May 8 00:11:42.798297 sshd-session[9529]: pam_unix(sshd:session): session closed for user core May 8 00:11:42.801254 systemd[1]: sshd@18-145.40.69.49:22-139.178.68.195:60416.service: Deactivated successfully. May 8 00:11:42.803511 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:11:42.804048 systemd-logind[2680]: Session 10 logged out. Waiting for processes to exit. May 8 00:11:42.804622 systemd-logind[2680]: Removed session 10. 
May 8 00:11:43.028694 sshd[9527]: Invalid user master from 50.171.64.170 port 48350 May 8 00:11:43.250246 sshd-session[9570]: pam_faillock(sshd:auth): User unknown May 8 00:11:43.256828 sshd[9527]: Postponed keyboard-interactive for invalid user master from 50.171.64.170 port 48350 ssh2 [preauth] May 8 00:11:43.469372 sshd-session[9570]: pam_unix(sshd:auth): check pass; user unknown May 8 00:11:43.469397 sshd-session[9570]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=50.171.64.170 May 8 00:11:43.469632 sshd-session[9570]: pam_faillock(sshd:auth): User unknown May 8 00:11:45.787722 sshd[9527]: PAM: Permission denied for illegal user master from 50.171.64.170 May 8 00:11:45.788108 sshd[9527]: Failed keyboard-interactive/pam for invalid user master from 50.171.64.170 port 48350 ssh2 May 8 00:11:45.941141 sshd[9527]: Connection closed by invalid user master 50.171.64.170 port 48350 [preauth] May 8 00:11:45.944329 systemd[1]: sshd@17-145.40.69.49:22-50.171.64.170:48350.service: Deactivated successfully. May 8 00:11:47.869617 systemd[1]: Started sshd@19-145.40.69.49:22-139.178.68.195:38896.service - OpenSSH per-connection server daemon (139.178.68.195:38896). May 8 00:11:48.283289 sshd[9584]: Accepted publickey for core from 139.178.68.195 port 38896 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko May 8 00:11:48.284338 sshd-session[9584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:11:48.287544 systemd-logind[2680]: New session 11 of user core. May 8 00:11:48.302905 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:11:48.628965 sshd[9586]: Connection closed by 139.178.68.195 port 38896 May 8 00:11:48.629278 sshd-session[9584]: pam_unix(sshd:session): session closed for user core May 8 00:11:48.632182 systemd[1]: sshd@19-145.40.69.49:22-139.178.68.195:38896.service: Deactivated successfully. 
May 8 00:11:48.633859 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:11:48.634431 systemd-logind[2680]: Session 11 logged out. Waiting for processes to exit. May 8 00:11:48.634992 systemd-logind[2680]: Removed session 11. May 8 00:11:48.709522 systemd[1]: Started sshd@20-145.40.69.49:22-139.178.68.195:38902.service - OpenSSH per-connection server daemon (139.178.68.195:38902). May 8 00:11:49.144846 sshd[9621]: Accepted publickey for core from 139.178.68.195 port 38902 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko May 8 00:11:49.145991 sshd-session[9621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:11:49.149116 systemd-logind[2680]: New session 12 of user core. May 8 00:11:49.161913 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:11:49.534603 sshd[9625]: Connection closed by 139.178.68.195 port 38902 May 8 00:11:49.534981 sshd-session[9621]: pam_unix(sshd:session): session closed for user core May 8 00:11:49.537847 systemd[1]: sshd@20-145.40.69.49:22-139.178.68.195:38902.service: Deactivated successfully. May 8 00:11:49.540173 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:11:49.540776 systemd-logind[2680]: Session 12 logged out. Waiting for processes to exit. May 8 00:11:49.541412 systemd-logind[2680]: Removed session 12. May 8 00:11:49.604761 systemd[1]: Started sshd@21-145.40.69.49:22-139.178.68.195:38910.service - OpenSSH per-connection server daemon (139.178.68.195:38910). May 8 00:11:50.019528 sshd[9657]: Accepted publickey for core from 139.178.68.195 port 38910 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko May 8 00:11:50.020543 sshd-session[9657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:11:50.023453 systemd-logind[2680]: New session 13 of user core. May 8 00:11:50.032918 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 8 00:11:50.366240 sshd[9659]: Connection closed by 139.178.68.195 port 38910 May 8 00:11:50.366557 sshd-session[9657]: pam_unix(sshd:session): session closed for user core May 8 00:11:50.369528 systemd[1]: sshd@21-145.40.69.49:22-139.178.68.195:38910.service: Deactivated successfully. May 8 00:11:50.371272 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:11:50.371853 systemd-logind[2680]: Session 13 logged out. Waiting for processes to exit. May 8 00:11:50.372404 systemd-logind[2680]: Removed session 13. May 8 00:11:51.134323 update_engine[2693]: I20250508 00:11:51.134043 2693 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:11:51.134323 update_engine[2693]: I20250508 00:11:51.134291 2693 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:11:51.134640 update_engine[2693]: I20250508 00:11:51.134512 2693 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 00:11:51.135428 update_engine[2693]: E20250508 00:11:51.135407 2693 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:11:51.135467 update_engine[2693]: I20250508 00:11:51.135445 2693 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 8 00:11:55.439870 systemd[1]: Started sshd@22-145.40.69.49:22-139.178.68.195:57436.service - OpenSSH per-connection server daemon (139.178.68.195:57436). May 8 00:11:55.862290 sshd[9722]: Accepted publickey for core from 139.178.68.195 port 57436 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko May 8 00:11:55.863320 sshd-session[9722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:11:55.866431 systemd-logind[2680]: New session 14 of user core. May 8 00:11:55.875918 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 8 00:11:56.213375 sshd[9724]: Connection closed by 139.178.68.195 port 57436
May 8 00:11:56.213686 sshd-session[9722]: pam_unix(sshd:session): session closed for user core
May 8 00:11:56.216581 systemd[1]: sshd@22-145.40.69.49:22-139.178.68.195:57436.service: Deactivated successfully.
May 8 00:11:56.218325 systemd[1]: session-14.scope: Deactivated successfully.
May 8 00:11:56.218904 systemd-logind[2680]: Session 14 logged out. Waiting for processes to exit.
May 8 00:11:56.219455 systemd-logind[2680]: Removed session 14.
May 8 00:11:56.292639 systemd[1]: Started sshd@23-145.40.69.49:22-139.178.68.195:57452.service - OpenSSH per-connection server daemon (139.178.68.195:57452).
May 8 00:11:56.704279 sshd[9758]: Accepted publickey for core from 139.178.68.195 port 57452 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:11:56.705326 sshd-session[9758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:11:56.708465 systemd-logind[2680]: New session 15 of user core.
May 8 00:11:56.726975 systemd[1]: Started session-15.scope - Session 15 of User core.
May 8 00:11:57.109842 sshd[9760]: Connection closed by 139.178.68.195 port 57452
May 8 00:11:57.110187 sshd-session[9758]: pam_unix(sshd:session): session closed for user core
May 8 00:11:57.113118 systemd[1]: sshd@23-145.40.69.49:22-139.178.68.195:57452.service: Deactivated successfully.
May 8 00:11:57.114882 systemd[1]: session-15.scope: Deactivated successfully.
May 8 00:11:57.115438 systemd-logind[2680]: Session 15 logged out. Waiting for processes to exit.
May 8 00:11:57.116002 systemd-logind[2680]: Removed session 15.
May 8 00:11:57.185718 systemd[1]: Started sshd@24-145.40.69.49:22-139.178.68.195:57468.service - OpenSSH per-connection server daemon (139.178.68.195:57468).
May 8 00:11:57.616012 sshd[9793]: Accepted publickey for core from 139.178.68.195 port 57468 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:11:57.617035 sshd-session[9793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:11:57.620104 systemd-logind[2680]: New session 16 of user core.
May 8 00:11:57.633908 systemd[1]: Started session-16.scope - Session 16 of User core.
May 8 00:11:58.982208 sshd[9795]: Connection closed by 139.178.68.195 port 57468
May 8 00:11:58.982589 sshd-session[9793]: pam_unix(sshd:session): session closed for user core
May 8 00:11:58.985577 systemd[1]: sshd@24-145.40.69.49:22-139.178.68.195:57468.service: Deactivated successfully.
May 8 00:11:58.987304 systemd[1]: session-16.scope: Deactivated successfully.
May 8 00:11:58.987497 systemd[1]: session-16.scope: Consumed 4.757s CPU time, 116.1M memory peak.
May 8 00:11:58.987850 systemd-logind[2680]: Session 16 logged out. Waiting for processes to exit.
May 8 00:11:58.988409 systemd-logind[2680]: Removed session 16.
May 8 00:11:59.053590 systemd[1]: Started sshd@25-145.40.69.49:22-139.178.68.195:57474.service - OpenSSH per-connection server daemon (139.178.68.195:57474).
May 8 00:11:59.467224 sshd[9892]: Accepted publickey for core from 139.178.68.195 port 57474 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:11:59.468196 sshd-session[9892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:11:59.471284 systemd-logind[2680]: New session 17 of user core.
May 8 00:11:59.480910 systemd[1]: Started session-17.scope - Session 17 of User core.
May 8 00:11:59.897240 sshd[9894]: Connection closed by 139.178.68.195 port 57474
May 8 00:11:59.897589 sshd-session[9892]: pam_unix(sshd:session): session closed for user core
May 8 00:11:59.900547 systemd[1]: sshd@25-145.40.69.49:22-139.178.68.195:57474.service: Deactivated successfully.
May 8 00:11:59.902283 systemd[1]: session-17.scope: Deactivated successfully.
May 8 00:11:59.902829 systemd-logind[2680]: Session 17 logged out. Waiting for processes to exit.
May 8 00:11:59.903394 systemd-logind[2680]: Removed session 17.
May 8 00:11:59.976653 systemd[1]: Started sshd@26-145.40.69.49:22-139.178.68.195:57482.service - OpenSSH per-connection server daemon (139.178.68.195:57482).
May 8 00:12:00.400034 sshd[9945]: Accepted publickey for core from 139.178.68.195 port 57482 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:12:00.401049 sshd-session[9945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:12:00.404082 systemd-logind[2680]: New session 18 of user core.
May 8 00:12:00.422962 systemd[1]: Started session-18.scope - Session 18 of User core.
May 8 00:12:00.751229 sshd[9947]: Connection closed by 139.178.68.195 port 57482
May 8 00:12:00.751571 sshd-session[9945]: pam_unix(sshd:session): session closed for user core
May 8 00:12:00.754356 systemd[1]: sshd@26-145.40.69.49:22-139.178.68.195:57482.service: Deactivated successfully.
May 8 00:12:00.756044 systemd[1]: session-18.scope: Deactivated successfully.
May 8 00:12:00.756628 systemd-logind[2680]: Session 18 logged out. Waiting for processes to exit.
May 8 00:12:00.757171 systemd-logind[2680]: Removed session 18.
May 8 00:12:01.133724 update_engine[2693]: I20250508 00:12:01.133615 2693 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 8 00:12:01.134044 update_engine[2693]: I20250508 00:12:01.133827 2693 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 8 00:12:01.134075 update_engine[2693]: I20250508 00:12:01.134037 2693 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 8 00:12:01.134477 update_engine[2693]: E20250508 00:12:01.134460 2693 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 8 00:12:01.134500 update_engine[2693]: I20250508 00:12:01.134492 2693 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 8 00:12:05.828650 systemd[1]: Started sshd@27-145.40.69.49:22-139.178.68.195:48954.service - OpenSSH per-connection server daemon (139.178.68.195:48954).
May 8 00:12:06.250225 sshd[9985]: Accepted publickey for core from 139.178.68.195 port 48954 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:12:06.251237 sshd-session[9985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:12:06.254543 systemd-logind[2680]: New session 19 of user core.
May 8 00:12:06.263902 systemd[1]: Started session-19.scope - Session 19 of User core.
May 8 00:12:06.600010 sshd[9987]: Connection closed by 139.178.68.195 port 48954
May 8 00:12:06.600332 sshd-session[9985]: pam_unix(sshd:session): session closed for user core
May 8 00:12:06.603070 systemd[1]: sshd@27-145.40.69.49:22-139.178.68.195:48954.service: Deactivated successfully.
May 8 00:12:06.605423 systemd[1]: session-19.scope: Deactivated successfully.
May 8 00:12:06.606015 systemd-logind[2680]: Session 19 logged out. Waiting for processes to exit.
May 8 00:12:06.606586 systemd-logind[2680]: Removed session 19.
May 8 00:12:11.137565 update_engine[2693]: I20250508 00:12:11.137495 2693 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 8 00:12:11.137952 update_engine[2693]: I20250508 00:12:11.137761 2693 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 8 00:12:11.138012 update_engine[2693]: I20250508 00:12:11.137992 2693 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 8 00:12:11.138383 update_engine[2693]: E20250508 00:12:11.138370 2693 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 8 00:12:11.138427 update_engine[2693]: I20250508 00:12:11.138415 2693 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 8 00:12:11.138451 update_engine[2693]: I20250508 00:12:11.138424 2693 omaha_request_action.cc:617] Omaha request response:
May 8 00:12:11.138509 update_engine[2693]: E20250508 00:12:11.138496 2693 omaha_request_action.cc:636] Omaha request network transfer failed.
May 8 00:12:11.138530 update_engine[2693]: I20250508 00:12:11.138515 2693 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 8 00:12:11.138530 update_engine[2693]: I20250508 00:12:11.138520 2693 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 8 00:12:11.138530 update_engine[2693]: I20250508 00:12:11.138525 2693 update_attempter.cc:306] Processing Done.
May 8 00:12:11.138589 update_engine[2693]: E20250508 00:12:11.138537 2693 update_attempter.cc:619] Update failed.
May 8 00:12:11.138589 update_engine[2693]: I20250508 00:12:11.138542 2693 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 8 00:12:11.138589 update_engine[2693]: I20250508 00:12:11.138547 2693 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 8 00:12:11.138589 update_engine[2693]: I20250508 00:12:11.138552 2693 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 8 00:12:11.138658 update_engine[2693]: I20250508 00:12:11.138607 2693 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 8 00:12:11.138658 update_engine[2693]: I20250508 00:12:11.138626 2693 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 8 00:12:11.138658 update_engine[2693]: I20250508 00:12:11.138631 2693 omaha_request_action.cc:272] Request:
May 8 00:12:11.138658 update_engine[2693]:
May 8 00:12:11.138658 update_engine[2693]:
May 8 00:12:11.138658 update_engine[2693]:
May 8 00:12:11.138658 update_engine[2693]:
May 8 00:12:11.138658 update_engine[2693]:
May 8 00:12:11.138658 update_engine[2693]:
May 8 00:12:11.138658 update_engine[2693]: I20250508 00:12:11.138636 2693 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 8 00:12:11.138829 update_engine[2693]: I20250508 00:12:11.138746 2693 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 8 00:12:11.138877 locksmithd[2726]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 8 00:12:11.139048 update_engine[2693]: I20250508 00:12:11.138916 2693 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 8 00:12:11.139352 update_engine[2693]: E20250508 00:12:11.139336 2693 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 8 00:12:11.139374 update_engine[2693]: I20250508 00:12:11.139364 2693 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 8 00:12:11.139374 update_engine[2693]: I20250508 00:12:11.139371 2693 omaha_request_action.cc:617] Omaha request response:
May 8 00:12:11.139410 update_engine[2693]: I20250508 00:12:11.139376 2693 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 8 00:12:11.139410 update_engine[2693]: I20250508 00:12:11.139380 2693 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 8 00:12:11.139410 update_engine[2693]: I20250508 00:12:11.139384 2693 update_attempter.cc:306] Processing Done.
May 8 00:12:11.139410 update_engine[2693]: I20250508 00:12:11.139389 2693 update_attempter.cc:310] Error event sent.
May 8 00:12:11.139410 update_engine[2693]: I20250508 00:12:11.139395 2693 update_check_scheduler.cc:74] Next update check in 43m32s
May 8 00:12:11.139530 locksmithd[2726]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 8 00:12:11.673687 systemd[1]: Started sshd@28-145.40.69.49:22-139.178.68.195:48964.service - OpenSSH per-connection server daemon (139.178.68.195:48964).
May 8 00:12:12.098040 sshd[10070]: Accepted publickey for core from 139.178.68.195 port 48964 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:12:12.099053 sshd-session[10070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:12:12.102226 systemd-logind[2680]: New session 20 of user core.
May 8 00:12:12.116926 systemd[1]: Started session-20.scope - Session 20 of User core.
May 8 00:12:12.450281 sshd[10072]: Connection closed by 139.178.68.195 port 48964
May 8 00:12:12.450632 sshd-session[10070]: pam_unix(sshd:session): session closed for user core
May 8 00:12:12.453453 systemd[1]: sshd@28-145.40.69.49:22-139.178.68.195:48964.service: Deactivated successfully.
May 8 00:12:12.455401 systemd[1]: session-20.scope: Deactivated successfully.
May 8 00:12:12.455990 systemd-logind[2680]: Session 20 logged out. Waiting for processes to exit.
May 8 00:12:12.456522 systemd-logind[2680]: Removed session 20.
May 8 00:12:17.520726 systemd[1]: Started sshd@29-145.40.69.49:22-139.178.68.195:41088.service - OpenSSH per-connection server daemon (139.178.68.195:41088).
May 8 00:12:17.933945 sshd[10106]: Accepted publickey for core from 139.178.68.195 port 41088 ssh2: RSA SHA256:mECICmEQjvUrGPe+FqrBX44RnDDZ+nT4f0ytVgzKGko
May 8 00:12:17.934899 sshd-session[10106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:12:17.938020 systemd-logind[2680]: New session 21 of user core.
May 8 00:12:17.954976 systemd[1]: Started session-21.scope - Session 21 of User core.
May 8 00:12:18.278353 sshd[10108]: Connection closed by 139.178.68.195 port 41088
May 8 00:12:18.278738 sshd-session[10106]: pam_unix(sshd:session): session closed for user core
May 8 00:12:18.281507 systemd[1]: sshd@29-145.40.69.49:22-139.178.68.195:41088.service: Deactivated successfully.
May 8 00:12:18.283290 systemd[1]: session-21.scope: Deactivated successfully.
May 8 00:12:18.283839 systemd-logind[2680]: Session 21 logged out. Waiting for processes to exit.
May 8 00:12:18.284374 systemd-logind[2680]: Removed session 21.