May 15 00:43:54.174522 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1] May 15 00:43:54.174546 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 14 22:22:56 -00 2025 May 15 00:43:54.174554 kernel: KASLR enabled May 15 00:43:54.174560 kernel: efi: EFI v2.7 by American Megatrends May 15 00:43:54.174566 kernel: efi: ACPI 2.0=0xec070000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea455818 RNG=0xebf00018 MEMRESERVE=0xe4624e18 May 15 00:43:54.174572 kernel: random: crng init done May 15 00:43:54.174579 kernel: secureboot: Secure boot disabled May 15 00:43:54.174585 kernel: esrt: Reserving ESRT space from 0x00000000ea455818 to 0x00000000ea455878. May 15 00:43:54.174593 kernel: ACPI: Early table checksum verification disabled May 15 00:43:54.174599 kernel: ACPI: RSDP 0x00000000EC070000 000024 (v02 Ampere) May 15 00:43:54.174605 kernel: ACPI: XSDT 0x00000000EC060000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013) May 15 00:43:54.174611 kernel: ACPI: FACP 0x00000000EC040000 000114 (v06 Ampere Altra 00000000 INTL 20190509) May 15 00:43:54.174618 kernel: ACPI: DSDT 0x00000000EBFE0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717) May 15 00:43:54.174624 kernel: ACPI: DBG2 0x00000000EC050000 00005C (v00 Ampere Altra 00000000 INTL 20190509) May 15 00:43:54.174633 kernel: ACPI: GTDT 0x00000000EC030000 000110 (v03 Ampere Altra 00000000 INTL 20190509) May 15 00:43:54.174640 kernel: ACPI: SSDT 0x00000000EC020000 00002D (v02 Ampere Altra 00000001 INTL 20190509) May 15 00:43:54.174646 kernel: ACPI: FIDT 0x00000000EBFD0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013) May 15 00:43:54.174653 kernel: ACPI: SPCR 0x00000000EBFC0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F) May 15 00:43:54.174659 kernel: ACPI: BGRT 0x00000000EBFB0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013) May 15 00:43:54.174666 kernel: ACPI: MCFG 0x00000000EBFA0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013) May 15 00:43:54.174672 kernel: ACPI: IORT 0x00000000EBF90000 000610 (v00 Ampere Altra 00000000 AMP. 01000013) May 15 00:43:54.174679 kernel: ACPI: PPTT 0x00000000EBF70000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013) May 15 00:43:54.174686 kernel: ACPI: SLIT 0x00000000EBF60000 00002D (v01 Ampere Altra 00000000 AMP. 01000013) May 15 00:43:54.174692 kernel: ACPI: SRAT 0x00000000EBF50000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013) May 15 00:43:54.174700 kernel: ACPI: APIC 0x00000000EBF80000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013) May 15 00:43:54.174707 kernel: ACPI: PCCT 0x00000000EBF30000 000576 (v02 Ampere Altra 00000003 AMP. 
01000013) May 15 00:43:54.174713 kernel: ACPI: WSMT 0x00000000EBF20000 000028 (v01 ALASKA A M I 01072009 AMI 00010013) May 15 00:43:54.174720 kernel: ACPI: FPDT 0x00000000EBF10000 000044 (v01 ALASKA A M I 01072009 AMI 01000013) May 15 00:43:54.174726 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200 May 15 00:43:54.174733 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff] May 15 00:43:54.174739 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff] May 15 00:43:54.174746 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff] May 15 00:43:54.174753 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff] May 15 00:43:54.174759 kernel: NUMA: NODE_DATA [mem 0x83fdffcb800-0x83fdffd0fff] May 15 00:43:54.174765 kernel: Zone ranges: May 15 00:43:54.174773 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff] May 15 00:43:54.174796 kernel: DMA32 empty May 15 00:43:54.174803 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff] May 15 00:43:54.174809 kernel: Movable zone start for each node May 15 00:43:54.174816 kernel: Early memory node ranges May 15 00:43:54.174826 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff] May 15 00:43:54.174832 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff] May 15 00:43:54.174841 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff] May 15 00:43:54.174848 kernel: node 0: [mem 0x0000000094000000-0x00000000eba26fff] May 15 00:43:54.174855 kernel: node 0: [mem 0x00000000eba27000-0x00000000ebe9dfff] May 15 00:43:54.174862 kernel: node 0: [mem 0x00000000ebe9e000-0x00000000ebe9efff] May 15 00:43:54.174868 kernel: node 0: [mem 0x00000000ebe9f000-0x00000000ebebcfff] May 15 00:43:54.174875 kernel: node 0: [mem 0x00000000ebebd000-0x00000000ebebdfff] May 15 00:43:54.174882 kernel: node 0: [mem 0x00000000ebebe000-0x00000000ebebffff] May 15 00:43:54.174889 kernel: node 0: [mem 0x00000000ebec0000-0x00000000ec0dffff] May 15 00:43:54.174895 kernel: node 0: [mem 0x00000000ec0e0000-0x00000000ec0effff] May 15 00:43:54.174902 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ee53ffff] May 15 00:43:54.174911 kernel: node 0: [mem 0x00000000ee540000-0x00000000f765ffff] May 15 00:43:54.174917 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff] May 15 00:43:54.174924 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff] May 15 00:43:54.174931 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff] May 15 00:43:54.174938 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff] May 15 00:43:54.174945 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff] May 15 00:43:54.174952 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff] May 15 00:43:54.174958 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff] May 15 00:43:54.174965 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff] May 15 00:43:54.174972 kernel: On node 0, zone DMA: 768 pages in unavailable ranges May 15 00:43:54.174979 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges May 15 00:43:54.174987 kernel: psci: probing for conduit method from ACPI. May 15 00:43:54.174994 kernel: psci: PSCIv1.1 detected in firmware. May 15 00:43:54.175001 kernel: psci: Using standard PSCI v0.2 function IDs May 15 00:43:54.175008 kernel: psci: MIGRATE_INFO_TYPE not supported. 
May 15 00:43:54.175015 kernel: psci: SMC Calling Convention v1.2 May 15 00:43:54.175022 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 15 00:43:54.175029 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 May 15 00:43:54.175036 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 May 15 00:43:54.175043 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 May 15 00:43:54.175049 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 May 15 00:43:54.175056 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 May 15 00:43:54.175063 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 May 15 00:43:54.175071 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 May 15 00:43:54.175078 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 May 15 00:43:54.175085 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 May 15 00:43:54.175092 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 May 15 00:43:54.175099 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 May 15 00:43:54.175106 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 May 15 00:43:54.175112 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 May 15 00:43:54.175119 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 May 15 00:43:54.175126 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 May 15 00:43:54.175133 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 May 15 00:43:54.175139 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 May 15 00:43:54.175146 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 May 15 00:43:54.175155 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 May 15 00:43:54.175162 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 May 15 00:43:54.175168 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0 May 15 00:43:54.175175 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 May 15 00:43:54.175182 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0 May 15 00:43:54.175189 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 May 15 00:43:54.175196 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 May 15 00:43:54.175203 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 May 15 00:43:54.175209 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 May 15 00:43:54.175216 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 May 15 00:43:54.175223 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 May 15 00:43:54.175232 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 May 15 00:43:54.175238 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 May 15 00:43:54.175245 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 May 15 00:43:54.175252 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 May 15 00:43:54.175259 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 May 15 00:43:54.175266 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 May 15 00:43:54.175273 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 May 15 00:43:54.175280 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 May 15 00:43:54.175286 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0 May 15 00:43:54.175293 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0 May 15 00:43:54.175300 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 May 15 00:43:54.175307 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 May 15 00:43:54.175315 kernel: ACPI: 
NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 May 15 00:43:54.175322 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 May 15 00:43:54.175329 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 May 15 00:43:54.175336 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 May 15 00:43:54.175343 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 May 15 00:43:54.175349 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0 May 15 00:43:54.175356 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 May 15 00:43:54.175363 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 May 15 00:43:54.175377 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 May 15 00:43:54.175384 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 May 15 00:43:54.175393 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 May 15 00:43:54.175400 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 May 15 00:43:54.175408 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 May 15 00:43:54.175415 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 May 15 00:43:54.175422 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 May 15 00:43:54.175430 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 May 15 00:43:54.175438 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 May 15 00:43:54.175446 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 May 15 00:43:54.175453 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 May 15 00:43:54.175460 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 May 15 00:43:54.175468 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 May 15 00:43:54.175475 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 May 15 00:43:54.175482 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 May 15 00:43:54.175489 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 May 15 00:43:54.175497 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 May 15 00:43:54.175504 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 May 15 00:43:54.175511 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 May 15 00:43:54.175518 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 May 15 00:43:54.175527 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 May 15 00:43:54.175534 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0 May 15 00:43:54.175542 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 May 15 00:43:54.175549 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 May 15 00:43:54.175556 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 May 15 00:43:54.175563 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 May 15 00:43:54.175571 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 May 15 00:43:54.175578 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 May 15 00:43:54.175585 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 May 15 00:43:54.175592 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 May 15 00:43:54.175599 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 15 00:43:54.175608 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 15 00:43:54.175616 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 May 15 00:43:54.175623 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 May 15 00:43:54.175630 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 
[0] 19 [0] 20 [0] 21 [0] 22 [0] 23 May 15 00:43:54.175638 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31 May 15 00:43:54.175645 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39 May 15 00:43:54.175652 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47 May 15 00:43:54.175660 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55 May 15 00:43:54.175667 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63 May 15 00:43:54.175674 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71 May 15 00:43:54.175681 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79 May 15 00:43:54.175690 kernel: Detected PIPT I-cache on CPU0 May 15 00:43:54.175698 kernel: CPU features: detected: GIC system register CPU interface May 15 00:43:54.175705 kernel: CPU features: detected: Virtualization Host Extensions May 15 00:43:54.175713 kernel: CPU features: detected: Hardware dirty bit management May 15 00:43:54.175720 kernel: CPU features: detected: Spectre-v4 May 15 00:43:54.175727 kernel: CPU features: detected: Spectre-BHB May 15 00:43:54.175735 kernel: CPU features: kernel page table isolation forced ON by KASLR May 15 00:43:54.175742 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 15 00:43:54.175749 kernel: CPU features: detected: ARM erratum 1418040 May 15 00:43:54.175757 kernel: CPU features: detected: SSBS not fully self-synchronizing May 15 00:43:54.175764 kernel: alternatives: applying boot alternatives May 15 00:43:54.175773 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9 May 15 00:43:54.175784 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 00:43:54.175792 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 15 00:43:54.175799 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes May 15 00:43:54.175807 kernel: printk: log_buf_len min size: 262144 bytes May 15 00:43:54.175814 kernel: printk: log_buf_len: 1048576 bytes May 15 00:43:54.175821 kernel: printk: early log buf free: 249864(95%) May 15 00:43:54.175829 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear) May 15 00:43:54.175836 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) May 15 00:43:54.175844 kernel: Fallback order for Node 0: 0 May 15 00:43:54.175851 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028 May 15 00:43:54.175860 kernel: Policy zone: Normal May 15 00:43:54.175868 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 00:43:54.175875 kernel: software IO TLB: area num 128. 
May 15 00:43:54.175882 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB) May 15 00:43:54.175890 kernel: Memory: 262923416K/268174336K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 5250920K reserved, 0K cma-reserved) May 15 00:43:54.175897 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1 May 15 00:43:54.175905 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 00:43:54.175912 kernel: rcu: RCU event tracing is enabled. May 15 00:43:54.175920 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80. May 15 00:43:54.175928 kernel: Trampoline variant of Tasks RCU enabled. May 15 00:43:54.175935 kernel: Tracing variant of Tasks RCU enabled. May 15 00:43:54.175942 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 15 00:43:54.175951 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80 May 15 00:43:54.175958 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 15 00:43:54.175966 kernel: GICv3: GIC: Using split EOI/Deactivate mode May 15 00:43:54.175973 kernel: GICv3: 672 SPIs implemented May 15 00:43:54.175981 kernel: GICv3: 0 Extended SPIs implemented May 15 00:43:54.175988 kernel: Root IRQ handler: gic_handle_irq May 15 00:43:54.175995 kernel: GICv3: GICv3 features: 16 PPIs May 15 00:43:54.176002 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000 May 15 00:43:54.176010 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0 May 15 00:43:54.176017 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0 May 15 00:43:54.176024 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0 May 15 00:43:54.176031 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0 May 15 00:43:54.176040 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0 May 15 00:43:54.176047 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0 May 15 00:43:54.176054 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0 May 15 00:43:54.176062 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0 May 15 00:43:54.176069 kernel: ITS [mem 0x100100040000-0x10010005ffff] May 15 00:43:54.176076 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1) May 15 00:43:54.176084 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1) May 15 00:43:54.176091 kernel: ITS [mem 0x100100060000-0x10010007ffff] May 15 00:43:54.176099 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1) May 15 00:43:54.176106 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1) May 15 00:43:54.176113 kernel: ITS [mem 0x100100080000-0x10010009ffff] May 15 00:43:54.176122 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1) May 15 00:43:54.176130 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1) May 15 00:43:54.176137 kernel: ITS [mem 0x1001000a0000-0x1001000bffff] May 15 00:43:54.176145 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1) May 15 00:43:54.176152 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1) May 15 00:43:54.176160 kernel: ITS [mem 0x1001000c0000-0x1001000dffff] May 15 00:43:54.176167 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1) May 15 00:43:54.176174 kernel: ITS@0x00001001000c0000: allocated 32768 
Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1) May 15 00:43:54.176182 kernel: ITS [mem 0x1001000e0000-0x1001000fffff] May 15 00:43:54.176189 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1) May 15 00:43:54.176196 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1) May 15 00:43:54.176205 kernel: ITS [mem 0x100100100000-0x10010011ffff] May 15 00:43:54.176212 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1) May 15 00:43:54.176220 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1) May 15 00:43:54.176227 kernel: ITS [mem 0x100100120000-0x10010013ffff] May 15 00:43:54.176235 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1) May 15 00:43:54.176242 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1) May 15 00:43:54.176249 kernel: GICv3: using LPI property table @0x00000800003e0000 May 15 00:43:54.176257 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000 May 15 00:43:54.176264 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 00:43:54.176271 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.176279 kernel: ACPI GTDT: found 1 memory-mapped timer block(s). May 15 00:43:54.176287 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys). May 15 00:43:54.176295 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 15 00:43:54.176303 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 15 00:43:54.176310 kernel: Console: colour dummy device 80x25 May 15 00:43:54.176318 kernel: printk: console [tty0] enabled May 15 00:43:54.176325 kernel: ACPI: Core revision 20230628 May 15 00:43:54.176333 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 15 00:43:54.176341 kernel: pid_max: default: 81920 minimum: 640 May 15 00:43:54.176348 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 15 00:43:54.176356 kernel: landlock: Up and running. May 15 00:43:54.176364 kernel: SELinux: Initializing. May 15 00:43:54.176372 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 00:43:54.176379 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 00:43:54.176387 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. May 15 00:43:54.176395 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. May 15 00:43:54.176402 kernel: rcu: Hierarchical SRCU implementation. May 15 00:43:54.176410 kernel: rcu: Max phase no-delay instances is 400. 
May 15 00:43:54.176418 kernel: Platform MSI: ITS@0x100100040000 domain created May 15 00:43:54.176425 kernel: Platform MSI: ITS@0x100100060000 domain created May 15 00:43:54.176434 kernel: Platform MSI: ITS@0x100100080000 domain created May 15 00:43:54.176442 kernel: Platform MSI: ITS@0x1001000a0000 domain created May 15 00:43:54.176449 kernel: Platform MSI: ITS@0x1001000c0000 domain created May 15 00:43:54.176457 kernel: Platform MSI: ITS@0x1001000e0000 domain created May 15 00:43:54.176464 kernel: Platform MSI: ITS@0x100100100000 domain created May 15 00:43:54.176471 kernel: Platform MSI: ITS@0x100100120000 domain created May 15 00:43:54.176479 kernel: PCI/MSI: ITS@0x100100040000 domain created May 15 00:43:54.176486 kernel: PCI/MSI: ITS@0x100100060000 domain created May 15 00:43:54.176494 kernel: PCI/MSI: ITS@0x100100080000 domain created May 15 00:43:54.176502 kernel: PCI/MSI: ITS@0x1001000a0000 domain created May 15 00:43:54.176510 kernel: PCI/MSI: ITS@0x1001000c0000 domain created May 15 00:43:54.176517 kernel: PCI/MSI: ITS@0x1001000e0000 domain created May 15 00:43:54.176524 kernel: PCI/MSI: ITS@0x100100100000 domain created May 15 00:43:54.176532 kernel: PCI/MSI: ITS@0x100100120000 domain created May 15 00:43:54.176539 kernel: Remapping and enabling EFI services. May 15 00:43:54.176547 kernel: smp: Bringing up secondary CPUs ... May 15 00:43:54.176554 kernel: Detected PIPT I-cache on CPU1 May 15 00:43:54.176562 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000 May 15 00:43:54.176569 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000 May 15 00:43:54.176578 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.176586 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1] May 15 00:43:54.176593 kernel: Detected PIPT I-cache on CPU2 May 15 00:43:54.176601 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000 May 15 00:43:54.176609 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000 May 15 00:43:54.176616 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.176623 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1] May 15 00:43:54.176631 kernel: Detected PIPT I-cache on CPU3 May 15 00:43:54.176638 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000 May 15 00:43:54.176647 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000 May 15 00:43:54.176655 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.176662 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1] May 15 00:43:54.176670 kernel: Detected PIPT I-cache on CPU4 May 15 00:43:54.176677 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000 May 15 00:43:54.176685 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000 May 15 00:43:54.176692 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.176700 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1] May 15 00:43:54.176707 kernel: Detected PIPT I-cache on CPU5 May 15 00:43:54.176714 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000 May 15 00:43:54.176723 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000 May 15 00:43:54.176731 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.176738 kernel: CPU5: Booted secondary processor 0x0000180000 
[0x413fd0c1] May 15 00:43:54.176746 kernel: Detected PIPT I-cache on CPU6 May 15 00:43:54.176753 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000 May 15 00:43:54.176761 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000 May 15 00:43:54.176768 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.176775 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1] May 15 00:43:54.176785 kernel: Detected PIPT I-cache on CPU7 May 15 00:43:54.176794 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000 May 15 00:43:54.176802 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000 May 15 00:43:54.176809 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.176817 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1] May 15 00:43:54.176824 kernel: Detected PIPT I-cache on CPU8 May 15 00:43:54.176831 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000 May 15 00:43:54.176839 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000 May 15 00:43:54.176846 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.176854 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1] May 15 00:43:54.176861 kernel: Detected PIPT I-cache on CPU9 May 15 00:43:54.176870 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000 May 15 00:43:54.176877 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000 May 15 00:43:54.176885 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.176892 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1] May 15 00:43:54.176900 kernel: Detected PIPT I-cache on CPU10 May 15 00:43:54.176907 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000 May 15 00:43:54.176915 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000 May 15 00:43:54.176922 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.176929 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1] May 15 00:43:54.176938 kernel: Detected PIPT I-cache on CPU11 May 15 00:43:54.176946 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000 May 15 00:43:54.176953 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000 May 15 00:43:54.176961 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.176968 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1] May 15 00:43:54.176976 kernel: Detected PIPT I-cache on CPU12 May 15 00:43:54.176983 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000 May 15 00:43:54.176991 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000 May 15 00:43:54.176998 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177006 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1] May 15 00:43:54.177015 kernel: Detected PIPT I-cache on CPU13 May 15 00:43:54.177022 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000 May 15 00:43:54.177030 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000 May 15 00:43:54.177038 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177046 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1] 
May 15 00:43:54.177053 kernel: Detected PIPT I-cache on CPU14 May 15 00:43:54.177061 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000 May 15 00:43:54.177068 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000 May 15 00:43:54.177076 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177085 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1] May 15 00:43:54.177092 kernel: Detected PIPT I-cache on CPU15 May 15 00:43:54.177100 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000 May 15 00:43:54.177107 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000 May 15 00:43:54.177115 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177122 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1] May 15 00:43:54.177130 kernel: Detected PIPT I-cache on CPU16 May 15 00:43:54.177137 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000 May 15 00:43:54.177145 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000 May 15 00:43:54.177162 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177171 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1] May 15 00:43:54.177179 kernel: Detected PIPT I-cache on CPU17 May 15 00:43:54.177186 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000 May 15 00:43:54.177194 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000 May 15 00:43:54.177202 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177210 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1] May 15 00:43:54.177217 kernel: Detected PIPT I-cache on CPU18 May 15 00:43:54.177225 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000 May 15 00:43:54.177233 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000 May 15 00:43:54.177243 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177250 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1] May 15 00:43:54.177258 kernel: Detected PIPT I-cache on CPU19 May 15 00:43:54.177266 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 May 15 00:43:54.177274 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000 May 15 00:43:54.177282 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177291 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] May 15 00:43:54.177299 kernel: Detected PIPT I-cache on CPU20 May 15 00:43:54.177307 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 May 15 00:43:54.177315 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000 May 15 00:43:54.177323 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177331 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] May 15 00:43:54.177338 kernel: Detected PIPT I-cache on CPU21 May 15 00:43:54.177346 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 May 15 00:43:54.177354 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000 May 15 00:43:54.177364 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177371 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] May 
15 00:43:54.177379 kernel: Detected PIPT I-cache on CPU22 May 15 00:43:54.177387 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 May 15 00:43:54.177395 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000 May 15 00:43:54.177403 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177411 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] May 15 00:43:54.177418 kernel: Detected PIPT I-cache on CPU23 May 15 00:43:54.177426 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 May 15 00:43:54.177435 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000 May 15 00:43:54.177443 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177451 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] May 15 00:43:54.177459 kernel: Detected PIPT I-cache on CPU24 May 15 00:43:54.177467 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 May 15 00:43:54.177475 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000 May 15 00:43:54.177484 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177493 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] May 15 00:43:54.177501 kernel: Detected PIPT I-cache on CPU25 May 15 00:43:54.177509 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000 May 15 00:43:54.177519 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000 May 15 00:43:54.177527 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177535 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] May 15 00:43:54.177542 kernel: Detected PIPT I-cache on CPU26 May 15 00:43:54.177550 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 May 15 00:43:54.177558 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000 May 15 00:43:54.177566 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177574 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] May 15 00:43:54.177581 kernel: Detected PIPT I-cache on CPU27 May 15 00:43:54.177591 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 May 15 00:43:54.177599 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000 May 15 00:43:54.177607 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177614 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] May 15 00:43:54.177622 kernel: Detected PIPT I-cache on CPU28 May 15 00:43:54.177630 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 May 15 00:43:54.177638 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 May 15 00:43:54.177646 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177654 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] May 15 00:43:54.177662 kernel: Detected PIPT I-cache on CPU29 May 15 00:43:54.177671 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 May 15 00:43:54.177679 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 May 15 00:43:54.177687 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177694 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] 
May 15 00:43:54.177702 kernel: Detected PIPT I-cache on CPU30 May 15 00:43:54.177710 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 May 15 00:43:54.177718 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 May 15 00:43:54.177726 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177734 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] May 15 00:43:54.177743 kernel: Detected PIPT I-cache on CPU31 May 15 00:43:54.177751 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 May 15 00:43:54.177759 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 May 15 00:43:54.177767 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177775 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] May 15 00:43:54.177786 kernel: Detected PIPT I-cache on CPU32 May 15 00:43:54.177794 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 May 15 00:43:54.177802 kernel: GICv3: CPU32: using allocated LPI pending table @0x00000800009f0000 May 15 00:43:54.177810 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177817 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] May 15 00:43:54.177827 kernel: Detected PIPT I-cache on CPU33 May 15 00:43:54.177835 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 May 15 00:43:54.177842 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 May 15 00:43:54.177850 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177858 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] May 15 00:43:54.177866 kernel: Detected PIPT I-cache on CPU34 May 15 00:43:54.177874 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 May 15 00:43:54.177882 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 May 15 00:43:54.177889 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177899 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] May 15 00:43:54.177907 kernel: Detected PIPT I-cache on CPU35 May 15 00:43:54.177914 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 May 15 00:43:54.177922 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 May 15 00:43:54.177930 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177938 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] May 15 00:43:54.177946 kernel: Detected PIPT I-cache on CPU36 May 15 00:43:54.177954 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 May 15 00:43:54.177962 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 May 15 00:43:54.177971 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.177979 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] May 15 00:43:54.177986 kernel: Detected PIPT I-cache on CPU37 May 15 00:43:54.177994 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 May 15 00:43:54.178002 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 May 15 00:43:54.178010 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178018 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] 
May 15 00:43:54.178026 kernel: Detected PIPT I-cache on CPU38 May 15 00:43:54.178035 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 May 15 00:43:54.178043 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 May 15 00:43:54.178052 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178060 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] May 15 00:43:54.178068 kernel: Detected PIPT I-cache on CPU39 May 15 00:43:54.178076 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 May 15 00:43:54.178084 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 May 15 00:43:54.178092 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178100 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] May 15 00:43:54.178108 kernel: Detected PIPT I-cache on CPU40 May 15 00:43:54.178117 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 May 15 00:43:54.178125 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 May 15 00:43:54.178133 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178141 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] May 15 00:43:54.178148 kernel: Detected PIPT I-cache on CPU41 May 15 00:43:54.178156 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 May 15 00:43:54.178164 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000a80000 May 15 00:43:54.178172 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178180 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] May 15 00:43:54.178189 kernel: Detected PIPT I-cache on CPU42 May 15 00:43:54.178197 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 May 15 00:43:54.178205 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 May 15 00:43:54.178213 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178221 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] May 15 00:43:54.178228 kernel: Detected PIPT I-cache on CPU43 May 15 00:43:54.178236 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 May 15 00:43:54.178244 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 May 15 00:43:54.178252 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178260 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] May 15 00:43:54.178269 kernel: Detected PIPT I-cache on CPU44 May 15 00:43:54.178277 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 May 15 00:43:54.178285 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 May 15 00:43:54.178293 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178300 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] May 15 00:43:54.178308 kernel: Detected PIPT I-cache on CPU45 May 15 00:43:54.178316 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 May 15 00:43:54.178324 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 May 15 00:43:54.178332 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178341 kernel: CPU45: Booted secondary processor 0x0000180100 
[0x413fd0c1] May 15 00:43:54.178349 kernel: Detected PIPT I-cache on CPU46 May 15 00:43:54.178357 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 May 15 00:43:54.178365 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 May 15 00:43:54.178372 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178380 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] May 15 00:43:54.178388 kernel: Detected PIPT I-cache on CPU47 May 15 00:43:54.178396 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 May 15 00:43:54.178404 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 May 15 00:43:54.178412 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178421 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] May 15 00:43:54.178429 kernel: Detected PIPT I-cache on CPU48 May 15 00:43:54.178437 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 May 15 00:43:54.178445 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 May 15 00:43:54.178453 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178460 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] May 15 00:43:54.178468 kernel: Detected PIPT I-cache on CPU49 May 15 00:43:54.178476 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 May 15 00:43:54.178484 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 May 15 00:43:54.178493 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178501 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] May 15 00:43:54.178510 kernel: Detected PIPT I-cache on CPU50 May 15 00:43:54.178518 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 May 15 00:43:54.178526 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000b10000 May 15 00:43:54.178534 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178542 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] May 15 00:43:54.178549 kernel: Detected PIPT I-cache on CPU51 May 15 00:43:54.178557 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 May 15 00:43:54.178565 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 May 15 00:43:54.178574 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178582 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] May 15 00:43:54.178590 kernel: Detected PIPT I-cache on CPU52 May 15 00:43:54.178598 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 May 15 00:43:54.178606 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 May 15 00:43:54.178614 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178622 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] May 15 00:43:54.178629 kernel: Detected PIPT I-cache on CPU53 May 15 00:43:54.178637 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 May 15 00:43:54.178647 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 May 15 00:43:54.178655 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178662 kernel: CPU53: Booted secondary processor 
0x0000200100 [0x413fd0c1] May 15 00:43:54.178670 kernel: Detected PIPT I-cache on CPU54 May 15 00:43:54.178678 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 May 15 00:43:54.178686 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 May 15 00:43:54.178694 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178702 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] May 15 00:43:54.178710 kernel: Detected PIPT I-cache on CPU55 May 15 00:43:54.178718 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 May 15 00:43:54.178727 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 May 15 00:43:54.178735 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178743 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] May 15 00:43:54.178750 kernel: Detected PIPT I-cache on CPU56 May 15 00:43:54.178758 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 May 15 00:43:54.178766 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 May 15 00:43:54.178774 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178787 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] May 15 00:43:54.178795 kernel: Detected PIPT I-cache on CPU57 May 15 00:43:54.178805 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 May 15 00:43:54.178813 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 May 15 00:43:54.178821 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178828 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] May 15 00:43:54.178836 kernel: Detected PIPT I-cache on CPU58 May 15 00:43:54.178844 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 May 15 00:43:54.178852 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 May 15 00:43:54.178860 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178868 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] May 15 00:43:54.178877 kernel: Detected PIPT I-cache on CPU59 May 15 00:43:54.178885 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 May 15 00:43:54.178893 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000ba0000 May 15 00:43:54.178901 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178909 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] May 15 00:43:54.178916 kernel: Detected PIPT I-cache on CPU60 May 15 00:43:54.178924 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 May 15 00:43:54.178932 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 May 15 00:43:54.178940 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178948 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] May 15 00:43:54.178957 kernel: Detected PIPT I-cache on CPU61 May 15 00:43:54.178965 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 May 15 00:43:54.178973 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 May 15 00:43:54.178981 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.178989 kernel: CPU61: Booted secondary processor 
0x00001b0100 [0x413fd0c1] May 15 00:43:54.178997 kernel: Detected PIPT I-cache on CPU62 May 15 00:43:54.179005 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 May 15 00:43:54.179013 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 May 15 00:43:54.179021 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179030 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] May 15 00:43:54.179038 kernel: Detected PIPT I-cache on CPU63 May 15 00:43:54.179046 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 May 15 00:43:54.179053 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 May 15 00:43:54.179061 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179069 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] May 15 00:43:54.179077 kernel: Detected PIPT I-cache on CPU64 May 15 00:43:54.179085 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 May 15 00:43:54.179093 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 May 15 00:43:54.179101 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179110 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] May 15 00:43:54.179118 kernel: Detected PIPT I-cache on CPU65 May 15 00:43:54.179126 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 May 15 00:43:54.179133 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 May 15 00:43:54.179141 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179149 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] May 15 00:43:54.179157 kernel: Detected PIPT I-cache on CPU66 May 15 00:43:54.179164 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 May 15 00:43:54.179172 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 May 15 00:43:54.179182 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179189 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] May 15 00:43:54.179197 kernel: Detected PIPT I-cache on CPU67 May 15 00:43:54.179205 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 May 15 00:43:54.179213 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 May 15 00:43:54.179221 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179229 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] May 15 00:43:54.179236 kernel: Detected PIPT I-cache on CPU68 May 15 00:43:54.179244 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 May 15 00:43:54.179252 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000c30000 May 15 00:43:54.179261 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179269 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] May 15 00:43:54.179277 kernel: Detected PIPT I-cache on CPU69 May 15 00:43:54.179285 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 May 15 00:43:54.179293 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 May 15 00:43:54.179301 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179309 kernel: CPU69: Booted secondary 
processor 0x0000230100 [0x413fd0c1] May 15 00:43:54.179316 kernel: Detected PIPT I-cache on CPU70 May 15 00:43:54.179324 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 May 15 00:43:54.179333 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 May 15 00:43:54.179341 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179349 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] May 15 00:43:54.179357 kernel: Detected PIPT I-cache on CPU71 May 15 00:43:54.179365 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 May 15 00:43:54.179373 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 May 15 00:43:54.179381 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179388 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] May 15 00:43:54.179396 kernel: Detected PIPT I-cache on CPU72 May 15 00:43:54.179404 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 May 15 00:43:54.179413 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 May 15 00:43:54.179421 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179429 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1] May 15 00:43:54.179437 kernel: Detected PIPT I-cache on CPU73 May 15 00:43:54.179445 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 May 15 00:43:54.179453 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 May 15 00:43:54.179461 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179469 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] May 15 00:43:54.179476 kernel: Detected PIPT I-cache on CPU74 May 15 00:43:54.179486 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 May 15 00:43:54.179494 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 May 15 00:43:54.179502 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179509 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] May 15 00:43:54.179517 kernel: Detected PIPT I-cache on CPU75 May 15 00:43:54.179525 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 May 15 00:43:54.179533 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 May 15 00:43:54.179541 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179549 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] May 15 00:43:54.179557 kernel: Detected PIPT I-cache on CPU76 May 15 00:43:54.179566 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 May 15 00:43:54.179574 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 May 15 00:43:54.179582 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179590 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] May 15 00:43:54.179597 kernel: Detected PIPT I-cache on CPU77 May 15 00:43:54.179605 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 May 15 00:43:54.179613 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 May 15 00:43:54.179621 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179629 kernel: CPU77: Booted secondary 
processor 0x0000050100 [0x413fd0c1] May 15 00:43:54.179638 kernel: Detected PIPT I-cache on CPU78 May 15 00:43:54.179646 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 May 15 00:43:54.179654 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 May 15 00:43:54.179662 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179670 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] May 15 00:43:54.179678 kernel: Detected PIPT I-cache on CPU79 May 15 00:43:54.179685 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 May 15 00:43:54.179693 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 May 15 00:43:54.179701 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:43:54.179711 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] May 15 00:43:54.179719 kernel: smp: Brought up 1 node, 80 CPUs May 15 00:43:54.179727 kernel: SMP: Total of 80 processors activated. May 15 00:43:54.179734 kernel: CPU features: detected: 32-bit EL0 Support May 15 00:43:54.179742 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 15 00:43:54.179750 kernel: CPU features: detected: Common not Private translations May 15 00:43:54.179758 kernel: CPU features: detected: CRC32 instructions May 15 00:43:54.179766 kernel: CPU features: detected: Enhanced Virtualization Traps May 15 00:43:54.179774 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 15 00:43:54.179837 kernel: CPU features: detected: LSE atomic instructions May 15 00:43:54.179845 kernel: CPU features: detected: Privileged Access Never May 15 00:43:54.179853 kernel: CPU features: detected: RAS Extension Support May 15 00:43:54.179861 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 15 00:43:54.179869 kernel: CPU: All CPU(s) started at EL2 May 15 00:43:54.179877 kernel: alternatives: applying system-wide alternatives May 15 00:43:54.179885 kernel: devtmpfs: initialized May 15 00:43:54.179893 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 00:43:54.179901 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 15 00:43:54.179909 kernel: pinctrl core: initialized pinctrl subsystem May 15 00:43:54.179919 kernel: SMBIOS 3.4.0 present. May 15 00:43:54.179927 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 May 15 00:43:54.179935 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 00:43:54.179943 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations May 15 00:43:54.179950 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 15 00:43:54.179958 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 15 00:43:54.179967 kernel: audit: initializing netlink subsys (disabled) May 15 00:43:54.179974 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 May 15 00:43:54.179984 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 00:43:54.179992 kernel: cpuidle: using governor menu May 15 00:43:54.179999 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
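The CPU feature lines above (LSE atomics, CRC32, SSBS, and so on) are also exposed to userspace through the aarch64 HWCAP auxiliary vector, so they can be cross-checked once the system is up. A minimal sketch, assuming an aarch64 Linux system with the usual glibc and kernel headers; this program is an illustration and not part of the boot log:

#include <stdio.h>
#include <sys/auxv.h>   /* getauxval(), AT_HWCAP */
#include <asm/hwcap.h>  /* aarch64 HWCAP_* bits */

int main(void)
{
    unsigned long hwcap = getauxval(AT_HWCAP);

    /* "LSE atomic instructions" in the log corresponds to HWCAP_ATOMICS. */
    printf("LSE atomics: %s\n", (hwcap & HWCAP_ATOMICS) ? "yes" : "no");
    /* "CRC32 instructions" corresponds to HWCAP_CRC32. */
    printf("CRC32:       %s\n", (hwcap & HWCAP_CRC32) ? "yes" : "no");
    /* "Speculative Store Bypassing Safe (SSBS)" corresponds to HWCAP_SSBS. */
    printf("SSBS:        %s\n", (hwcap & HWCAP_SSBS) ? "yes" : "no");
    return 0;
}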
May 15 00:43:54.180007 kernel: ASID allocator initialised with 32768 entries May 15 00:43:54.180015 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 00:43:54.180023 kernel: Serial: AMBA PL011 UART driver May 15 00:43:54.180031 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 15 00:43:54.180039 kernel: Modules: 0 pages in range for non-PLT usage May 15 00:43:54.180046 kernel: Modules: 509264 pages in range for PLT usage May 15 00:43:54.180056 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 15 00:43:54.180064 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 15 00:43:54.180072 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 15 00:43:54.180080 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 15 00:43:54.180088 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 00:43:54.180096 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 15 00:43:54.180104 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 15 00:43:54.180112 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 15 00:43:54.180120 kernel: ACPI: Added _OSI(Module Device) May 15 00:43:54.180129 kernel: ACPI: Added _OSI(Processor Device) May 15 00:43:54.180136 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 00:43:54.180144 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 00:43:54.180152 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded May 15 00:43:54.180160 kernel: ACPI: Interpreter enabled May 15 00:43:54.180168 kernel: ACPI: Using GIC for interrupt routing May 15 00:43:54.180176 kernel: ACPI: MCFG table detected, 8 entries May 15 00:43:54.180184 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 May 15 00:43:54.180192 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 May 15 00:43:54.180199 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 May 15 00:43:54.180209 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 May 15 00:43:54.180216 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 May 15 00:43:54.180224 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 May 15 00:43:54.180232 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 May 15 00:43:54.180240 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 May 15 00:43:54.180248 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA May 15 00:43:54.180256 kernel: printk: console [ttyAMA0] enabled May 15 00:43:54.180264 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA May 15 00:43:54.180273 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) May 15 00:43:54.180410 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 00:43:54.180483 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] May 15 00:43:54.180548 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] May 15 00:43:54.180611 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 15 00:43:54.180674 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 May 15 00:43:54.180735 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 
00-ff] May 15 00:43:54.180748 kernel: PCI host bridge to bus 000d:00 May 15 00:43:54.180831 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] May 15 00:43:54.180892 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] May 15 00:43:54.180952 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] May 15 00:43:54.181033 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 May 15 00:43:54.181111 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 May 15 00:43:54.181183 kernel: pci 000d:00:01.0: enabling Extended Tags May 15 00:43:54.181250 kernel: pci 000d:00:01.0: supports D1 D2 May 15 00:43:54.181318 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot May 15 00:43:54.181392 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 May 15 00:43:54.181460 kernel: pci 000d:00:02.0: supports D1 D2 May 15 00:43:54.181527 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot May 15 00:43:54.181602 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 May 15 00:43:54.181671 kernel: pci 000d:00:03.0: supports D1 D2 May 15 00:43:54.181741 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot May 15 00:43:54.181825 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 May 15 00:43:54.181892 kernel: pci 000d:00:04.0: supports D1 D2 May 15 00:43:54.181960 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot May 15 00:43:54.181970 kernel: acpiphp: Slot [1] registered May 15 00:43:54.181979 kernel: acpiphp: Slot [2] registered May 15 00:43:54.181989 kernel: acpiphp: Slot [3] registered May 15 00:43:54.181997 kernel: acpiphp: Slot [4] registered May 15 00:43:54.182056 kernel: pci_bus 000d:00: on NUMA node 0 May 15 00:43:54.182124 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 15 00:43:54.182190 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 15 00:43:54.182258 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 15 00:43:54.182326 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 15 00:43:54.182392 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 15 00:43:54.182461 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 15 00:43:54.182528 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 15 00:43:54.182600 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 15 00:43:54.182667 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 15 00:43:54.182735 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 15 00:43:54.182805 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 15 00:43:54.182874 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 15 00:43:54.182943 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] May 15 00:43:54.183008 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] May 15 00:43:54.183076 kernel: pci 000d:00:02.0: 
BAR 14: assigned [mem 0x50200000-0x503fffff] May 15 00:43:54.183142 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] May 15 00:43:54.183208 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] May 15 00:43:54.183273 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] May 15 00:43:54.183340 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] May 15 00:43:54.183410 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] May 15 00:43:54.183476 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.183542 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.183609 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.183676 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.183743 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.183813 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.183883 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.183953 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.184020 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.184085 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.184151 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.184216 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.184283 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.184348 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.184416 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.184485 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.184551 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] May 15 00:43:54.184618 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] May 15 00:43:54.184684 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] May 15 00:43:54.184752 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] May 15 00:43:54.184821 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] May 15 00:43:54.184889 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] May 15 00:43:54.184958 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] May 15 00:43:54.185026 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] May 15 00:43:54.185093 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] May 15 00:43:54.185159 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] May 15 00:43:54.185226 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] May 15 00:43:54.185292 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] May 15 00:43:54.185356 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] May 15 00:43:54.185416 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] May 15 00:43:54.185488 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] May 15 00:43:54.185550 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] May 15 00:43:54.185620 kernel: pci_bus 000d:02: resource 1 [mem 
0x50200000-0x503fffff] May 15 00:43:54.185683 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] May 15 00:43:54.185762 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] May 15 00:43:54.185829 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] May 15 00:43:54.185898 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] May 15 00:43:54.185960 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] May 15 00:43:54.185971 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) May 15 00:43:54.186043 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 00:43:54.186111 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] May 15 00:43:54.186176 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] May 15 00:43:54.186239 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 15 00:43:54.186305 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 May 15 00:43:54.186368 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] May 15 00:43:54.186379 kernel: PCI host bridge to bus 0000:00 May 15 00:43:54.186447 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] May 15 00:43:54.186508 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] May 15 00:43:54.186567 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 00:43:54.186642 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 May 15 00:43:54.186717 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 May 15 00:43:54.186789 kernel: pci 0000:00:01.0: enabling Extended Tags May 15 00:43:54.186857 kernel: pci 0000:00:01.0: supports D1 D2 May 15 00:43:54.186924 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot May 15 00:43:54.187000 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 May 15 00:43:54.187067 kernel: pci 0000:00:02.0: supports D1 D2 May 15 00:43:54.187133 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot May 15 00:43:54.187207 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 May 15 00:43:54.187274 kernel: pci 0000:00:03.0: supports D1 D2 May 15 00:43:54.187339 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot May 15 00:43:54.187412 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 May 15 00:43:54.187481 kernel: pci 0000:00:04.0: supports D1 D2 May 15 00:43:54.187547 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot May 15 00:43:54.187558 kernel: acpiphp: Slot [1-1] registered May 15 00:43:54.187566 kernel: acpiphp: Slot [2-1] registered May 15 00:43:54.187574 kernel: acpiphp: Slot [3-1] registered May 15 00:43:54.187581 kernel: acpiphp: Slot [4-1] registered May 15 00:43:54.187639 kernel: pci_bus 0000:00: on NUMA node 0 May 15 00:43:54.187705 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 15 00:43:54.187776 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 15 00:43:54.187845 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 15 00:43:54.187912 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 15 00:43:54.187978 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 15 00:43:54.188044 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 15 00:43:54.188110 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 15 00:43:54.188176 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 15 00:43:54.188244 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 15 00:43:54.188312 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 15 00:43:54.188379 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 15 00:43:54.188444 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 15 00:43:54.188511 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] May 15 00:43:54.188578 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 15 00:43:54.188644 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] May 15 00:43:54.188712 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 15 00:43:54.188782 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] May 15 00:43:54.188850 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 15 00:43:54.188916 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] May 15 00:43:54.188983 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 15 00:43:54.189048 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.189114 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.189182 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.189248 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.189313 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.189380 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.189446 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.189510 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.189577 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.189642 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.189707 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.189775 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.189849 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.189915 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.189981 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.190050 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.190117 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 15 00:43:54.190185 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] May 15 00:43:54.190250 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] 
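The memory windows programmed into the bridges here, and the recurring "BAR 13: no space for [io size 0x1000]" lines (consistent with the root bus resources above listing only memory windows and no legacy I/O aperture), are visible after boot in /proc/iomem. A small sketch that prints the PCI bridge apertures, assuming it is run as root so the kernel does not mask the addresses; illustration only, not from this log:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Bridge windows appear in /proc/iomem as "PCI Bus <domain:bus>" entries
     * nested under the host bridge ranges assigned in the log above. */
    FILE *f = fopen("/proc/iomem", "r");
    if (!f) {
        perror("fopen /proc/iomem");
        return 1;
    }
    char line[256];
    while (fgets(line, sizeof(line), f)) {
        if (strstr(line, "PCI Bus"))
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}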
May 15 00:43:54.190319 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] May 15 00:43:54.190384 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] May 15 00:43:54.190451 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 15 00:43:54.190519 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] May 15 00:43:54.190588 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] May 15 00:43:54.190656 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 15 00:43:54.190722 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] May 15 00:43:54.190792 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] May 15 00:43:54.190860 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 15 00:43:54.190925 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] May 15 00:43:54.190984 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] May 15 00:43:54.191055 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] May 15 00:43:54.191118 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 15 00:43:54.191187 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] May 15 00:43:54.191249 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 15 00:43:54.191325 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] May 15 00:43:54.191391 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 15 00:43:54.191459 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] May 15 00:43:54.191521 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 15 00:43:54.191532 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) May 15 00:43:54.191604 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 00:43:54.191668 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] May 15 00:43:54.191736 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] May 15 00:43:54.191805 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 15 00:43:54.191869 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 May 15 00:43:54.191933 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] May 15 00:43:54.191943 kernel: PCI host bridge to bus 0005:00 May 15 00:43:54.192009 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] May 15 00:43:54.192070 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] May 15 00:43:54.192133 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] May 15 00:43:54.192212 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 May 15 00:43:54.192288 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 May 15 00:43:54.192357 kernel: pci 0005:00:01.0: supports D1 D2 May 15 00:43:54.192423 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot May 15 00:43:54.192498 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 May 15 00:43:54.192567 kernel: pci 0005:00:03.0: supports D1 D2 May 15 00:43:54.192637 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot May 15 00:43:54.192710 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 May 15 00:43:54.192776 
kernel: pci 0005:00:05.0: supports D1 D2 May 15 00:43:54.192849 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot May 15 00:43:54.192922 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 May 15 00:43:54.192989 kernel: pci 0005:00:07.0: supports D1 D2 May 15 00:43:54.193058 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot May 15 00:43:54.193068 kernel: acpiphp: Slot [1-2] registered May 15 00:43:54.193076 kernel: acpiphp: Slot [2-2] registered May 15 00:43:54.193150 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 May 15 00:43:54.193220 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] May 15 00:43:54.193287 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] May 15 00:43:54.193416 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 May 15 00:43:54.193489 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] May 15 00:43:54.193561 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] May 15 00:43:54.193622 kernel: pci_bus 0005:00: on NUMA node 0 May 15 00:43:54.193693 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 15 00:43:54.193761 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 15 00:43:54.193836 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 15 00:43:54.193908 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 15 00:43:54.193974 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 15 00:43:54.194046 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 15 00:43:54.194113 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 15 00:43:54.194181 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 15 00:43:54.194250 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 15 00:43:54.194317 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 15 00:43:54.194384 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 15 00:43:54.194453 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 May 15 00:43:54.194520 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] May 15 00:43:54.194586 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 15 00:43:54.194653 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] May 15 00:43:54.194719 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 15 00:43:54.194791 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] May 15 00:43:54.194859 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 15 00:43:54.194924 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] May 15 00:43:54.194993 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 15 00:43:54.195059 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] 
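The BARs of the NVMe controllers enumerated above (0005:03:00.0 and 0005:04:00.0) can likewise be read back through sysfs once the system is running. A sketch assuming the standard /sys/bus/pci/devices/<address>/resource layout; illustration only, not from this log:

#include <stdio.h>

int main(void)
{
    /* Device address taken from the log above; resources 0-5 are the BARs. */
    const char *path = "/sys/bus/pci/devices/0005:03:00.0/resource";
    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }
    unsigned long long start, end, flags;
    int res = 0;
    /* Each line of the resource file is "start end flags" in hex. */
    while (fscanf(f, "%llx %llx %llx", &start, &end, &flags) == 3) {
        if (start || end)
            printf("resource %d: 0x%llx-0x%llx (flags 0x%llx)\n",
                   res, start, end, flags);
        res++;
    }
    fclose(f);
    return 0;
}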
May 15 00:43:54.195128 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.195194 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.195261 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.195328 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.195395 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.195462 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.195530 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.195597 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.195663 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.195730 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.195799 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.195866 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.195933 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.196000 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.196067 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.196137 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] May 15 00:43:54.196204 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] May 15 00:43:54.196269 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 15 00:43:54.196336 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] May 15 00:43:54.196402 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] May 15 00:43:54.196469 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 15 00:43:54.196541 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] May 15 00:43:54.196610 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] May 15 00:43:54.196677 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] May 15 00:43:54.196742 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] May 15 00:43:54.196813 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 15 00:43:54.196884 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] May 15 00:43:54.196951 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] May 15 00:43:54.197022 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] May 15 00:43:54.197088 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] May 15 00:43:54.197155 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 15 00:43:54.197217 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] May 15 00:43:54.197277 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] May 15 00:43:54.197348 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] May 15 00:43:54.197410 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 15 00:43:54.197491 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] May 15 00:43:54.197554 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 15 00:43:54.197646 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] May 15 00:43:54.197711 kernel: pci_bus 0005:03: resource 2 
[mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 15 00:43:54.197789 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] May 15 00:43:54.197856 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 15 00:43:54.197867 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) May 15 00:43:54.197941 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 00:43:54.198007 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] May 15 00:43:54.198075 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] May 15 00:43:54.198144 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 15 00:43:54.198207 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 May 15 00:43:54.198274 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] May 15 00:43:54.198285 kernel: PCI host bridge to bus 0003:00 May 15 00:43:54.198350 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] May 15 00:43:54.198410 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] May 15 00:43:54.198470 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] May 15 00:43:54.198545 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 May 15 00:43:54.198627 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 May 15 00:43:54.198701 kernel: pci 0003:00:01.0: supports D1 D2 May 15 00:43:54.198770 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot May 15 00:43:54.198853 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 May 15 00:43:54.198922 kernel: pci 0003:00:03.0: supports D1 D2 May 15 00:43:54.198988 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot May 15 00:43:54.199063 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 May 15 00:43:54.199135 kernel: pci 0003:00:05.0: supports D1 D2 May 15 00:43:54.199201 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot May 15 00:43:54.199211 kernel: acpiphp: Slot [1-3] registered May 15 00:43:54.199219 kernel: acpiphp: Slot [2-3] registered May 15 00:43:54.199294 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 May 15 00:43:54.199363 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] May 15 00:43:54.199433 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] May 15 00:43:54.199502 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] May 15 00:43:54.199572 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold May 15 00:43:54.199641 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] May 15 00:43:54.199709 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) May 15 00:43:54.199781 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] May 15 00:43:54.199852 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) May 15 00:43:54.199921 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) May 15 00:43:54.199997 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 May 15 00:43:54.200067 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] May 15 00:43:54.200136 kernel: 
pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] May 15 00:43:54.200204 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] May 15 00:43:54.200272 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold May 15 00:43:54.200341 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] May 15 00:43:54.200409 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) May 15 00:43:54.200479 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] May 15 00:43:54.200549 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) May 15 00:43:54.200610 kernel: pci_bus 0003:00: on NUMA node 0 May 15 00:43:54.200678 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 15 00:43:54.200744 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 15 00:43:54.200815 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 15 00:43:54.200883 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 15 00:43:54.200951 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 15 00:43:54.201020 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 15 00:43:54.201088 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 May 15 00:43:54.201155 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 May 15 00:43:54.201222 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] May 15 00:43:54.201299 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] May 15 00:43:54.201367 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] May 15 00:43:54.201438 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] May 15 00:43:54.201505 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] May 15 00:43:54.201575 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] May 15 00:43:54.201641 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.201708 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.201776 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.202063 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.202131 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.202196 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.202261 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.202330 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.202395 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.202459 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.202524 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.202589 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 
00:43:54.202653 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] May 15 00:43:54.202717 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] May 15 00:43:54.202785 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] May 15 00:43:54.202855 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] May 15 00:43:54.202922 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] May 15 00:43:54.202988 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] May 15 00:43:54.203057 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] May 15 00:43:54.203127 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] May 15 00:43:54.203197 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] May 15 00:43:54.203267 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] May 15 00:43:54.203334 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] May 15 00:43:54.203402 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] May 15 00:43:54.203469 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] May 15 00:43:54.203536 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] May 15 00:43:54.203603 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 15 00:43:54.203670 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 15 00:43:54.203738 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 15 00:43:54.203810 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 15 00:43:54.203878 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 15 00:43:54.203945 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 15 00:43:54.204012 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 15 00:43:54.204079 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 15 00:43:54.204145 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] May 15 00:43:54.204210 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] May 15 00:43:54.204278 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] May 15 00:43:54.204339 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc May 15 00:43:54.204397 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] May 15 00:43:54.204455 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] May 15 00:43:54.204534 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] May 15 00:43:54.204596 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] May 15 00:43:54.204668 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] May 15 00:43:54.204729 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] May 15 00:43:54.204800 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] May 15 00:43:54.204862 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] May 15 00:43:54.204873 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) May 15 00:43:54.204945 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 00:43:54.205015 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] May 15 
00:43:54.205081 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] May 15 00:43:54.205144 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 15 00:43:54.205208 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 May 15 00:43:54.205270 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] May 15 00:43:54.205281 kernel: PCI host bridge to bus 000c:00 May 15 00:43:54.205347 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] May 15 00:43:54.205407 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] May 15 00:43:54.205466 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] May 15 00:43:54.205539 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 May 15 00:43:54.205612 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 May 15 00:43:54.205679 kernel: pci 000c:00:01.0: enabling Extended Tags May 15 00:43:54.205745 kernel: pci 000c:00:01.0: supports D1 D2 May 15 00:43:54.205814 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot May 15 00:43:54.205889 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 May 15 00:43:54.205957 kernel: pci 000c:00:02.0: supports D1 D2 May 15 00:43:54.206022 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot May 15 00:43:54.206095 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 May 15 00:43:54.206161 kernel: pci 000c:00:03.0: supports D1 D2 May 15 00:43:54.206226 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot May 15 00:43:54.206298 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 May 15 00:43:54.206367 kernel: pci 000c:00:04.0: supports D1 D2 May 15 00:43:54.206433 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot May 15 00:43:54.206443 kernel: acpiphp: Slot [1-4] registered May 15 00:43:54.206452 kernel: acpiphp: Slot [2-4] registered May 15 00:43:54.206460 kernel: acpiphp: Slot [3-2] registered May 15 00:43:54.206469 kernel: acpiphp: Slot [4-2] registered May 15 00:43:54.206524 kernel: pci_bus 000c:00: on NUMA node 0 May 15 00:43:54.206594 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 15 00:43:54.206664 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 15 00:43:54.206730 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 15 00:43:54.206801 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 15 00:43:54.206867 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 15 00:43:54.206936 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 15 00:43:54.207003 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 15 00:43:54.207069 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 15 00:43:54.207136 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 15 00:43:54.207203 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 15 00:43:54.207269 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 
64bit pref] to [bus 04] add_size 200000 add_align 100000 May 15 00:43:54.207334 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 15 00:43:54.207400 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] May 15 00:43:54.207465 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] May 15 00:43:54.207530 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] May 15 00:43:54.207597 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] May 15 00:43:54.207663 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] May 15 00:43:54.207728 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] May 15 00:43:54.207796 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] May 15 00:43:54.207863 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] May 15 00:43:54.207928 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.207994 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.208059 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.208127 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.208192 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.208258 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.208324 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.208388 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.208454 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.208518 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.208584 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.208648 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.208715 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.208782 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.208849 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.208916 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.208980 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] May 15 00:43:54.209047 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] May 15 00:43:54.209112 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] May 15 00:43:54.209181 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] May 15 00:43:54.209246 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] May 15 00:43:54.209312 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] May 15 00:43:54.209377 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] May 15 00:43:54.209442 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] May 15 00:43:54.209508 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] May 15 00:43:54.209573 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] May 15 00:43:54.209641 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] May 15 00:43:54.209706 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] May 15 
00:43:54.209767 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] May 15 00:43:54.209829 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] May 15 00:43:54.209900 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] May 15 00:43:54.209962 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] May 15 00:43:54.210040 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] May 15 00:43:54.210101 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] May 15 00:43:54.210169 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] May 15 00:43:54.210230 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] May 15 00:43:54.210298 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] May 15 00:43:54.210360 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] May 15 00:43:54.210372 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) May 15 00:43:54.210444 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 00:43:54.210507 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] May 15 00:43:54.210570 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] May 15 00:43:54.210633 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 15 00:43:54.210696 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 May 15 00:43:54.210758 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] May 15 00:43:54.210771 kernel: PCI host bridge to bus 0002:00 May 15 00:43:54.210844 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] May 15 00:43:54.210904 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] May 15 00:43:54.210962 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] May 15 00:43:54.211034 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 May 15 00:43:54.211107 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 May 15 00:43:54.211174 kernel: pci 0002:00:01.0: supports D1 D2 May 15 00:43:54.211241 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot May 15 00:43:54.211314 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 May 15 00:43:54.211380 kernel: pci 0002:00:03.0: supports D1 D2 May 15 00:43:54.211446 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot May 15 00:43:54.211518 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 May 15 00:43:54.211584 kernel: pci 0002:00:05.0: supports D1 D2 May 15 00:43:54.211650 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot May 15 00:43:54.211725 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 May 15 00:43:54.211795 kernel: pci 0002:00:07.0: supports D1 D2 May 15 00:43:54.211860 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot May 15 00:43:54.211871 kernel: acpiphp: Slot [1-5] registered May 15 00:43:54.211879 kernel: acpiphp: Slot [2-5] registered May 15 00:43:54.211887 kernel: acpiphp: Slot [3-3] registered May 15 00:43:54.211896 kernel: acpiphp: Slot [4-3] registered May 15 00:43:54.211953 kernel: pci_bus 0002:00: on NUMA node 0 May 15 00:43:54.212020 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 15 00:43:54.212087 kernel: pci 0002:00:01.0: bridge 
window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 15 00:43:54.212153 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 15 00:43:54.212221 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 15 00:43:54.212289 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 15 00:43:54.212354 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 15 00:43:54.212420 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 15 00:43:54.212485 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 15 00:43:54.212550 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 15 00:43:54.212617 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 15 00:43:54.212682 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 15 00:43:54.212749 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 15 00:43:54.212818 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] May 15 00:43:54.212885 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] May 15 00:43:54.212950 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] May 15 00:43:54.213016 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] May 15 00:43:54.213081 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] May 15 00:43:54.213147 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] May 15 00:43:54.213215 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] May 15 00:43:54.213280 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] May 15 00:43:54.213345 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.213410 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.213476 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.213540 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.213607 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.213672 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.213739 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.213810 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.213876 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.213942 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.214008 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.214076 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.214141 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.214207 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.214274 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 
0x1000] May 15 00:43:54.214340 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.214410 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] May 15 00:43:54.214475 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] May 15 00:43:54.214545 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] May 15 00:43:54.214610 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] May 15 00:43:54.214676 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] May 15 00:43:54.214742 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] May 15 00:43:54.214813 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] May 15 00:43:54.214879 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] May 15 00:43:54.214944 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] May 15 00:43:54.215010 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] May 15 00:43:54.215076 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] May 15 00:43:54.215142 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] May 15 00:43:54.215205 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] May 15 00:43:54.215264 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] May 15 00:43:54.215335 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] May 15 00:43:54.215396 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] May 15 00:43:54.215478 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] May 15 00:43:54.215540 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] May 15 00:43:54.215616 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] May 15 00:43:54.215680 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] May 15 00:43:54.215750 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] May 15 00:43:54.215890 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] May 15 00:43:54.215903 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) May 15 00:43:54.215976 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 00:43:54.216040 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] May 15 00:43:54.216107 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] May 15 00:43:54.216170 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 15 00:43:54.216233 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 May 15 00:43:54.216294 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] May 15 00:43:54.216305 kernel: PCI host bridge to bus 0001:00 May 15 00:43:54.216371 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] May 15 00:43:54.216431 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] May 15 00:43:54.216490 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] May 15 00:43:54.216563 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 May 15 00:43:54.216638 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 May 15 00:43:54.216704 kernel: pci 0001:00:01.0: enabling Extended Tags May 15 00:43:54.216769 kernel: pci 
0001:00:01.0: supports D1 D2 May 15 00:43:54.216847 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot May 15 00:43:54.216928 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 May 15 00:43:54.216995 kernel: pci 0001:00:02.0: supports D1 D2 May 15 00:43:54.217060 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot May 15 00:43:54.217133 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 May 15 00:43:54.217198 kernel: pci 0001:00:03.0: supports D1 D2 May 15 00:43:54.217264 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot May 15 00:43:54.217336 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 May 15 00:43:54.217403 kernel: pci 0001:00:04.0: supports D1 D2 May 15 00:43:54.217472 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot May 15 00:43:54.217483 kernel: acpiphp: Slot [1-6] registered May 15 00:43:54.217557 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 May 15 00:43:54.217627 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] May 15 00:43:54.217694 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] May 15 00:43:54.217762 kernel: pci 0001:01:00.0: PME# supported from D3cold May 15 00:43:54.217841 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 15 00:43:54.217917 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 May 15 00:43:54.217987 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] May 15 00:43:54.218054 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] May 15 00:43:54.218122 kernel: pci 0001:01:00.1: PME# supported from D3cold May 15 00:43:54.218133 kernel: acpiphp: Slot [2-6] registered May 15 00:43:54.218141 kernel: acpiphp: Slot [3-4] registered May 15 00:43:54.218152 kernel: acpiphp: Slot [4-4] registered May 15 00:43:54.218211 kernel: pci_bus 0001:00: on NUMA node 0 May 15 00:43:54.218277 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 15 00:43:54.218344 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 15 00:43:54.218409 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 15 00:43:54.218487 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 15 00:43:54.218554 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 15 00:43:54.218620 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 15 00:43:54.218690 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 15 00:43:54.218757 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 15 00:43:54.218826 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 15 00:43:54.218891 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 15 00:43:54.218957 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] May 15 00:43:54.219022 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] May 15 00:43:54.219088 kernel: pci 0001:00:02.0: BAR 
14: assigned [mem 0x60200000-0x603fffff] May 15 00:43:54.219155 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] May 15 00:43:54.219222 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] May 15 00:43:54.219287 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] May 15 00:43:54.219352 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] May 15 00:43:54.219418 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] May 15 00:43:54.219482 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.219548 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.219615 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.219680 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.219746 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.219815 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.219882 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.219947 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.220013 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.220077 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.220146 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.220212 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.220280 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.220347 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.220414 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.220481 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.220549 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] May 15 00:43:54.220620 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] May 15 00:43:54.220688 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] May 15 00:43:54.220760 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] May 15 00:43:54.220886 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] May 15 00:43:54.220954 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] May 15 00:43:54.221019 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] May 15 00:43:54.221085 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] May 15 00:43:54.221150 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] May 15 00:43:54.221215 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] May 15 00:43:54.221283 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] May 15 00:43:54.221348 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] May 15 00:43:54.221413 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] May 15 00:43:54.221478 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] May 15 00:43:54.221543 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] May 15 00:43:54.221609 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] May 15 00:43:54.221672 kernel: pci_bus 0001:00: 
resource 4 [mem 0x60000000-0x6fffffff window] May 15 00:43:54.221730 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] May 15 00:43:54.221813 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] May 15 00:43:54.221875 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] May 15 00:43:54.221943 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] May 15 00:43:54.222004 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] May 15 00:43:54.222074 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] May 15 00:43:54.222136 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] May 15 00:43:54.222203 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] May 15 00:43:54.222265 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] May 15 00:43:54.222275 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) May 15 00:43:54.222346 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 00:43:54.222413 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] May 15 00:43:54.222475 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] May 15 00:43:54.222538 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 15 00:43:54.222601 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 May 15 00:43:54.222663 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] May 15 00:43:54.222674 kernel: PCI host bridge to bus 0004:00 May 15 00:43:54.222739 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] May 15 00:43:54.222805 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] May 15 00:43:54.222864 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] May 15 00:43:54.222936 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 May 15 00:43:54.223012 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 May 15 00:43:54.223079 kernel: pci 0004:00:01.0: supports D1 D2 May 15 00:43:54.223144 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot May 15 00:43:54.223217 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 May 15 00:43:54.223287 kernel: pci 0004:00:03.0: supports D1 D2 May 15 00:43:54.223353 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot May 15 00:43:54.223427 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 May 15 00:43:54.223493 kernel: pci 0004:00:05.0: supports D1 D2 May 15 00:43:54.223558 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot May 15 00:43:54.223633 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 May 15 00:43:54.223702 kernel: pci 0004:01:00.0: enabling Extended Tags May 15 00:43:54.223771 kernel: pci 0004:01:00.0: supports D1 D2 May 15 00:43:54.223843 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 15 00:43:54.223923 kernel: pci_bus 0004:02: extended config space not accessible May 15 00:43:54.224005 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 May 15 00:43:54.224078 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] May 15 00:43:54.224149 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] May 15 00:43:54.224223 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] May 15 
00:43:54.224296 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb May 15 00:43:54.224366 kernel: pci 0004:02:00.0: supports D1 D2 May 15 00:43:54.224437 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 15 00:43:54.224514 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 May 15 00:43:54.224585 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] May 15 00:43:54.224655 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold May 15 00:43:54.224714 kernel: pci_bus 0004:00: on NUMA node 0 May 15 00:43:54.224788 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 May 15 00:43:54.224856 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 15 00:43:54.224923 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 15 00:43:54.224989 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 15 00:43:54.225057 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 15 00:43:54.225123 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 15 00:43:54.225191 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 15 00:43:54.225261 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 15 00:43:54.225327 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] May 15 00:43:54.225394 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] May 15 00:43:54.225460 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] May 15 00:43:54.225527 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] May 15 00:43:54.225593 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] May 15 00:43:54.225661 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.225729 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.225802 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.225870 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.225937 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.226003 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.226069 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.226136 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.226202 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.226268 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.226338 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.226404 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.226475 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 15 00:43:54.226544 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] May 15 00:43:54.226613 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] May 15 00:43:54.226684 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] May 15 
00:43:54.226755 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] May 15 00:43:54.226832 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] May 15 00:43:54.226906 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] May 15 00:43:54.226976 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] May 15 00:43:54.227045 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] May 15 00:43:54.227113 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] May 15 00:43:54.227181 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] May 15 00:43:54.227246 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] May 15 00:43:54.227317 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] May 15 00:43:54.227383 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] May 15 00:43:54.227453 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] May 15 00:43:54.227521 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] May 15 00:43:54.227588 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] May 15 00:43:54.227655 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] May 15 00:43:54.227722 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] May 15 00:43:54.227787 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc May 15 00:43:54.227849 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] May 15 00:43:54.227909 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] May 15 00:43:54.227979 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] May 15 00:43:54.228043 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] May 15 00:43:54.228109 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] May 15 00:43:54.228179 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] May 15 00:43:54.228242 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] May 15 00:43:54.228313 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] May 15 00:43:54.228378 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] May 15 00:43:54.228389 kernel: iommu: Default domain type: Translated May 15 00:43:54.228397 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 15 00:43:54.228406 kernel: efivars: Registered efivars operations May 15 00:43:54.228474 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device May 15 00:43:54.228547 kernel: pci 0004:02:00.0: vgaarb: bridge control possible May 15 00:43:54.228621 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none May 15 00:43:54.228633 kernel: vgaarb: loaded May 15 00:43:54.228642 kernel: clocksource: Switched to clocksource arch_sys_counter May 15 00:43:54.228650 kernel: VFS: Disk quotas dquot_6.6.0 May 15 00:43:54.228659 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 00:43:54.228667 kernel: pnp: PnP ACPI init May 15 00:43:54.228739 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved May 15 00:43:54.228807 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved May 15 00:43:54.228871 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved May 15 00:43:54.228932 kernel: system 00:00: [mem 
0x27fff0000000-0x27ffffffffff window] could not be reserved May 15 00:43:54.228992 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved May 15 00:43:54.229052 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved May 15 00:43:54.229115 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved May 15 00:43:54.229175 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved May 15 00:43:54.229188 kernel: pnp: PnP ACPI: found 1 devices May 15 00:43:54.229196 kernel: NET: Registered PF_INET protocol family May 15 00:43:54.229205 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 00:43:54.229214 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 15 00:43:54.229222 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 00:43:54.229231 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 00:43:54.229239 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 15 00:43:54.229247 kernel: TCP: Hash tables configured (established 524288 bind 65536) May 15 00:43:54.229256 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 15 00:43:54.229266 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 15 00:43:54.229274 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 00:43:54.229346 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes May 15 00:43:54.229357 kernel: kvm [1]: IPA Size Limit: 48 bits May 15 00:43:54.229366 kernel: kvm [1]: GICv3: no GICV resource entry May 15 00:43:54.229374 kernel: kvm [1]: disabling GICv2 emulation May 15 00:43:54.229383 kernel: kvm [1]: GIC system register CPU interface enabled May 15 00:43:54.229391 kernel: kvm [1]: vgic interrupt IRQ9 May 15 00:43:54.229399 kernel: kvm [1]: VHE mode initialized successfully May 15 00:43:54.229409 kernel: Initialise system trusted keyrings May 15 00:43:54.229417 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 May 15 00:43:54.229426 kernel: Key type asymmetric registered May 15 00:43:54.229434 kernel: Asymmetric key parser 'x509' registered May 15 00:43:54.229442 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 15 00:43:54.229451 kernel: io scheduler mq-deadline registered May 15 00:43:54.229459 kernel: io scheduler kyber registered May 15 00:43:54.229467 kernel: io scheduler bfq registered May 15 00:43:54.229475 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 15 00:43:54.229485 kernel: ACPI: button: Power Button [PWRB] May 15 00:43:54.229494 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). 
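An aside on the networking hash tables reported above: the "order" value is just the number of contiguous 4 KiB pages the table occupies, expressed as a power of two, and the byte counts divide evenly by the entry counts. A quick sanity check of the logged figures (assuming 4 KiB pages; the per-entry sizes are derived from the log, not from kernel headers):

```python
import math

PAGE_SIZE = 4096  # 4 KiB pages, as used on this boot

# (name, entries, bytes) copied from the hash-table lines above
tables = [
    ("TCP established", 524288, 4194304),
    ("TCP bind",         65536, 2097152),
    ("UDP",              65536, 2097152),
]

for name, entries, nbytes in tables:
    order = int(math.log2(nbytes // PAGE_SIZE))  # the kernel's "order: N"
    per_entry = nbytes // entries                # implied bytes per bucket
    print(f"{name}: order {order}, {per_entry} bytes per entry")
```

This reproduces the "order: 10" and "order: 9" values printed by the kernel.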
May 15 00:43:54.229502 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 00:43:54.229577 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 May 15 00:43:54.229642 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) May 15 00:43:54.229704 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 15 00:43:54.229768 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq May 15 00:43:54.229835 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq May 15 00:43:54.229900 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq May 15 00:43:54.229972 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 May 15 00:43:54.230034 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) May 15 00:43:54.230097 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 15 00:43:54.230159 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq May 15 00:43:54.230222 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq May 15 00:43:54.230288 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq May 15 00:43:54.230358 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 May 15 00:43:54.230423 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) May 15 00:43:54.230485 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 15 00:43:54.230548 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq May 15 00:43:54.230610 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq May 15 00:43:54.230673 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq May 15 00:43:54.230744 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 May 15 00:43:54.231015 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) May 15 00:43:54.231085 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 15 00:43:54.231147 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq May 15 00:43:54.231207 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq May 15 00:43:54.231268 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq May 15 00:43:54.231346 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 May 15 00:43:54.231414 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) May 15 00:43:54.231474 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 15 00:43:54.231535 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq May 15 00:43:54.231596 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq May 15 00:43:54.231656 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq May 15 00:43:54.231726 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 May 15 00:43:54.231795 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) May 15 00:43:54.231865 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 15 00:43:54.231926 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq May 15 00:43:54.231986 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 131072 entries for evtq May 15 
00:43:54.232046 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq May 15 00:43:54.232116 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 May 15 00:43:54.232180 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) May 15 00:43:54.232242 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 15 00:43:54.232304 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq May 15 00:43:54.232367 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq May 15 00:43:54.232429 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq May 15 00:43:54.232496 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 May 15 00:43:54.232560 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) May 15 00:43:54.232620 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 15 00:43:54.232681 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq May 15 00:43:54.232741 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq May 15 00:43:54.232806 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq May 15 00:43:54.232818 kernel: thunder_xcv, ver 1.0 May 15 00:43:54.232826 kernel: thunder_bgx, ver 1.0 May 15 00:43:54.232834 kernel: nicpf, ver 1.0 May 15 00:43:54.232845 kernel: nicvf, ver 1.0 May 15 00:43:54.232913 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 15 00:43:54.232974 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T00:43:52 UTC (1747269832) May 15 00:43:54.232985 kernel: efifb: probing for efifb May 15 00:43:54.232994 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k May 15 00:43:54.233002 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 15 00:43:54.233011 kernel: efifb: scrolling: redraw May 15 00:43:54.233019 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 15 00:43:54.233029 kernel: Console: switching to colour frame buffer device 100x37 May 15 00:43:54.233038 kernel: fb0: EFI VGA frame buffer device May 15 00:43:54.233046 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 May 15 00:43:54.233054 kernel: hid: raw HID events driver (C) Jiri Kosina May 15 00:43:54.233063 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 15 00:43:54.233071 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 15 00:43:54.233080 kernel: watchdog: Hard watchdog permanently disabled May 15 00:43:54.233088 kernel: NET: Registered PF_INET6 protocol family May 15 00:43:54.233096 kernel: Segment Routing with IPv6 May 15 00:43:54.233106 kernel: In-situ OAM (IOAM) with IPv6 May 15 00:43:54.233114 kernel: NET: Registered PF_PACKET protocol family May 15 00:43:54.233122 kernel: Key type dns_resolver registered May 15 00:43:54.233130 kernel: registered taskstats version 1 May 15 00:43:54.233140 kernel: Loading compiled-in X.509 certificates May 15 00:43:54.233149 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: cdb7ce3984a1665183e8a6ab3419833bc5e4e7f4' May 15 00:43:54.233157 kernel: Key type .fscrypt registered May 15 00:43:54.233166 kernel: Key type fscrypt-provisioning registered May 15 00:43:54.233174 kernel: ima: No TPM chip found, activating TPM-bypass! 
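The efifb figures above are internally consistent: an 800x600 mode at 32 bits per pixel gives a 3200-byte scanline and a 1875 KiB framebuffer, matching "linelength=3200" and "total 1875k". The "using 1876k" value looks like the same size rounded up to a whole 4 KiB page, though that rounding is an inference from the numbers rather than a statement about the driver:

```python
# Mode reported by efifb above: 800x600, 32 bits per pixel
width, height, bpp = 800, 600, 32

linelength = width * bpp // 8        # bytes per scanline -> 3200
size = linelength * height           # 1_920_000 bytes

print(linelength, size // 1024)      # 3200 1875  -> matches "total 1875k"

page = 4096
print(-(-size // page) * page // 1024)  # 1876 -> matches "using 1876k" if page-aligned
```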
May 15 00:43:54.233184 kernel: ima: Allocated hash algorithm: sha1 May 15 00:43:54.233192 kernel: ima: No architecture policies found May 15 00:43:54.233200 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 15 00:43:54.233271 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 May 15 00:43:54.233337 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 May 15 00:43:54.233405 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 May 15 00:43:54.233471 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 May 15 00:43:54.233538 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 May 15 00:43:54.233603 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 May 15 00:43:54.233673 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 May 15 00:43:54.233740 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 May 15 00:43:54.233810 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 May 15 00:43:54.233877 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 May 15 00:43:54.233944 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 May 15 00:43:54.234011 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 May 15 00:43:54.234078 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 May 15 00:43:54.234144 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 May 15 00:43:54.234214 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 May 15 00:43:54.234280 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 May 15 00:43:54.234347 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 May 15 00:43:54.234413 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 May 15 00:43:54.234481 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 May 15 00:43:54.234546 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 May 15 00:43:54.234613 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 May 15 00:43:54.234679 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 May 15 00:43:54.234748 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 May 15 00:43:54.234817 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 May 15 00:43:54.234884 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 May 15 00:43:54.234951 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 May 15 00:43:54.235017 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 May 15 00:43:54.235083 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 May 15 00:43:54.235149 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 May 15 00:43:54.235215 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 May 15 00:43:54.235283 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 May 15 00:43:54.235352 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 May 15 00:43:54.235419 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 May 15 00:43:54.235485 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 May 15 00:43:54.235552 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 May 15 00:43:54.235618 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 May 15 00:43:54.235685 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 May 15 00:43:54.235750 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 May 15 00:43:54.235822 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 May 15 00:43:54.235891 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 May 15 00:43:54.235957 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 May 15 00:43:54.236024 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 May 15 00:43:54.236090 
kernel: pcieport 0002:00:05.0: Adding to iommu group 21 May 15 00:43:54.236156 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 May 15 00:43:54.236222 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 May 15 00:43:54.236288 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 May 15 00:43:54.236355 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 May 15 00:43:54.236423 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 May 15 00:43:54.236490 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 May 15 00:43:54.236555 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 May 15 00:43:54.236623 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 May 15 00:43:54.236688 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 May 15 00:43:54.236754 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 May 15 00:43:54.236825 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 May 15 00:43:54.236894 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 May 15 00:43:54.236962 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 May 15 00:43:54.237029 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 May 15 00:43:54.237096 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 May 15 00:43:54.237162 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 May 15 00:43:54.237228 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 May 15 00:43:54.237296 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 May 15 00:43:54.237308 kernel: clk: Disabling unused clocks May 15 00:43:54.237316 kernel: Freeing unused kernel memory: 38336K May 15 00:43:54.237327 kernel: Run /init as init process May 15 00:43:54.237335 kernel: with arguments: May 15 00:43:54.237343 kernel: /init May 15 00:43:54.237351 kernel: with environment: May 15 00:43:54.237360 kernel: HOME=/ May 15 00:43:54.237367 kernel: TERM=linux May 15 00:43:54.237376 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 00:43:54.237385 systemd[1]: Successfully made /usr/ read-only. May 15 00:43:54.237396 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 00:43:54.237407 systemd[1]: Detected architecture arm64. May 15 00:43:54.237416 systemd[1]: Running in initrd. May 15 00:43:54.237424 systemd[1]: No hostname configured, using default hostname. May 15 00:43:54.237433 systemd[1]: Hostname set to . May 15 00:43:54.237442 systemd[1]: Initializing machine ID from random generator. May 15 00:43:54.237450 systemd[1]: Queued start job for default target initrd.target. May 15 00:43:54.237459 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:43:54.237469 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:43:54.237479 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 00:43:54.237488 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 00:43:54.237497 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 00:43:54.237506 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
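The long run of "Adding to iommu group N" messages above (more follow later once device drivers bind) can be reproduced from the running system: each group directory under /sys/kernel/iommu_groups lists the devices that share one SMMU translation context. A read-only sketch, assuming the standard sysfs layout:

```python
from pathlib import Path

groups_root = Path("/sys/kernel/iommu_groups")

# Print each IOMMU group and the devices attached to it, mirroring the
# "pcieport ...: Adding to iommu group N" messages in the log.
for group in sorted(groups_root.iterdir(), key=lambda p: int(p.name)):
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"group {group.name}: {', '.join(devices)}")
```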
May 15 00:43:54.237516 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 00:43:54.237525 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 00:43:54.237536 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:43:54.237544 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 00:43:54.237553 systemd[1]: Reached target paths.target - Path Units. May 15 00:43:54.237562 systemd[1]: Reached target slices.target - Slice Units. May 15 00:43:54.237571 systemd[1]: Reached target swap.target - Swaps. May 15 00:43:54.237579 systemd[1]: Reached target timers.target - Timer Units. May 15 00:43:54.237588 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 00:43:54.237597 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 00:43:54.237607 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 00:43:54.237616 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 15 00:43:54.237625 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 00:43:54.237634 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 00:43:54.237642 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:43:54.237651 systemd[1]: Reached target sockets.target - Socket Units. May 15 00:43:54.237660 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 00:43:54.237669 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 00:43:54.237677 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 00:43:54.237688 systemd[1]: Starting systemd-fsck-usr.service... May 15 00:43:54.237696 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 00:43:54.237705 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 00:43:54.237738 systemd-journald[900]: Collecting audit messages is disabled. May 15 00:43:54.237761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:43:54.237769 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 00:43:54.237781 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 00:43:54.237790 kernel: Bridge firewalling registered May 15 00:43:54.237800 systemd-journald[900]: Journal started May 15 00:43:54.237820 systemd-journald[900]: Runtime Journal (/run/log/journal/3c7e2142887c4778933e275a1b80aa79) is 8M, max 4G, 3.9G free. May 15 00:43:54.194119 systemd-modules-load[904]: Inserted module 'overlay' May 15 00:43:54.271442 systemd[1]: Started systemd-journald.service - Journal Service. May 15 00:43:54.218620 systemd-modules-load[904]: Inserted module 'br_netfilter' May 15 00:43:54.277084 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:43:54.287922 systemd[1]: Finished systemd-fsck-usr.service. May 15 00:43:54.298837 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 00:43:54.309660 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 15 00:43:54.337951 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:43:54.344130 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:43:54.361794 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 00:43:54.373475 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 00:43:54.390283 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:43:54.406086 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:43:54.422804 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 00:43:54.434199 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:43:54.465961 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 00:43:54.473533 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 00:43:54.485955 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 00:43:54.511914 dracut-cmdline[945]: dracut-dracut-053 May 15 00:43:54.511914 dracut-cmdline[945]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9 May 15 00:43:54.499939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:43:54.513955 systemd-resolved[951]: Positive Trust Anchors: May 15 00:43:54.513964 systemd-resolved[951]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:43:54.513995 systemd-resolved[951]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 00:43:54.528942 systemd-resolved[951]: Defaulting to hostname 'linux'. May 15 00:43:54.530359 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 00:43:54.662406 kernel: SCSI subsystem initialized May 15 00:43:54.564121 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 00:43:54.678353 kernel: Loading iSCSI transport class v2.0-870. May 15 00:43:54.691785 kernel: iscsi: registered transport (tcp) May 15 00:43:54.719128 kernel: iscsi: registered transport (qla4xxx) May 15 00:43:54.719154 kernel: QLogic iSCSI HBA Driver May 15 00:43:54.762247 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 00:43:54.788944 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 00:43:54.834130 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. May 15 00:43:54.834149 kernel: device-mapper: uevent: version 1.0.3 May 15 00:43:54.852787 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 15 00:43:54.909791 kernel: raid6: neonx8 gen() 15854 MB/s May 15 00:43:54.935789 kernel: raid6: neonx4 gen() 15880 MB/s May 15 00:43:54.960789 kernel: raid6: neonx2 gen() 13253 MB/s May 15 00:43:54.985788 kernel: raid6: neonx1 gen() 10583 MB/s May 15 00:43:55.010788 kernel: raid6: int64x8 gen() 6814 MB/s May 15 00:43:55.035789 kernel: raid6: int64x4 gen() 7376 MB/s May 15 00:43:55.060789 kernel: raid6: int64x2 gen() 6133 MB/s May 15 00:43:55.088992 kernel: raid6: int64x1 gen() 5077 MB/s May 15 00:43:55.089013 kernel: raid6: using algorithm neonx4 gen() 15880 MB/s May 15 00:43:55.123454 kernel: raid6: .... xor() 12396 MB/s, rmw enabled May 15 00:43:55.123477 kernel: raid6: using neon recovery algorithm May 15 00:43:55.146656 kernel: xor: measuring software checksum speed May 15 00:43:55.146679 kernel: 8regs : 21664 MB/sec May 15 00:43:55.154707 kernel: 32regs : 21704 MB/sec May 15 00:43:55.162587 kernel: arm64_neon : 28080 MB/sec May 15 00:43:55.170339 kernel: xor: using function: arm64_neon (28080 MB/sec) May 15 00:43:55.230788 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 00:43:55.241806 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 00:43:55.257906 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:43:55.272000 systemd-udevd[1140]: Using default interface naming scheme 'v255'. May 15 00:43:55.275532 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:43:55.298933 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 00:43:55.312851 dracut-pre-trigger[1150]: rd.md=0: removing MD RAID activation May 15 00:43:55.338502 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 00:43:55.357948 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 00:43:55.462604 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:43:55.493542 kernel: pps_core: LinuxPPS API ver. 1 registered May 15 00:43:55.493566 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 15 00:43:55.516470 kernel: ACPI: bus type USB registered May 15 00:43:55.516501 kernel: usbcore: registered new interface driver usbfs May 15 00:43:55.526420 kernel: usbcore: registered new interface driver hub May 15 00:43:55.526444 kernel: PTP clock support registered May 15 00:43:55.530993 kernel: usbcore: registered new device driver usb May 15 00:43:55.542921 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 00:43:55.555510 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 00:43:55.718184 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 15 00:43:55.718199 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
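The raid6 and xor lines above show the kernel benchmarking every available implementation and keeping the fastest: neonx4 for RAID-6 parity generation and arm64_neon for checksumming. The selection itself is just an argmax over the measured throughputs; a toy reproduction using the figures from this log:

```python
# MB/s figures copied from the "raid6: ... gen()" and "xor:" benchmark lines above
raid6_gen = {
    "neonx8": 15854, "neonx4": 15880, "neonx2": 13253, "neonx1": 10583,
    "int64x8": 6814, "int64x4": 7376, "int64x2": 6133, "int64x1": 5077,
}
xor_funcs = {"8regs": 21664, "32regs": 21704, "arm64_neon": 28080}

best_gen = max(raid6_gen, key=raid6_gen.get)
best_xor = max(xor_funcs, key=xor_funcs.get)

print(f"raid6: using algorithm {best_gen} gen() {raid6_gen[best_gen]} MB/s")
print(f"xor: using function: {best_xor} ({xor_funcs[best_xor]} MB/sec)")
```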
May 15 00:43:55.718210 kernel: igb 0003:03:00.0: Adding to iommu group 31 May 15 00:43:55.718350 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 32 May 15 00:43:55.718445 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 15 00:43:55.718528 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 May 15 00:43:55.718610 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault May 15 00:43:55.718695 kernel: nvme 0005:03:00.0: Adding to iommu group 33 May 15 00:43:55.718791 kernel: igb 0003:03:00.0: added PHC on eth0 May 15 00:43:55.718878 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 34 May 15 00:43:55.718969 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 15 00:43:55.719052 kernel: nvme 0005:04:00.0: Adding to iommu group 35 May 15 00:43:55.719141 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:81:7e:98 May 15 00:43:55.718131 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 00:43:55.724076 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:43:55.741142 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 00:43:55.752611 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:43:55.804027 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 May 15 00:43:55.804152 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) May 15 00:43:55.804235 kernel: igb 0003:03:00.1: Adding to iommu group 36 May 15 00:43:55.752790 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:43:55.798676 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:43:55.826993 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 00:43:55.838623 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:43:55.838771 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:43:55.851117 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:43:55.879053 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:43:55.892607 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 00:43:55.906352 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:43:55.986560 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010 May 15 00:43:55.986727 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 15 00:43:55.986818 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 May 15 00:43:55.986900 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed May 15 00:43:55.986986 kernel: nvme nvme0: pci function 0005:03:00.0 May 15 00:43:55.987085 kernel: hub 1-0:1.0: USB hub found May 15 00:43:55.987182 kernel: hub 1-0:1.0: 4 ports detected May 15 00:43:55.906554 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:43:56.103557 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
May 15 00:43:56.103769 kernel: hub 2-0:1.0: USB hub found May 15 00:43:56.103870 kernel: nvme nvme0: Shutdown timeout set to 8 seconds May 15 00:43:56.103947 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 May 15 00:43:56.104041 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 15 00:43:56.104121 kernel: hub 2-0:1.0: 4 ports detected May 15 00:43:56.104201 kernel: nvme nvme1: pci function 0005:04:00.0 May 15 00:43:56.104287 kernel: nvme nvme1: Shutdown timeout set to 8 seconds May 15 00:43:56.079163 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:43:56.117874 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:43:56.132834 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:43:56.145986 kernel: nvme nvme0: 32/0/0 default/read/poll queues May 15 00:43:56.158947 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:43:56.369857 kernel: nvme nvme1: 32/0/0 default/read/poll queues May 15 00:43:56.370053 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 00:43:56.370066 kernel: GPT:9289727 != 1875385007 May 15 00:43:56.370079 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 00:43:56.370089 kernel: GPT:9289727 != 1875385007 May 15 00:43:56.370099 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 00:43:56.370109 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 15 00:43:56.370120 kernel: igb 0003:03:00.1: added PHC on eth1 May 15 00:43:56.370216 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection May 15 00:43:56.370298 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:81:7e:99 May 15 00:43:56.370379 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 May 15 00:43:56.370457 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) May 15 00:43:56.370537 kernel: igb 0003:03:00.1 eno2: renamed from eth1 May 15 00:43:56.370620 kernel: BTRFS: device fsid 369506fd-904a-45c2-a4ab-2d03e7866799 devid 1 transid 44 /dev/nvme0n1p3 scanned by (udev-worker) (1205) May 15 00:43:56.370631 kernel: igb 0003:03:00.0 eno1: renamed from eth0 May 15 00:43:56.370716 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by (udev-worker) (1225) May 15 00:43:56.370729 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd May 15 00:43:56.369982 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:43:56.397337 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged May 15 00:43:56.400046 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. May 15 00:43:56.428291 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. May 15 00:43:56.439843 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. May 15 00:43:56.449685 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. May 15 00:43:56.473245 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. May 15 00:43:56.492909 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
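The GPT warnings above are the usual signature of a small disk image written to a much larger drive: the primary header records a backup header at LBA 9289727 (where the image ended), while the 960 GB Samsung NVMe named elsewhere in the log actually ends at LBA 1875385007; the disk-uuid step just below rewrites both headers ("Secondary Header is updated"). Interpreting the two LBAs, assuming 512-byte sectors:

```python
# Values taken from the "GPT:9289727 != 1875385007" warnings above
recorded_alt_lba = 9_289_727       # backup header location per the primary header
last_lba         = 1_875_385_007   # last addressable sector of the installed disk

if recorded_alt_lba != last_lba:
    print(f"backup GPT header at LBA {recorded_alt_lba}, disk ends at LBA {last_lba}")
    # 512-byte sectors -> rough sizes involved
    print(f"image spans ~{(recorded_alt_lba + 1) * 512 / 2**30:.1f} GiB "
          f"of a ~{(last_lba + 1) * 512 / 2**30:.1f} GiB disk")
```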
May 15 00:43:56.551050 kernel: hub 1-3:1.0: USB hub found May 15 00:43:56.551215 kernel: hub 1-3:1.0: 4 ports detected May 15 00:43:56.551318 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 15 00:43:56.551329 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 15 00:43:56.551416 disk-uuid[1310]: Primary Header is updated. May 15 00:43:56.551416 disk-uuid[1310]: Secondary Entries is updated. May 15 00:43:56.551416 disk-uuid[1310]: Secondary Header is updated. May 15 00:43:56.615788 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd May 15 00:43:56.644791 kernel: hub 2-3:1.0: USB hub found May 15 00:43:56.654786 kernel: hub 2-3:1.0: 4 ports detected May 15 00:43:56.694791 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 15 00:43:56.707783 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 May 15 00:43:56.730983 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014 May 15 00:43:56.731069 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 15 00:43:57.077926 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged May 15 00:43:57.391791 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 15 00:43:57.406792 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 May 15 00:43:57.424786 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 May 15 00:43:57.540406 disk-uuid[1311]: The operation has completed successfully. May 15 00:43:57.545883 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 15 00:43:57.570232 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 00:43:57.570335 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 00:43:57.624920 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 00:43:57.635098 sh[1481]: Success May 15 00:43:57.653794 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 15 00:43:57.687053 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 00:43:57.707973 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 00:43:57.718485 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 00:43:57.752879 kernel: BTRFS info (device dm-0): first mount of filesystem 369506fd-904a-45c2-a4ab-2d03e7866799 May 15 00:43:57.752896 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 15 00:43:57.770477 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 00:43:57.784614 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 00:43:57.796105 kernel: BTRFS info (device dm-0): using free space tree May 15 00:43:57.815791 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 15 00:43:57.817137 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 00:43:57.827654 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 00:43:57.838928 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 00:43:57.850516 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
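Both Mellanox mlx5 functions (0001:01:00.0 and .1) report "31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link" because the x8-capable adapter sits behind the x4 link at bridge 0001:00:01.0. The two figures follow from 8 GT/s with 128b/130b encoding; the exact values in the log line up with the per-lane rate truncated to whole Mb/s before multiplying by lane width (an inference from the numbers, not a reading of the PCI core source):

```python
# PCIe gen3: 8.0 GT/s per lane with 128b/130b encoding
per_lane_mbs = 8000 * 128 // 130   # 7876 Mb/s of payload bandwidth per lane

for lanes in (4, 8):
    print(f"x{lanes}: {per_lane_mbs * lanes / 1000:.3f} Gb/s")  # x4: 31.504, x8: 63.008
```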
May 15 00:43:57.955010 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 15 00:43:57.955029 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 15 00:43:57.955040 kernel: BTRFS info (device nvme0n1p6): using free space tree May 15 00:43:57.955051 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 15 00:43:57.955061 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 15 00:43:57.955071 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 15 00:43:57.960958 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 00:43:57.972529 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:43:58.009898 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 00:43:58.017589 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 00:43:58.047246 systemd-networkd[1673]: lo: Link UP May 15 00:43:58.047252 systemd-networkd[1673]: lo: Gained carrier May 15 00:43:58.051444 systemd-networkd[1673]: Enumeration completed May 15 00:43:58.051602 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 00:43:58.052816 systemd-networkd[1673]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:43:58.058888 systemd[1]: Reached target network.target - Network. May 15 00:43:58.098567 ignition[1668]: Ignition 2.20.0 May 15 00:43:58.098574 ignition[1668]: Stage: fetch-offline May 15 00:43:58.104293 systemd-networkd[1673]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:43:58.098614 ignition[1668]: no configs at "/usr/lib/ignition/base.d" May 15 00:43:58.109701 unknown[1668]: fetched base config from "system" May 15 00:43:58.098623 ignition[1668]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 15 00:43:58.109708 unknown[1668]: fetched user config from "system" May 15 00:43:58.098791 ignition[1668]: parsed url from cmdline: "" May 15 00:43:58.112187 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 00:43:58.098794 ignition[1668]: no config URL provided May 15 00:43:58.126194 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 00:43:58.098798 ignition[1668]: reading system config file "/usr/lib/ignition/user.ign" May 15 00:43:58.140915 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 00:43:58.098849 ignition[1668]: parsing config with SHA512: a9718cd227c6de6c173c4ff1ec6757f756690c929e9e1b01ccfa903564d9d052c22d5757260910992d147ef6241734833afe42f6ffe2f6c40a185cb411058c7c May 15 00:43:58.151310 systemd-networkd[1673]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
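Ignition logs the SHA-512 of every config it parses; here the digest covers the built-in /usr/lib/ignition/user.ign named a few lines earlier. To match such a digest against a file on disk, it appears to be the plain SHA-512 of the raw file contents (an assumption, not confirmed from Ignition's source); a minimal check:

```python
import hashlib
import sys
from pathlib import Path

# Usage: python3 check_ign_digest.py /usr/lib/ignition/user.ign <sha512-from-log>
# The path and the expected digest both come from the Ignition log lines above.
config_path = Path(sys.argv[1])
expected = sys.argv[2].lower()

digest = hashlib.sha512(config_path.read_bytes()).hexdigest()
print("match" if digest == expected else f"mismatch: {digest}")
```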
May 15 00:43:58.110140 ignition[1668]: fetch-offline: fetch-offline passed May 15 00:43:58.110146 ignition[1668]: POST message to Packet Timeline May 15 00:43:58.110151 ignition[1668]: POST Status error: resource requires networking May 15 00:43:58.110218 ignition[1668]: Ignition finished successfully May 15 00:43:58.156002 ignition[1706]: Ignition 2.20.0 May 15 00:43:58.156007 ignition[1706]: Stage: kargs May 15 00:43:58.156172 ignition[1706]: no configs at "/usr/lib/ignition/base.d" May 15 00:43:58.156181 ignition[1706]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 15 00:43:58.157637 ignition[1706]: kargs: kargs passed May 15 00:43:58.157659 ignition[1706]: POST message to Packet Timeline May 15 00:43:58.157799 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #1 May 15 00:43:58.161166 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33153->[::1]:53: read: connection refused May 15 00:43:58.361510 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #2 May 15 00:43:58.361955 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56725->[::1]:53: read: connection refused May 15 00:43:58.739788 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up May 15 00:43:58.742746 systemd-networkd[1673]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:43:58.762796 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #3 May 15 00:43:58.763446 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40192->[::1]:53: read: connection refused May 15 00:43:59.402793 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 15 00:43:59.406265 systemd-networkd[1673]: eno1: Link UP May 15 00:43:59.406399 systemd-networkd[1673]: eno2: Link UP May 15 00:43:59.406521 systemd-networkd[1673]: enP1p1s0f0np0: Link UP May 15 00:43:59.406664 systemd-networkd[1673]: enP1p1s0f0np0: Gained carrier May 15 00:43:59.417002 systemd-networkd[1673]: enP1p1s0f1np1: Link UP May 15 00:43:59.455824 systemd-networkd[1673]: enP1p1s0f0np0: DHCPv4 address 147.28.145.22/30, gateway 147.28.145.21 acquired from 147.28.144.140 May 15 00:43:59.563706 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #4 May 15 00:43:59.564137 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54928->[::1]:53: read: connection refused May 15 00:43:59.745993 systemd-networkd[1673]: enP1p1s0f1np1: Gained carrier May 15 00:44:00.465885 systemd-networkd[1673]: enP1p1s0f0np0: Gained IPv6LL May 15 00:44:00.849881 systemd-networkd[1673]: enP1p1s0f1np1: Gained IPv6LL May 15 00:44:01.165023 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #5 May 15 00:44:01.165429 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45045->[::1]:53: read: connection refused May 15 00:44:04.368100 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #6 May 15 00:44:04.854929 ignition[1706]: GET result: OK May 15 00:44:05.618284 ignition[1706]: Ignition finished successfully May 15 00:44:05.621863 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
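The kargs stage above cannot reach https://metadata.packet.net/metadata until the bonded uplink obtains its DHCP lease, so Ignition keeps retrying with roughly doubling gaps (attempts spaced about 0.2 s, 0.4 s, 0.8 s, 1.6 s and 3.2 s apart) and only succeeds on attempt #6 once DNS resolves. A minimal Python sketch of that retry-with-exponential-backoff pattern follows; the function name and timing constants are illustrative assumptions, not Ignition's actual implementation.

import time
import urllib.error
import urllib.request

METADATA_URL = "https://metadata.packet.net/metadata"  # endpoint seen in the log above

def fetch_with_backoff(url: str = METADATA_URL, attempts: int = 6, first_delay: float = 0.2) -> bytes:
    """Retry a GET with exponential backoff, mirroring the cadence visible in the log."""
    delay = first_delay
    last_err = None
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()  # corresponds to "GET result: OK"
        except (urllib.error.URLError, OSError) as err:
            # corresponds to "GET error: ... lookup metadata.packet.net ... connection refused"
            last_err = err
            if attempt < attempts:
                time.sleep(delay)
                delay *= 2  # back off before the next attempt
    raise RuntimeError(f"metadata fetch failed after {attempts} attempts") from last_err

if __name__ == "__main__":
    print(len(fetch_with_backoff()))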
May 15 00:44:05.632949 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 00:44:05.647775 ignition[1725]: Ignition 2.20.0 May 15 00:44:05.647788 ignition[1725]: Stage: disks May 15 00:44:05.647941 ignition[1725]: no configs at "/usr/lib/ignition/base.d" May 15 00:44:05.647950 ignition[1725]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 15 00:44:05.648775 ignition[1725]: disks: disks passed May 15 00:44:05.648785 ignition[1725]: POST message to Packet Timeline May 15 00:44:05.648803 ignition[1725]: GET https://metadata.packet.net/metadata: attempt #1 May 15 00:44:06.723248 ignition[1725]: GET result: OK May 15 00:44:07.044945 ignition[1725]: Ignition finished successfully May 15 00:44:07.047042 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 00:44:07.053675 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 00:44:07.060947 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 00:44:07.068857 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 00:44:07.077284 systemd[1]: Reached target sysinit.target - System Initialization. May 15 00:44:07.086094 systemd[1]: Reached target basic.target - Basic System. May 15 00:44:07.104881 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 00:44:07.120392 systemd-fsck[1744]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 15 00:44:07.123451 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 00:44:07.140848 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 00:44:07.209784 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 737cda88-7069-47ce-b2bc-d891099a68fb r/w with ordered data mode. Quota mode: none. May 15 00:44:07.209832 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 00:44:07.220174 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 00:44:07.243854 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 00:44:07.252783 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/nvme0n1p6 scanned by mount (1753) May 15 00:44:07.252799 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 15 00:44:07.252810 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 15 00:44:07.252821 kernel: BTRFS info (device nvme0n1p6): using free space tree May 15 00:44:07.253785 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 15 00:44:07.253807 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 15 00:44:07.345861 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 00:44:07.352398 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 15 00:44:07.363217 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... May 15 00:44:07.379078 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 00:44:07.379116 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
May 15 00:44:07.411797 coreos-metadata[1775]: May 15 00:44:07.409 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 15 00:44:07.428617 coreos-metadata[1773]: May 15 00:44:07.409 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 15 00:44:07.392589 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 00:44:07.406339 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 00:44:07.427985 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 00:44:07.462140 initrd-setup-root[1798]: cut: /sysroot/etc/passwd: No such file or directory May 15 00:44:07.468160 initrd-setup-root[1806]: cut: /sysroot/etc/group: No such file or directory May 15 00:44:07.474481 initrd-setup-root[1813]: cut: /sysroot/etc/shadow: No such file or directory May 15 00:44:07.480869 initrd-setup-root[1821]: cut: /sysroot/etc/gshadow: No such file or directory May 15 00:44:07.550745 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 00:44:07.577840 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 00:44:07.608881 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 15 00:44:07.584392 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 00:44:07.615676 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 00:44:07.632435 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 00:44:07.637966 ignition[1895]: INFO : Ignition 2.20.0 May 15 00:44:07.637966 ignition[1895]: INFO : Stage: mount May 15 00:44:07.637966 ignition[1895]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:44:07.637966 ignition[1895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 15 00:44:07.637966 ignition[1895]: INFO : mount: mount passed May 15 00:44:07.637966 ignition[1895]: INFO : POST message to Packet Timeline May 15 00:44:07.637966 ignition[1895]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 15 00:44:08.040049 coreos-metadata[1773]: May 15 00:44:08.040 INFO Fetch successful May 15 00:44:08.085913 coreos-metadata[1775]: May 15 00:44:08.085 INFO Fetch successful May 15 00:44:08.090567 coreos-metadata[1773]: May 15 00:44:08.087 INFO wrote hostname ci-4230.1.1-n-3631181341 to /sysroot/etc/hostname May 15 00:44:08.091008 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 15 00:44:08.128407 ignition[1895]: INFO : GET result: OK May 15 00:44:08.135479 systemd[1]: flatcar-static-network.service: Deactivated successfully. May 15 00:44:08.135638 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. May 15 00:44:08.529426 ignition[1895]: INFO : Ignition finished successfully May 15 00:44:08.531900 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 00:44:08.549875 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 00:44:08.562020 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 15 00:44:08.597484 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/nvme0n1p6 scanned by mount (1922) May 15 00:44:08.597521 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 15 00:44:08.611839 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 15 00:44:08.624758 kernel: BTRFS info (device nvme0n1p6): using free space tree May 15 00:44:08.647483 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 15 00:44:08.647506 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 15 00:44:08.655562 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 00:44:08.688044 ignition[1940]: INFO : Ignition 2.20.0 May 15 00:44:08.688044 ignition[1940]: INFO : Stage: files May 15 00:44:08.697502 ignition[1940]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:44:08.697502 ignition[1940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 15 00:44:08.697502 ignition[1940]: DEBUG : files: compiled without relabeling support, skipping May 15 00:44:08.697502 ignition[1940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 00:44:08.697502 ignition[1940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 00:44:08.697502 ignition[1940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 00:44:08.697502 ignition[1940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 00:44:08.697502 ignition[1940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 00:44:08.697502 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 00:44:08.697502 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 15 00:44:08.693889 unknown[1940]: wrote ssh authorized keys file for user: core May 15 00:44:08.790026 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 00:44:08.871188 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 00:44:08.881769 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 00:44:08.881769 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 15 00:44:08.881769 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 00:44:08.881769 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 00:44:08.881769 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:44:08.881769 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:44:08.881769 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:44:08.881769 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(7): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:44:08.881769 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:44:08.881769 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:44:08.881769 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 15 00:44:08.881769 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 15 00:44:08.881769 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 15 00:44:08.881769 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 15 00:44:09.064123 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 00:44:09.363235 ignition[1940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 15 00:44:09.363235 ignition[1940]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 00:44:09.387955 ignition[1940]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:44:09.387955 ignition[1940]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:44:09.387955 ignition[1940]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 00:44:09.387955 ignition[1940]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 15 00:44:09.387955 ignition[1940]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 15 00:44:09.387955 ignition[1940]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 00:44:09.387955 ignition[1940]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 00:44:09.387955 ignition[1940]: INFO : files: files passed May 15 00:44:09.387955 ignition[1940]: INFO : POST message to Packet Timeline May 15 00:44:09.387955 ignition[1940]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 15 00:44:09.863684 ignition[1940]: INFO : GET result: OK May 15 00:44:10.212047 ignition[1940]: INFO : Ignition finished successfully May 15 00:44:10.215140 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 00:44:10.237971 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 00:44:10.250546 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 00:44:10.269178 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 00:44:10.269342 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
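The files stage above boils down to a handful of concrete filesystem operations under /sysroot: downloading payloads such as the kubernetes sysext image, and creating the symlink /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw that systemd-sysext later picks up. Purely as an illustration of what the "writing file" and "writing link" entries amount to, here is a hedged Python sketch performing the same two steps against a stand-in staging root; the SYSROOT path and helper names are assumptions for the example, not part of Ignition.

import os
import urllib.request

SYSROOT = "/tmp/sysroot-demo"  # stand-in for /sysroot; illustrative only
IMAGE_URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
             "latest/kubernetes-v1.30.1-arm64.raw")
IMAGE_DEST = os.path.join(SYSROOT, "opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw")
LINK_PATH = os.path.join(SYSROOT, "etc/extensions/kubernetes.raw")
LINK_TARGET = "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"  # as written in the log

def write_file(url: str, dest: str) -> None:
    # "[started] writing file ...": fetch the payload and place it at its destination path.
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    urllib.request.urlretrieve(url, dest)

def write_link(target: str, link_path: str) -> None:
    # "[started] writing link ...": create the symlink that points at the sysext image.
    os.makedirs(os.path.dirname(link_path), exist_ok=True)
    if os.path.lexists(link_path):
        os.remove(link_path)
    os.symlink(target, link_path)

if __name__ == "__main__":
    write_file(IMAGE_URL, IMAGE_DEST)
    write_link(LINK_TARGET, LINK_PATH)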
May 15 00:44:10.287407 initrd-setup-root-after-ignition[1983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:44:10.287407 initrd-setup-root-after-ignition[1983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 00:44:10.281927 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:44:10.333624 initrd-setup-root-after-ignition[1987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:44:10.294697 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 00:44:10.323967 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 00:44:10.365858 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 00:44:10.366052 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 00:44:10.375598 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 00:44:10.385863 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 00:44:10.402829 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 00:44:10.415944 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 00:44:10.437696 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 00:44:10.454897 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 00:44:10.477975 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 00:44:10.483947 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:44:10.495612 systemd[1]: Stopped target timers.target - Timer Units. May 15 00:44:10.507370 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 00:44:10.507508 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 00:44:10.519213 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 00:44:10.530671 systemd[1]: Stopped target basic.target - Basic System. May 15 00:44:10.542310 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 00:44:10.553892 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 00:44:10.565365 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 00:44:10.576789 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 00:44:10.588235 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 00:44:10.599701 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 00:44:10.611167 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 00:44:10.628204 systemd[1]: Stopped target swap.target - Swaps. May 15 00:44:10.639769 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 00:44:10.639883 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 00:44:10.651567 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 00:44:10.662934 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:44:10.674222 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
May 15 00:44:10.674298 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:44:10.685568 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 00:44:10.685666 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 00:44:10.697099 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 00:44:10.697223 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 00:44:10.708488 systemd[1]: Stopped target paths.target - Path Units. May 15 00:44:10.719746 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 00:44:10.723808 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:44:10.737019 systemd[1]: Stopped target slices.target - Slice Units. May 15 00:44:10.748624 systemd[1]: Stopped target sockets.target - Socket Units. May 15 00:44:10.760296 systemd[1]: iscsid.socket: Deactivated successfully. May 15 00:44:10.760406 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 00:44:10.860785 ignition[2010]: INFO : Ignition 2.20.0 May 15 00:44:10.860785 ignition[2010]: INFO : Stage: umount May 15 00:44:10.860785 ignition[2010]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:44:10.860785 ignition[2010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 15 00:44:10.860785 ignition[2010]: INFO : umount: umount passed May 15 00:44:10.860785 ignition[2010]: INFO : POST message to Packet Timeline May 15 00:44:10.860785 ignition[2010]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 15 00:44:10.771970 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 00:44:10.772028 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 00:44:10.783824 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 00:44:10.783914 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:44:10.795516 systemd[1]: ignition-files.service: Deactivated successfully. May 15 00:44:10.795601 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 00:44:10.807240 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 15 00:44:10.807342 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 15 00:44:10.833940 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 00:44:10.842562 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 00:44:10.842673 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:44:10.855433 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 00:44:10.866988 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 00:44:10.867102 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:44:10.878598 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 00:44:10.878684 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 00:44:10.892448 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 00:44:10.895228 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 00:44:10.895318 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 00:44:10.925942 systemd[1]: sysroot-boot.service: Deactivated successfully. 
May 15 00:44:10.927834 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 00:44:11.330772 ignition[2010]: INFO : GET result: OK May 15 00:44:11.609957 ignition[2010]: INFO : Ignition finished successfully May 15 00:44:11.612021 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 00:44:11.612309 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 00:44:11.625362 systemd[1]: Stopped target network.target - Network. May 15 00:44:11.634566 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 00:44:11.634643 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 00:44:11.644277 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 00:44:11.644312 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 00:44:11.653993 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 00:44:11.654044 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 00:44:11.663884 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 00:44:11.663916 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 00:44:11.673766 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 00:44:11.673838 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 00:44:11.683870 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 00:44:11.693541 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 00:44:11.703557 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 00:44:11.703656 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 00:44:11.717350 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 00:44:11.718569 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 00:44:11.718670 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:44:11.731675 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 00:44:11.731971 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 00:44:11.732089 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 00:44:11.739467 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 00:44:11.740388 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 00:44:11.740567 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 00:44:11.761900 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 00:44:11.769832 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 00:44:11.769886 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:44:11.785450 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:44:11.785491 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 00:44:11.796179 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 00:44:11.796228 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 00:44:11.812282 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:44:11.830128 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
May 15 00:44:11.830496 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 00:44:11.830623 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:44:11.854864 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 00:44:11.855025 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 00:44:11.863337 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 00:44:11.863401 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:44:11.874545 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 00:44:11.874615 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 00:44:11.891890 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 00:44:11.891966 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 00:44:11.909104 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:44:11.909179 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:44:11.936895 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 00:44:11.944387 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 00:44:11.944435 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:44:11.956135 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:44:11.956192 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:44:11.974766 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 00:44:11.974845 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 00:44:11.975203 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 00:44:11.975283 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 00:44:12.510359 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 00:44:12.510513 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 00:44:12.522017 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 00:44:12.541972 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 00:44:12.555873 systemd[1]: Switching root. May 15 00:44:12.608424 systemd-journald[900]: Journal stopped May 15 00:44:14.701545 systemd-journald[900]: Received SIGTERM from PID 1 (systemd). May 15 00:44:14.701572 kernel: SELinux: policy capability network_peer_controls=1 May 15 00:44:14.701582 kernel: SELinux: policy capability open_perms=1 May 15 00:44:14.701590 kernel: SELinux: policy capability extended_socket_class=1 May 15 00:44:14.701598 kernel: SELinux: policy capability always_check_network=0 May 15 00:44:14.701605 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 00:44:14.701614 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 00:44:14.701623 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 00:44:14.701631 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 00:44:14.701639 kernel: audit: type=1403 audit(1747269852.781:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 00:44:14.701648 systemd[1]: Successfully loaded SELinux policy in 116.096ms. May 15 00:44:14.701657 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.842ms. 
May 15 00:44:14.701667 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 00:44:14.701679 systemd[1]: Detected architecture arm64. May 15 00:44:14.701690 systemd[1]: Detected first boot. May 15 00:44:14.701699 systemd[1]: Hostname set to <ci-4230.1.1-n-3631181341>. May 15 00:44:14.701708 systemd[1]: Initializing machine ID from random generator. May 15 00:44:14.701716 zram_generator::config[2093]: No configuration found. May 15 00:44:14.701727 systemd[1]: Populated /etc with preset unit settings. May 15 00:44:14.701737 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 00:44:14.701745 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 00:44:14.701754 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 00:44:14.701763 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 00:44:14.701772 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 00:44:14.701784 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 00:44:14.701796 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 00:44:14.701805 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 00:44:14.701814 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 00:44:14.701823 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 00:44:14.701832 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 00:44:14.701841 systemd[1]: Created slice user.slice - User and Session Slice. May 15 00:44:14.701850 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:44:14.701859 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:44:14.701870 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 00:44:14.701879 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 00:44:14.701888 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 00:44:14.701897 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 00:44:14.701906 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 15 00:44:14.701915 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:44:14.701924 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 00:44:14.701935 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 00:44:14.701944 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 00:44:14.701954 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 00:44:14.701963 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:44:14.701972 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 00:44:14.701981 systemd[1]: Reached target slices.target - Slice Units. May 15 00:44:14.701990 systemd[1]: Reached target swap.target - Swaps. May 15 00:44:14.701999 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 00:44:14.702009 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 00:44:14.702019 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 00:44:14.702028 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 00:44:14.702038 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 00:44:14.702047 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:44:14.702056 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 00:44:14.702067 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 00:44:14.702078 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 00:44:14.702087 systemd[1]: Mounting media.mount - External Media Directory... May 15 00:44:14.702096 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 00:44:14.702106 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 00:44:14.702115 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 00:44:14.702125 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 00:44:14.702134 systemd[1]: Reached target machines.target - Containers. May 15 00:44:14.702145 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 00:44:14.702154 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:44:14.702163 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 00:44:14.702173 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 00:44:14.702182 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:44:14.702191 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 00:44:14.702200 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:44:14.702209 kernel: ACPI: bus type drm_connector registered May 15 00:44:14.702218 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 00:44:14.702228 kernel: fuse: init (API version 7.39) May 15 00:44:14.702237 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:44:14.702247 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 00:44:14.702256 kernel: loop: module loaded May 15 00:44:14.702265 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 00:44:14.702274 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 00:44:14.702283 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 00:44:14.702292 systemd[1]: Stopped systemd-fsck-usr.service. 
May 15 00:44:14.702303 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 00:44:14.702313 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 00:44:14.702322 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 00:44:14.702346 systemd-journald[2204]: Collecting audit messages is disabled. May 15 00:44:14.702370 systemd-journald[2204]: Journal started May 15 00:44:14.702389 systemd-journald[2204]: Runtime Journal (/run/log/journal/47694edf422543e0afb4cf445403e33d) is 8M, max 4G, 3.9G free. May 15 00:44:13.331890 systemd[1]: Queued start job for default target multi-user.target. May 15 00:44:13.344061 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 15 00:44:13.344380 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 00:44:13.344673 systemd[1]: systemd-journald.service: Consumed 3.517s CPU time. May 15 00:44:14.726792 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 00:44:14.753796 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 00:44:14.781805 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 00:44:14.803792 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 00:44:14.826384 systemd[1]: verity-setup.service: Deactivated successfully. May 15 00:44:14.826399 systemd[1]: Stopped verity-setup.service. May 15 00:44:14.851798 systemd[1]: Started systemd-journald.service - Journal Service. May 15 00:44:14.857695 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 00:44:14.863311 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 00:44:14.868890 systemd[1]: Mounted media.mount - External Media Directory. May 15 00:44:14.874360 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 00:44:14.879806 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 00:44:14.885195 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 00:44:14.890718 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 00:44:14.896286 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:44:14.901905 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 00:44:14.902062 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 00:44:14.907497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:44:14.908807 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:44:14.914138 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:44:14.914300 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 00:44:14.921748 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:44:14.923811 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:44:14.929315 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 00:44:14.929472 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 00:44:14.936082 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 15 00:44:14.936232 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:44:14.941923 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 00:44:14.947121 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 00:44:14.952187 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 00:44:14.957274 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 00:44:14.963872 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:44:14.979112 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 00:44:15.001876 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 00:44:15.007856 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 00:44:15.012691 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 00:44:15.012719 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 00:44:15.018302 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 00:44:15.023884 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 00:44:15.029709 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 00:44:15.034506 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:44:15.035926 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 00:44:15.041602 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 00:44:15.046326 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:44:15.047400 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 00:44:15.052156 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 00:44:15.052721 systemd-journald[2204]: Time spent on flushing to /var/log/journal/47694edf422543e0afb4cf445403e33d is 24.385ms for 2361 entries. May 15 00:44:15.052721 systemd-journald[2204]: System Journal (/var/log/journal/47694edf422543e0afb4cf445403e33d) is 8M, max 195.6M, 187.6M free. May 15 00:44:15.094096 systemd-journald[2204]: Received client request to flush runtime journal. May 15 00:44:15.094216 kernel: loop0: detected capacity change from 0 to 194096 May 15 00:44:15.053231 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:44:15.094878 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 00:44:15.070816 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 00:44:15.076619 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 00:44:15.082425 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 15 00:44:15.109276 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 00:44:15.113868 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
May 15 00:44:15.119225 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 00:44:15.124092 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 00:44:15.130140 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 00:44:15.136809 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:44:15.141634 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 00:44:15.153770 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 00:44:15.157797 kernel: loop1: detected capacity change from 0 to 113512 May 15 00:44:15.180055 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 00:44:15.186282 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 00:44:15.191980 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 00:44:15.192815 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 00:44:15.199588 udevadm[2250]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 15 00:44:15.213792 kernel: loop2: detected capacity change from 0 to 8 May 15 00:44:15.216041 systemd-tmpfiles[2274]: ACLs are not supported, ignoring. May 15 00:44:15.216054 systemd-tmpfiles[2274]: ACLs are not supported, ignoring. May 15 00:44:15.221812 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:44:15.270793 kernel: loop3: detected capacity change from 0 to 123192 May 15 00:44:15.308544 ldconfig[2239]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 00:44:15.309889 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 00:44:15.314794 kernel: loop4: detected capacity change from 0 to 194096 May 15 00:44:15.338788 kernel: loop5: detected capacity change from 0 to 113512 May 15 00:44:15.354785 kernel: loop6: detected capacity change from 0 to 8 May 15 00:44:15.366818 kernel: loop7: detected capacity change from 0 to 123192 May 15 00:44:15.370832 (sd-merge)[2292]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. May 15 00:44:15.371270 (sd-merge)[2292]: Merged extensions into '/usr'. May 15 00:44:15.374115 systemd[1]: Reload requested from client PID 2247 ('systemd-sysext') (unit systemd-sysext.service)... May 15 00:44:15.374128 systemd[1]: Reloading... May 15 00:44:15.411795 zram_generator::config[2322]: No configuration found. May 15 00:44:15.503719 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:44:15.564535 systemd[1]: Reloading finished in 190 ms. May 15 00:44:15.580176 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 00:44:15.585894 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 00:44:15.606117 systemd[1]: Starting ensure-sysext.service... May 15 00:44:15.612061 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 00:44:15.618798 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 15 00:44:15.629803 systemd[1]: Reload requested from client PID 2374 ('systemctl') (unit ensure-sysext.service)... May 15 00:44:15.629817 systemd[1]: Reloading... May 15 00:44:15.632066 systemd-tmpfiles[2375]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 00:44:15.632259 systemd-tmpfiles[2375]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 00:44:15.632917 systemd-tmpfiles[2375]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 00:44:15.633125 systemd-tmpfiles[2375]: ACLs are not supported, ignoring. May 15 00:44:15.633172 systemd-tmpfiles[2375]: ACLs are not supported, ignoring. May 15 00:44:15.636104 systemd-tmpfiles[2375]: Detected autofs mount point /boot during canonicalization of boot. May 15 00:44:15.636111 systemd-tmpfiles[2375]: Skipping /boot May 15 00:44:15.644736 systemd-tmpfiles[2375]: Detected autofs mount point /boot during canonicalization of boot. May 15 00:44:15.644744 systemd-tmpfiles[2375]: Skipping /boot May 15 00:44:15.646177 systemd-udevd[2376]: Using default interface naming scheme 'v255'. May 15 00:44:15.682790 zram_generator::config[2424]: No configuration found. May 15 00:44:15.706792 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (2441) May 15 00:44:15.730185 kernel: IPMI message handler: version 39.2 May 15 00:44:15.739791 kernel: ipmi device interface May 15 00:44:15.757657 kernel: ipmi_si: IPMI System Interface driver May 15 00:44:15.757721 kernel: ipmi_si: Unable to find any System Interface(s) May 15 00:44:15.763788 kernel: ipmi_ssif: IPMI SSIF Interface driver May 15 00:44:15.807045 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:44:15.886988 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 15 00:44:15.887464 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. May 15 00:44:15.892153 systemd[1]: Reloading finished in 262 ms. May 15 00:44:15.905236 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:44:15.929677 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:44:15.952012 systemd[1]: Finished ensure-sysext.service. May 15 00:44:15.956836 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 15 00:44:15.995957 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 00:44:16.002107 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 00:44:16.007386 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:44:16.008496 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 15 00:44:16.014538 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:44:16.020696 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 00:44:16.026602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:44:16.026810 lvm[2619]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 15 00:44:16.032516 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:44:16.034756 augenrules[2639]: No rules May 15 00:44:16.037595 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:44:16.038533 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 00:44:16.043453 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 00:44:16.044670 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 00:44:16.051335 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 00:44:16.058024 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 00:44:16.064337 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 00:44:16.070032 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 00:44:16.075795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:44:16.081351 systemd[1]: audit-rules.service: Deactivated successfully. May 15 00:44:16.081535 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 00:44:16.086637 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 00:44:16.092852 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 15 00:44:16.097981 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:44:16.098132 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:44:16.103160 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:44:16.103309 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 00:44:16.108351 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:44:16.108509 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:44:16.113488 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:44:16.114232 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:44:16.119174 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 00:44:16.124754 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 00:44:16.129792 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:44:16.143459 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 00:44:16.159931 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 15 00:44:16.163957 lvm[2667]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:44:16.164547 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:44:16.164615 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 00:44:16.165839 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
May 15 00:44:16.172449 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 00:44:16.177234 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:44:16.178328 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 00:44:16.183307 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 00:44:16.204304 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 00:44:16.209697 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 00:44:16.266933 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 00:44:16.272029 systemd[1]: Reached target time-set.target - System Time Set. May 15 00:44:16.277080 systemd-resolved[2648]: Positive Trust Anchors: May 15 00:44:16.277092 systemd-resolved[2648]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:44:16.277125 systemd-resolved[2648]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 00:44:16.280469 systemd-resolved[2648]: Using system hostname 'ci-4230.1.1-n-3631181341'. May 15 00:44:16.280576 systemd-networkd[2647]: lo: Link UP May 15 00:44:16.280581 systemd-networkd[2647]: lo: Gained carrier May 15 00:44:16.281747 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 00:44:16.284287 systemd-networkd[2647]: bond0: netdev ready May 15 00:44:16.286225 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 00:44:16.290601 systemd[1]: Reached target sysinit.target - System Initialization. May 15 00:44:16.293469 systemd-networkd[2647]: Enumeration completed May 15 00:44:16.294892 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 00:44:16.297006 systemd-networkd[2647]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:49:b3:c8.network. May 15 00:44:16.299227 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 00:44:16.303729 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 00:44:16.308115 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 00:44:16.312490 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 00:44:16.316857 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 00:44:16.316881 systemd[1]: Reached target paths.target - Path Units. May 15 00:44:16.321218 systemd[1]: Reached target timers.target - Timer Units. May 15 00:44:16.326322 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
May 15 00:44:16.332173 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 00:44:16.338461 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 00:44:16.345356 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 00:44:16.350311 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 00:44:16.355306 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 00:44:16.360046 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 00:44:16.364585 systemd[1]: Reached target network.target - Network. May 15 00:44:16.369004 systemd[1]: Reached target sockets.target - Socket Units. May 15 00:44:16.373356 systemd[1]: Reached target basic.target - Basic System. May 15 00:44:16.377660 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 00:44:16.377679 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 00:44:16.388880 systemd[1]: Starting containerd.service - containerd container runtime... May 15 00:44:16.394487 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 15 00:44:16.400426 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 00:44:16.406095 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 00:44:16.411755 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 00:44:16.415918 coreos-metadata[2700]: May 15 00:44:16.415 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 15 00:44:16.416116 jq[2704]: false May 15 00:44:16.416332 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 00:44:16.417363 coreos-metadata[2700]: May 15 00:44:16.417 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 15 00:44:16.417477 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 00:44:16.421479 dbus-daemon[2701]: [system] SELinux support is enabled May 15 00:44:16.423075 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 00:44:16.428723 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
May 15 00:44:16.432707 extend-filesystems[2705]: Found loop4 May 15 00:44:16.438171 extend-filesystems[2705]: Found loop5 May 15 00:44:16.438171 extend-filesystems[2705]: Found loop6 May 15 00:44:16.438171 extend-filesystems[2705]: Found loop7 May 15 00:44:16.438171 extend-filesystems[2705]: Found nvme0n1 May 15 00:44:16.438171 extend-filesystems[2705]: Found nvme0n1p1 May 15 00:44:16.438171 extend-filesystems[2705]: Found nvme0n1p2 May 15 00:44:16.438171 extend-filesystems[2705]: Found nvme0n1p3 May 15 00:44:16.438171 extend-filesystems[2705]: Found usr May 15 00:44:16.438171 extend-filesystems[2705]: Found nvme0n1p4 May 15 00:44:16.438171 extend-filesystems[2705]: Found nvme0n1p6 May 15 00:44:16.438171 extend-filesystems[2705]: Found nvme0n1p7 May 15 00:44:16.438171 extend-filesystems[2705]: Found nvme0n1p9 May 15 00:44:16.438171 extend-filesystems[2705]: Checking size of /dev/nvme0n1p9 May 15 00:44:16.578096 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks May 15 00:44:16.578168 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (2587) May 15 00:44:16.434615 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 00:44:16.578293 extend-filesystems[2705]: Resized partition /dev/nvme0n1p9 May 15 00:44:16.446978 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 00:44:16.583189 extend-filesystems[2724]: resize2fs 1.47.1 (20-May-2024) May 15 00:44:16.578939 dbus-daemon[2701]: [system] Successfully activated service 'org.freedesktop.systemd1' May 15 00:44:16.452948 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 00:44:16.498003 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 00:44:16.507094 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 00:44:16.507720 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 00:44:16.588705 update_engine[2736]: I20250515 00:44:16.557960 2736 main.cc:92] Flatcar Update Engine starting May 15 00:44:16.588705 update_engine[2736]: I20250515 00:44:16.560730 2736 update_check_scheduler.cc:74] Next update check in 7m15s May 15 00:44:16.508337 systemd[1]: Starting update-engine.service - Update Engine... May 15 00:44:16.588963 jq[2737]: true May 15 00:44:16.516502 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 00:44:16.525369 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 00:44:16.589236 tar[2739]: linux-arm64/helm May 15 00:44:16.539074 systemd-logind[2723]: Watching system buttons on /dev/input/event0 (Power Button) May 15 00:44:16.539153 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 00:44:16.539374 systemd-logind[2723]: New seat seat0. May 15 00:44:16.589733 jq[2741]: true May 15 00:44:16.539374 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 00:44:16.539650 systemd[1]: motdgen.service: Deactivated successfully. May 15 00:44:16.539829 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 00:44:16.550211 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 15 00:44:16.550390 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 00:44:16.559080 systemd[1]: Started systemd-logind.service - User Login Management. May 15 00:44:16.578904 (ntainerd)[2742]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 00:44:16.597587 systemd[1]: Started update-engine.service - Update Engine. May 15 00:44:16.603596 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 00:44:16.603744 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 00:44:16.608777 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 00:44:16.608883 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 00:44:16.615155 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 00:44:16.618496 bash[2766]: Updated "/home/core/.ssh/authorized_keys" May 15 00:44:16.623690 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 00:44:16.631095 systemd[1]: Starting sshkeys.service... May 15 00:44:16.643701 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 15 00:44:16.649908 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 15 00:44:16.655005 locksmithd[2767]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 00:44:16.669989 coreos-metadata[2778]: May 15 00:44:16.669 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 15 00:44:16.671155 coreos-metadata[2778]: May 15 00:44:16.671 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 15 00:44:16.720991 containerd[2742]: time="2025-05-15T00:44:16.720912400Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 15 00:44:16.742747 containerd[2742]: time="2025-05-15T00:44:16.742710040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 00:44:16.743937 containerd[2742]: time="2025-05-15T00:44:16.743903160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 00:44:16.743958 containerd[2742]: time="2025-05-15T00:44:16.743935600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 00:44:16.743958 containerd[2742]: time="2025-05-15T00:44:16.743950520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 00:44:16.744112 containerd[2742]: time="2025-05-15T00:44:16.744095800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 May 15 00:44:16.744133 containerd[2742]: time="2025-05-15T00:44:16.744116600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 15 00:44:16.744184 containerd[2742]: time="2025-05-15T00:44:16.744169760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:44:16.744204 containerd[2742]: time="2025-05-15T00:44:16.744190800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 00:44:16.744995 containerd[2742]: time="2025-05-15T00:44:16.744960920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:44:16.745025 containerd[2742]: time="2025-05-15T00:44:16.744996640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 00:44:16.745025 containerd[2742]: time="2025-05-15T00:44:16.745012680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:44:16.745057 containerd[2742]: time="2025-05-15T00:44:16.745024400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 00:44:16.745179 containerd[2742]: time="2025-05-15T00:44:16.745163080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 00:44:16.745392 containerd[2742]: time="2025-05-15T00:44:16.745375880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 00:44:16.745537 containerd[2742]: time="2025-05-15T00:44:16.745520800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:44:16.745629 containerd[2742]: time="2025-05-15T00:44:16.745617320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 00:44:16.745716 containerd[2742]: time="2025-05-15T00:44:16.745701720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 00:44:16.745753 containerd[2742]: time="2025-05-15T00:44:16.745743960Z" level=info msg="metadata content store policy set" policy=shared May 15 00:44:16.753426 containerd[2742]: time="2025-05-15T00:44:16.753400760Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 00:44:16.753465 containerd[2742]: time="2025-05-15T00:44:16.753445560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 00:44:16.753465 containerd[2742]: time="2025-05-15T00:44:16.753461600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 15 00:44:16.753538 containerd[2742]: time="2025-05-15T00:44:16.753478120Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 May 15 00:44:16.753538 containerd[2742]: time="2025-05-15T00:44:16.753493400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 00:44:16.753646 containerd[2742]: time="2025-05-15T00:44:16.753625960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 00:44:16.753913 containerd[2742]: time="2025-05-15T00:44:16.753897040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 00:44:16.754037 containerd[2742]: time="2025-05-15T00:44:16.754022280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 15 00:44:16.754062 containerd[2742]: time="2025-05-15T00:44:16.754039840Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 15 00:44:16.754062 containerd[2742]: time="2025-05-15T00:44:16.754055720Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 15 00:44:16.754094 containerd[2742]: time="2025-05-15T00:44:16.754068960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 00:44:16.754094 containerd[2742]: time="2025-05-15T00:44:16.754085200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 00:44:16.754126 containerd[2742]: time="2025-05-15T00:44:16.754097560Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 00:44:16.754126 containerd[2742]: time="2025-05-15T00:44:16.754112080Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 00:44:16.754161 containerd[2742]: time="2025-05-15T00:44:16.754126200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 00:44:16.754161 containerd[2742]: time="2025-05-15T00:44:16.754138920Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 00:44:16.754161 containerd[2742]: time="2025-05-15T00:44:16.754150920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 00:44:16.754211 containerd[2742]: time="2025-05-15T00:44:16.754161320Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 00:44:16.754211 containerd[2742]: time="2025-05-15T00:44:16.754181400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 00:44:16.754211 containerd[2742]: time="2025-05-15T00:44:16.754193880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 00:44:16.754211 containerd[2742]: time="2025-05-15T00:44:16.754206240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 00:44:16.754279 containerd[2742]: time="2025-05-15T00:44:16.754218680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 00:44:16.754279 containerd[2742]: time="2025-05-15T00:44:16.754234800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 May 15 00:44:16.754279 containerd[2742]: time="2025-05-15T00:44:16.754247160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 00:44:16.754279 containerd[2742]: time="2025-05-15T00:44:16.754257600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 00:44:16.754279 containerd[2742]: time="2025-05-15T00:44:16.754269320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 00:44:16.754358 containerd[2742]: time="2025-05-15T00:44:16.754280880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 00:44:16.754358 containerd[2742]: time="2025-05-15T00:44:16.754294720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 15 00:44:16.754358 containerd[2742]: time="2025-05-15T00:44:16.754305120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 00:44:16.754358 containerd[2742]: time="2025-05-15T00:44:16.754315720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 15 00:44:16.754358 containerd[2742]: time="2025-05-15T00:44:16.754327080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 00:44:16.754358 containerd[2742]: time="2025-05-15T00:44:16.754341840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 15 00:44:16.754455 containerd[2742]: time="2025-05-15T00:44:16.754364200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 15 00:44:16.754455 containerd[2742]: time="2025-05-15T00:44:16.754377200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 00:44:16.754455 containerd[2742]: time="2025-05-15T00:44:16.754387080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 00:44:16.754573 containerd[2742]: time="2025-05-15T00:44:16.754563320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 00:44:16.754595 containerd[2742]: time="2025-05-15T00:44:16.754581240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 00:44:16.754595 containerd[2742]: time="2025-05-15T00:44:16.754591000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 00:44:16.754631 containerd[2742]: time="2025-05-15T00:44:16.754601800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 00:44:16.754631 containerd[2742]: time="2025-05-15T00:44:16.754610680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 00:44:16.754631 containerd[2742]: time="2025-05-15T00:44:16.754621880Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 15 00:44:16.754678 containerd[2742]: time="2025-05-15T00:44:16.754636400Z" level=info msg="NRI interface is disabled by configuration." 
May 15 00:44:16.754678 containerd[2742]: time="2025-05-15T00:44:16.754650640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 15 00:44:16.755061 containerd[2742]: time="2025-05-15T00:44:16.755019520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 00:44:16.755161 containerd[2742]: time="2025-05-15T00:44:16.755070280Z" level=info msg="Connect containerd service" May 15 00:44:16.755161 containerd[2742]: time="2025-05-15T00:44:16.755097440Z" level=info msg="using legacy CRI server" May 15 00:44:16.755161 containerd[2742]: time="2025-05-15T00:44:16.755103880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 00:44:16.755370 containerd[2742]: time="2025-05-15T00:44:16.755359280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 00:44:16.756010 containerd[2742]: time="2025-05-15T00:44:16.755987760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 00:44:16.756209 containerd[2742]: time="2025-05-15T00:44:16.756173680Z" level=info msg="Start subscribing containerd event" May 15 00:44:16.756238 containerd[2742]: time="2025-05-15T00:44:16.756229560Z" level=info msg="Start recovering state" May 15 00:44:16.756304 containerd[2742]: time="2025-05-15T00:44:16.756294400Z" level=info msg="Start event monitor" May 15 00:44:16.756323 containerd[2742]: time="2025-05-15T00:44:16.756308520Z" level=info msg="Start snapshots syncer" May 15 00:44:16.756323 containerd[2742]: time="2025-05-15T00:44:16.756319200Z" level=info msg="Start cni network conf syncer for default" May 15 00:44:16.756363 containerd[2742]: time="2025-05-15T00:44:16.756326960Z" level=info msg="Start streaming server" May 15 00:44:16.756497 containerd[2742]: time="2025-05-15T00:44:16.756482640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 00:44:16.756537 containerd[2742]: time="2025-05-15T00:44:16.756528760Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 00:44:16.756584 containerd[2742]: time="2025-05-15T00:44:16.756576320Z" level=info msg="containerd successfully booted in 0.036468s" May 15 00:44:16.756631 systemd[1]: Started containerd.service - containerd container runtime. May 15 00:44:16.878641 tar[2739]: linux-arm64/LICENSE May 15 00:44:16.878709 tar[2739]: linux-arm64/README.md May 15 00:44:16.896822 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 00:44:16.917059 sshd_keygen[2729]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 00:44:16.935090 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 00:44:16.961143 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 00:44:16.969913 systemd[1]: issuegen.service: Deactivated successfully. May 15 00:44:16.970106 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 00:44:16.976848 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 00:44:16.989356 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 00:44:16.995863 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 00:44:17.002053 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 15 00:44:17.007145 systemd[1]: Reached target getty.target - Login Prompts. May 15 00:44:17.031794 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 May 15 00:44:17.046817 extend-filesystems[2724]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 15 00:44:17.046817 extend-filesystems[2724]: old_desc_blocks = 1, new_desc_blocks = 112 May 15 00:44:17.046817 extend-filesystems[2724]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. May 15 00:44:17.074943 extend-filesystems[2705]: Resized filesystem in /dev/nvme0n1p9 May 15 00:44:17.074943 extend-filesystems[2705]: Found nvme1n1 May 15 00:44:17.049262 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 00:44:17.049558 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 00:44:17.061799 systemd[1]: extend-filesystems.service: Consumed 219ms CPU time, 68.8M memory peak. 
May 15 00:44:17.417455 coreos-metadata[2700]: May 15 00:44:17.417 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 15 00:44:17.417981 coreos-metadata[2700]: May 15 00:44:17.417 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 15 00:44:17.609795 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up May 15 00:44:17.626788 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link May 15 00:44:17.627986 systemd-networkd[2647]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:49:b3:c9.network. May 15 00:44:17.671337 coreos-metadata[2778]: May 15 00:44:17.671 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 15 00:44:17.671700 coreos-metadata[2778]: May 15 00:44:17.671 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 15 00:44:18.237795 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 15 00:44:18.254593 systemd-networkd[2647]: bond0: Configuring with /etc/systemd/network/05-bond0.network. May 15 00:44:18.254784 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link May 15 00:44:18.256025 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 00:44:18.256569 systemd-networkd[2647]: enP1p1s0f0np0: Link UP May 15 00:44:18.256816 systemd-networkd[2647]: enP1p1s0f0np0: Gained carrier May 15 00:44:18.265731 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 15 00:44:18.285219 systemd-networkd[2647]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:49:b3:c8.network. May 15 00:44:18.285505 systemd-networkd[2647]: enP1p1s0f1np1: Link UP May 15 00:44:18.285708 systemd-networkd[2647]: enP1p1s0f1np1: Gained carrier May 15 00:44:18.302065 systemd-networkd[2647]: bond0: Link UP May 15 00:44:18.302272 systemd-networkd[2647]: bond0: Gained carrier May 15 00:44:18.302429 systemd-timesyncd[2649]: Network configuration changed, trying to establish connection. May 15 00:44:18.303057 systemd-timesyncd[2649]: Network configuration changed, trying to establish connection. May 15 00:44:18.303310 systemd-timesyncd[2649]: Network configuration changed, trying to establish connection. May 15 00:44:18.303442 systemd-timesyncd[2649]: Network configuration changed, trying to establish connection. May 15 00:44:18.380849 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex May 15 00:44:18.380883 kernel: bond0: active interface up! May 15 00:44:18.504789 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex May 15 00:44:19.418073 coreos-metadata[2700]: May 15 00:44:19.418 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 15 00:44:19.671837 coreos-metadata[2778]: May 15 00:44:19.671 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 15 00:44:19.794130 systemd-timesyncd[2649]: Network configuration changed, trying to establish connection. May 15 00:44:20.305887 systemd-networkd[2647]: bond0: Gained IPv6LL May 15 00:44:20.306231 systemd-timesyncd[2649]: Network configuration changed, trying to establish connection. May 15 00:44:20.308107 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 00:44:20.313867 systemd[1]: Reached target network-online.target - Network is Online. 
May 15 00:44:20.332087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:44:20.338669 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 00:44:20.360472 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 00:44:20.913037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:44:20.918867 (kubelet)[2846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:44:21.386408 kubelet[2846]: E0515 00:44:21.386366 2846 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:44:21.389259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:44:21.389401 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:44:21.389750 systemd[1]: kubelet.service: Consumed 715ms CPU time, 252.7M memory peak. May 15 00:44:21.793609 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 May 15 00:44:21.793919 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity May 15 00:44:21.998471 coreos-metadata[2700]: May 15 00:44:21.998 INFO Fetch successful May 15 00:44:22.043547 login[2822]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying May 15 00:44:22.043821 login[2821]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 15 00:44:22.043939 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 00:44:22.045251 systemd[1]: Started sshd@0-147.28.145.22:22-139.178.68.195:33714.service - OpenSSH per-connection server daemon (139.178.68.195:33714). May 15 00:44:22.056240 systemd-logind[2723]: New session 2 of user core. May 15 00:44:22.059009 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 00:44:22.060415 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 00:44:22.062866 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 15 00:44:22.065109 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... May 15 00:44:22.071614 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 00:44:22.074151 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 00:44:22.079985 (systemd)[2886]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 00:44:22.081931 systemd-logind[2723]: New session c1 of user core. May 15 00:44:22.099817 coreos-metadata[2778]: May 15 00:44:22.099 INFO Fetch successful May 15 00:44:22.152716 unknown[2778]: wrote ssh authorized keys file for user: core May 15 00:44:22.182777 update-ssh-keys[2894]: Updated "/home/core/.ssh/authorized_keys" May 15 00:44:22.185819 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 15 00:44:22.187454 systemd[1]: Finished sshkeys.service. May 15 00:44:22.209981 systemd[2886]: Queued start job for default target default.target. May 15 00:44:22.222881 systemd[2886]: Created slice app.slice - User Application Slice. May 15 00:44:22.222907 systemd[2886]: Reached target paths.target - Paths. 
May 15 00:44:22.222939 systemd[2886]: Reached target timers.target - Timers. May 15 00:44:22.224219 systemd[2886]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 00:44:22.232430 systemd[2886]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 00:44:22.232481 systemd[2886]: Reached target sockets.target - Sockets. May 15 00:44:22.232524 systemd[2886]: Reached target basic.target - Basic System. May 15 00:44:22.232551 systemd[2886]: Reached target default.target - Main User Target. May 15 00:44:22.232572 systemd[2886]: Startup finished in 146ms. May 15 00:44:22.232907 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 00:44:22.234451 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 00:44:22.363657 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. May 15 00:44:22.364143 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 00:44:22.364271 systemd[1]: Startup finished in 3.225s (kernel) + 19.309s (initrd) + 9.698s (userspace) = 32.233s. May 15 00:44:22.451268 sshd[2874]: Accepted publickey for core from 139.178.68.195 port 33714 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:44:22.452407 sshd-session[2874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:44:22.455400 systemd-logind[2723]: New session 3 of user core. May 15 00:44:22.468886 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 00:44:22.808569 systemd[1]: Started sshd@1-147.28.145.22:22-139.178.68.195:33724.service - OpenSSH per-connection server daemon (139.178.68.195:33724). May 15 00:44:23.046047 login[2822]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 15 00:44:23.049595 systemd-logind[2723]: New session 1 of user core. May 15 00:44:23.058897 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 00:44:23.225022 sshd[2917]: Accepted publickey for core from 139.178.68.195 port 33724 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:44:23.226254 sshd-session[2917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:44:23.228892 systemd-logind[2723]: New session 4 of user core. May 15 00:44:23.238893 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 00:44:23.525921 sshd[2927]: Connection closed by 139.178.68.195 port 33724 May 15 00:44:23.526559 sshd-session[2917]: pam_unix(sshd:session): session closed for user core May 15 00:44:23.530276 systemd[1]: sshd@1-147.28.145.22:22-139.178.68.195:33724.service: Deactivated successfully. May 15 00:44:23.532470 systemd[1]: session-4.scope: Deactivated successfully. May 15 00:44:23.533012 systemd-logind[2723]: Session 4 logged out. Waiting for processes to exit. May 15 00:44:23.533524 systemd-logind[2723]: Removed session 4. May 15 00:44:23.602629 systemd[1]: Started sshd@2-147.28.145.22:22-139.178.68.195:33726.service - OpenSSH per-connection server daemon (139.178.68.195:33726). May 15 00:44:24.022400 sshd[2934]: Accepted publickey for core from 139.178.68.195 port 33726 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:44:24.023527 sshd-session[2934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:44:24.026521 systemd-logind[2723]: New session 5 of user core. May 15 00:44:24.035888 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 15 00:44:24.322867 sshd[2936]: Connection closed by 139.178.68.195 port 33726 May 15 00:44:24.323500 sshd-session[2934]: pam_unix(sshd:session): session closed for user core May 15 00:44:24.326932 systemd[1]: sshd@2-147.28.145.22:22-139.178.68.195:33726.service: Deactivated successfully. May 15 00:44:24.329147 systemd[1]: session-5.scope: Deactivated successfully. May 15 00:44:24.329683 systemd-logind[2723]: Session 5 logged out. Waiting for processes to exit. May 15 00:44:24.330241 systemd-logind[2723]: Removed session 5. May 15 00:44:24.395694 systemd[1]: Started sshd@3-147.28.145.22:22-139.178.68.195:53880.service - OpenSSH per-connection server daemon (139.178.68.195:53880). May 15 00:44:24.808172 sshd[2942]: Accepted publickey for core from 139.178.68.195 port 53880 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:44:24.809189 sshd-session[2942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:44:24.811967 systemd-logind[2723]: New session 6 of user core. May 15 00:44:24.822949 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 00:44:25.106622 sshd[2944]: Connection closed by 139.178.68.195 port 53880 May 15 00:44:25.107126 sshd-session[2942]: pam_unix(sshd:session): session closed for user core May 15 00:44:25.110708 systemd[1]: sshd@3-147.28.145.22:22-139.178.68.195:53880.service: Deactivated successfully. May 15 00:44:25.112537 systemd[1]: session-6.scope: Deactivated successfully. May 15 00:44:25.113259 systemd-logind[2723]: Session 6 logged out. Waiting for processes to exit. May 15 00:44:25.113769 systemd-logind[2723]: Removed session 6. May 15 00:44:25.178582 systemd[1]: Started sshd@4-147.28.145.22:22-139.178.68.195:53886.service - OpenSSH per-connection server daemon (139.178.68.195:53886). May 15 00:44:25.292193 systemd-timesyncd[2649]: Network configuration changed, trying to establish connection. May 15 00:44:25.579279 sshd[2952]: Accepted publickey for core from 139.178.68.195 port 53886 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:44:25.580424 sshd-session[2952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:44:25.583557 systemd-logind[2723]: New session 7 of user core. May 15 00:44:25.597948 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 00:44:25.817037 sudo[2955]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 00:44:25.817288 sudo[2955]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:44:25.831550 sudo[2955]: pam_unix(sudo:session): session closed for user root May 15 00:44:25.893745 sshd[2954]: Connection closed by 139.178.68.195 port 53886 May 15 00:44:25.894344 sshd-session[2952]: pam_unix(sshd:session): session closed for user core May 15 00:44:25.898083 systemd[1]: sshd@4-147.28.145.22:22-139.178.68.195:53886.service: Deactivated successfully. May 15 00:44:25.900216 systemd[1]: session-7.scope: Deactivated successfully. May 15 00:44:25.900811 systemd-logind[2723]: Session 7 logged out. Waiting for processes to exit. May 15 00:44:25.901413 systemd-logind[2723]: Removed session 7. May 15 00:44:25.969630 systemd[1]: Started sshd@5-147.28.145.22:22-139.178.68.195:53888.service - OpenSSH per-connection server daemon (139.178.68.195:53888). 
May 15 00:44:26.396576 sshd[2962]: Accepted publickey for core from 139.178.68.195 port 53888 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:44:26.397585 sshd-session[2962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:44:26.400285 systemd-logind[2723]: New session 8 of user core. May 15 00:44:26.418943 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 00:44:26.637563 sudo[2966]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 00:44:26.637828 sudo[2966]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:44:26.640350 sudo[2966]: pam_unix(sudo:session): session closed for user root May 15 00:44:26.644503 sudo[2965]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 00:44:26.644752 sudo[2965]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:44:26.668011 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 00:44:26.689732 augenrules[2988]: No rules May 15 00:44:26.690830 systemd[1]: audit-rules.service: Deactivated successfully. May 15 00:44:26.691034 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 00:44:26.691723 sudo[2965]: pam_unix(sudo:session): session closed for user root May 15 00:44:26.756745 sshd[2964]: Connection closed by 139.178.68.195 port 53888 May 15 00:44:26.757134 sshd-session[2962]: pam_unix(sshd:session): session closed for user core May 15 00:44:26.759965 systemd[1]: sshd@5-147.28.145.22:22-139.178.68.195:53888.service: Deactivated successfully. May 15 00:44:26.762280 systemd[1]: session-8.scope: Deactivated successfully. May 15 00:44:26.764197 systemd-logind[2723]: Session 8 logged out. Waiting for processes to exit. May 15 00:44:26.764739 systemd-logind[2723]: Removed session 8. May 15 00:44:26.830507 systemd[1]: Started sshd@6-147.28.145.22:22-139.178.68.195:53898.service - OpenSSH per-connection server daemon (139.178.68.195:53898). May 15 00:44:27.251386 sshd[2997]: Accepted publickey for core from 139.178.68.195 port 53898 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:44:27.253440 sshd-session[2997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:44:27.256217 systemd-logind[2723]: New session 9 of user core. May 15 00:44:27.264931 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 00:44:27.491053 sudo[3000]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 00:44:27.491322 sudo[3000]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:44:27.791064 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 00:44:27.791135 (dockerd)[3033]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 00:44:28.012810 dockerd[3033]: time="2025-05-15T00:44:28.012595600Z" level=info msg="Starting up" May 15 00:44:28.081985 dockerd[3033]: time="2025-05-15T00:44:28.081916840Z" level=info msg="Loading containers: start." May 15 00:44:28.211805 kernel: Initializing XFRM netlink socket May 15 00:44:28.229519 systemd-timesyncd[2649]: Network configuration changed, trying to establish connection. 
May 15 00:44:28.275678 systemd-networkd[2647]: docker0: Link UP May 15 00:44:28.315014 dockerd[3033]: time="2025-05-15T00:44:28.314986000Z" level=info msg="Loading containers: done." May 15 00:44:28.323700 dockerd[3033]: time="2025-05-15T00:44:28.323670760Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 00:44:28.323768 dockerd[3033]: time="2025-05-15T00:44:28.323744480Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 15 00:44:28.323923 dockerd[3033]: time="2025-05-15T00:44:28.323907200Z" level=info msg="Daemon has completed initialization" May 15 00:44:28.343722 dockerd[3033]: time="2025-05-15T00:44:28.343559360Z" level=info msg="API listen on /run/docker.sock" May 15 00:44:28.343692 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 00:44:27.927589 systemd-resolved[2648]: Clock change detected. Flushing caches. May 15 00:44:27.935452 systemd-journald[2204]: Time jumped backwards, rotating. May 15 00:44:27.927736 systemd-timesyncd[2649]: Contacted time server [2604:2dc0:202:300::140d]:123 (2.flatcar.pool.ntp.org). May 15 00:44:27.927782 systemd-timesyncd[2649]: Initial clock synchronization to Thu 2025-05-15 00:44:27.927537 UTC. May 15 00:44:28.473238 containerd[2742]: time="2025-05-15T00:44:28.473202479Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 15 00:44:28.552154 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3594808265-merged.mount: Deactivated successfully. May 15 00:44:28.977140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732182633.mount: Deactivated successfully. 
May 15 00:44:30.839247 containerd[2742]: time="2025-05-15T00:44:30.839172679Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150" May 15 00:44:30.839247 containerd[2742]: time="2025-05-15T00:44:30.839189119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:30.840368 containerd[2742]: time="2025-05-15T00:44:30.840343639Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:30.843268 containerd[2742]: time="2025-05-15T00:44:30.843246199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:30.844363 containerd[2742]: time="2025-05-15T00:44:30.844336039Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.37109484s" May 15 00:44:30.844388 containerd[2742]: time="2025-05-15T00:44:30.844373719Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 15 00:44:30.863063 containerd[2742]: time="2025-05-15T00:44:30.863040199Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 15 00:44:30.952906 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 00:44:30.967204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:44:31.060657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:44:31.063935 (kubelet)[3375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:44:31.114608 kubelet[3375]: E0515 00:44:31.114541 3375 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:44:31.118121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:44:31.118255 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:44:31.119141 systemd[1]: kubelet.service: Consumed 135ms CPU time, 108M memory peak. 
May 15 00:44:32.548167 containerd[2742]: time="2025-05-15T00:44:32.548131519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:32.548451 containerd[2742]: time="2025-05-15T00:44:32.548202279Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550" May 15 00:44:32.549185 containerd[2742]: time="2025-05-15T00:44:32.549158639Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:32.551963 containerd[2742]: time="2025-05-15T00:44:32.551937119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:32.553101 containerd[2742]: time="2025-05-15T00:44:32.553064599Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.68998596s" May 15 00:44:32.553141 containerd[2742]: time="2025-05-15T00:44:32.553110839Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 15 00:44:32.572202 containerd[2742]: time="2025-05-15T00:44:32.572170079Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 15 00:44:33.897409 containerd[2742]: time="2025-05-15T00:44:33.897354399Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945" May 15 00:44:33.897409 containerd[2742]: time="2025-05-15T00:44:33.897365159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:33.898452 containerd[2742]: time="2025-05-15T00:44:33.898427399Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:33.901368 containerd[2742]: time="2025-05-15T00:44:33.901343199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:33.902475 containerd[2742]: time="2025-05-15T00:44:33.902444279Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.33022844s" May 15 00:44:33.902503 containerd[2742]: time="2025-05-15T00:44:33.902483199Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 15 00:44:33.921360 
containerd[2742]: time="2025-05-15T00:44:33.921336799Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 15 00:44:34.524160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3426424072.mount: Deactivated successfully. May 15 00:44:35.097459 containerd[2742]: time="2025-05-15T00:44:35.097394759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:35.097748 containerd[2742]: time="2025-05-15T00:44:35.097429839Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705" May 15 00:44:35.098293 containerd[2742]: time="2025-05-15T00:44:35.098072359Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:35.099917 containerd[2742]: time="2025-05-15T00:44:35.099892439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:35.100635 containerd[2742]: time="2025-05-15T00:44:35.100616639Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.17925108s" May 15 00:44:35.100664 containerd[2742]: time="2025-05-15T00:44:35.100642079Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 15 00:44:35.118902 containerd[2742]: time="2025-05-15T00:44:35.118874279Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 00:44:35.428180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount730083551.mount: Deactivated successfully. 
May 15 00:44:35.916194 containerd[2742]: time="2025-05-15T00:44:35.916153999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:35.916340 containerd[2742]: time="2025-05-15T00:44:35.916162039Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" May 15 00:44:35.917322 containerd[2742]: time="2025-05-15T00:44:35.917294119Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:35.920162 containerd[2742]: time="2025-05-15T00:44:35.920133759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:35.921318 containerd[2742]: time="2025-05-15T00:44:35.921281919Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 802.36592ms" May 15 00:44:35.921363 containerd[2742]: time="2025-05-15T00:44:35.921319319Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 15 00:44:35.939974 containerd[2742]: time="2025-05-15T00:44:35.939944999Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 15 00:44:36.167657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1465308945.mount: Deactivated successfully. 
May 15 00:44:36.168007 containerd[2742]: time="2025-05-15T00:44:36.167975839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:36.168184 containerd[2742]: time="2025-05-15T00:44:36.167999399Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" May 15 00:44:36.168719 containerd[2742]: time="2025-05-15T00:44:36.168699559Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:36.170911 containerd[2742]: time="2025-05-15T00:44:36.170890079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:36.172466 containerd[2742]: time="2025-05-15T00:44:36.172361759Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 232.3724ms" May 15 00:44:36.172466 containerd[2742]: time="2025-05-15T00:44:36.172405239Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 15 00:44:36.190636 containerd[2742]: time="2025-05-15T00:44:36.190608359Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 15 00:44:36.483624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2223895499.mount: Deactivated successfully. May 15 00:44:39.658281 containerd[2742]: time="2025-05-15T00:44:39.658238839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:39.658659 containerd[2742]: time="2025-05-15T00:44:39.658295399Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" May 15 00:44:39.659433 containerd[2742]: time="2025-05-15T00:44:39.659407439Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:39.662333 containerd[2742]: time="2025-05-15T00:44:39.662313519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:44:39.663574 containerd[2742]: time="2025-05-15T00:44:39.663535159Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.47287632s" May 15 00:44:39.663617 containerd[2742]: time="2025-05-15T00:44:39.663586799Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 15 00:44:41.202838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 15 00:44:41.212318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:44:41.311820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:44:41.315153 (kubelet)[3798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:44:41.347434 kubelet[3798]: E0515 00:44:41.347393 3798 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:44:41.349669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:44:41.349802 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:44:41.351091 systemd[1]: kubelet.service: Consumed 126ms CPU time, 106.6M memory peak. May 15 00:44:43.445061 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:44:43.445286 systemd[1]: kubelet.service: Consumed 126ms CPU time, 106.6M memory peak. May 15 00:44:43.456336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:44:43.474941 systemd[1]: Reload requested from client PID 3831 ('systemctl') (unit session-9.scope)... May 15 00:44:43.474952 systemd[1]: Reloading... May 15 00:44:43.549997 zram_generator::config[3884]: No configuration found. May 15 00:44:43.639374 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:44:43.730191 systemd[1]: Reloading finished in 254 ms. May 15 00:44:43.769890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:44:43.773256 (kubelet)[3936]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:44:43.774603 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:44:43.775073 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:44:43.775299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:44:43.775336 systemd[1]: kubelet.service: Consumed 76ms CPU time, 82.4M memory peak. May 15 00:44:43.778119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:44:43.875928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:44:43.879263 (kubelet)[3948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:44:43.910479 kubelet[3948]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:44:43.910479 kubelet[3948]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:44:43.910479 kubelet[3948]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:44:43.911500 kubelet[3948]: I0515 00:44:43.911466 3948 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:44:44.537387 kubelet[3948]: I0515 00:44:44.537361 3948 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 00:44:44.537387 kubelet[3948]: I0515 00:44:44.537385 3948 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:44:44.537551 kubelet[3948]: I0515 00:44:44.537539 3948 server.go:927] "Client rotation is on, will bootstrap in background" May 15 00:44:44.555036 kubelet[3948]: I0515 00:44:44.554336 3948 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:44:44.555516 kubelet[3948]: E0515 00:44:44.555495 3948 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.28.145.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.28.145.22:6443: connect: connection refused May 15 00:44:44.579076 kubelet[3948]: I0515 00:44:44.579050 3948 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 00:44:44.580185 kubelet[3948]: I0515 00:44:44.580146 3948 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:44:44.580338 kubelet[3948]: I0515 00:44:44.580187 3948 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-3631181341","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 00:44:44.580409 kubelet[3948]: I0515 00:44:44.580402 3948 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:44:44.580430 kubelet[3948]: I0515 00:44:44.580411 3948 container_manager_linux.go:301] "Creating device plugin manager" May 15 
00:44:44.580670 kubelet[3948]: I0515 00:44:44.580657 3948 state_mem.go:36] "Initialized new in-memory state store" May 15 00:44:44.581581 kubelet[3948]: I0515 00:44:44.581566 3948 kubelet.go:400] "Attempting to sync node with API server" May 15 00:44:44.581608 kubelet[3948]: I0515 00:44:44.581583 3948 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:44:44.581804 kubelet[3948]: I0515 00:44:44.581795 3948 kubelet.go:312] "Adding apiserver pod source" May 15 00:44:44.581825 kubelet[3948]: I0515 00:44:44.581809 3948 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:44:44.582251 kubelet[3948]: W0515 00:44:44.582212 3948 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.145.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-3631181341&limit=500&resourceVersion=0": dial tcp 147.28.145.22:6443: connect: connection refused May 15 00:44:44.582271 kubelet[3948]: E0515 00:44:44.582262 3948 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.28.145.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-n-3631181341&limit=500&resourceVersion=0": dial tcp 147.28.145.22:6443: connect: connection refused May 15 00:44:44.582330 kubelet[3948]: W0515 00:44:44.582293 3948 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.145.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.145.22:6443: connect: connection refused May 15 00:44:44.582355 kubelet[3948]: E0515 00:44:44.582339 3948 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.28.145.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.145.22:6443: connect: connection refused May 15 00:44:44.582698 kubelet[3948]: I0515 00:44:44.582686 3948 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 00:44:44.583043 kubelet[3948]: I0515 00:44:44.583033 3948 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:44:44.583144 kubelet[3948]: W0515 00:44:44.583136 3948 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
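The earlier run.go:74 failure and the deprecation warnings both point at the kubelet config file: /var/lib/kubelet/config.yaml did not exist until the bootstrap wrote it, and flags such as --container-runtime-endpoint are expected to move into that file. For illustration only, a minimal KubeletConfiguration matching the shape of the nodeConfig dump above (cgroup driver and eviction thresholds mirror the logged values; this is a sketch written to a scratch path, not the node's actual file):

  cat <<'EOF' > /tmp/kubelet-config-example.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests
  evictionHard:
    memory.available: "100Mi"
    nodefs.available: "10%"
    nodefs.inodesFree: "5%"
    imagefs.available: "15%"
    imagefs.inodesFree: "5%"
  EOF
  # the real file is normally written by `kubeadm init` / `kubeadm join`
  ls -l /var/lib/kubelet/config.yaml && journalctl -u kubelet -n 20 --no-pager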
May 15 00:44:44.583888 kubelet[3948]: I0515 00:44:44.583876 3948 server.go:1264] "Started kubelet" May 15 00:44:44.584132 kubelet[3948]: I0515 00:44:44.584099 3948 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:44:44.584215 kubelet[3948]: I0515 00:44:44.584163 3948 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:44:44.584453 kubelet[3948]: I0515 00:44:44.584437 3948 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:44:44.585022 kubelet[3948]: I0515 00:44:44.585006 3948 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:44:44.585246 kubelet[3948]: I0515 00:44:44.585233 3948 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 00:44:44.585311 kubelet[3948]: I0515 00:44:44.585291 3948 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 00:44:44.585354 kubelet[3948]: I0515 00:44:44.585343 3948 reconciler.go:26] "Reconciler: start to sync state" May 15 00:44:44.585444 kubelet[3948]: W0515 00:44:44.585402 3948 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.145.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.145.22:6443: connect: connection refused May 15 00:44:44.585471 kubelet[3948]: E0515 00:44:44.585423 3948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.145.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-3631181341?timeout=10s\": dial tcp 147.28.145.22:6443: connect: connection refused" interval="200ms" May 15 00:44:44.585471 kubelet[3948]: E0515 00:44:44.585456 3948 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.28.145.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.145.22:6443: connect: connection refused May 15 00:44:44.586111 kubelet[3948]: E0515 00:44:44.585905 3948 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.145.22:6443/api/v1/namespaces/default/events\": dial tcp 147.28.145.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-n-3631181341.183f8cb3dd5bb537 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-n-3631181341,UID:ci-4230.1.1-n-3631181341,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-n-3631181341,},FirstTimestamp:2025-05-15 00:44:44.583851319 +0000 UTC m=+0.701787241,LastTimestamp:2025-05-15 00:44:44.583851319 +0000 UTC m=+0.701787241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-n-3631181341,}" May 15 00:44:44.586722 kubelet[3948]: I0515 00:44:44.586707 3948 server.go:455] "Adding debug handlers to kubelet server" May 15 00:44:44.587149 kubelet[3948]: I0515 00:44:44.587132 3948 factory.go:221] Registration of the containerd container factory successfully May 15 00:44:44.587175 kubelet[3948]: I0515 00:44:44.587149 3948 factory.go:221] Registration of the systemd container factory successfully May 15 00:44:44.587248 kubelet[3948]: I0515 00:44:44.587233 3948 factory.go:219] Registration of the crio 
container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:44:44.588554 kubelet[3948]: E0515 00:44:44.588520 3948 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:44:44.599592 kubelet[3948]: I0515 00:44:44.599557 3948 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:44:44.600572 kubelet[3948]: I0515 00:44:44.600558 3948 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 00:44:44.600716 kubelet[3948]: I0515 00:44:44.600707 3948 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:44:44.600746 kubelet[3948]: I0515 00:44:44.600736 3948 kubelet.go:2337] "Starting kubelet main sync loop" May 15 00:44:44.600792 kubelet[3948]: E0515 00:44:44.600777 3948 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:44:44.601227 kubelet[3948]: W0515 00:44:44.601183 3948 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.145.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.145.22:6443: connect: connection refused May 15 00:44:44.601247 kubelet[3948]: E0515 00:44:44.601236 3948 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.28.145.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.145.22:6443: connect: connection refused May 15 00:44:44.604339 kubelet[3948]: I0515 00:44:44.604322 3948 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:44:44.604339 kubelet[3948]: I0515 00:44:44.604336 3948 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:44:44.604377 kubelet[3948]: I0515 00:44:44.604350 3948 state_mem.go:36] "Initialized new in-memory state store" May 15 00:44:44.605154 kubelet[3948]: I0515 00:44:44.605138 3948 policy_none.go:49] "None policy: Start" May 15 00:44:44.605547 kubelet[3948]: I0515 00:44:44.605533 3948 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:44:44.605568 kubelet[3948]: I0515 00:44:44.605551 3948 state_mem.go:35] "Initializing new in-memory state store" May 15 00:44:44.609739 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 00:44:44.622006 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 00:44:44.624423 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
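The three slices created above (kubepods, kubepods-burstable, kubepods-besteffort) are the QoS tiers the kubelet manages under the systemd cgroup driver. A quick way to inspect that hierarchy on a node of this kind, assuming a unified cgroup v2 layout (a sketch, not output captured from this host):

  systemctl status kubepods.slice --no-pager
  ls /sys/fs/cgroup/kubepods.slice/
  ls /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/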
May 15 00:44:44.641452 kubelet[3948]: I0515 00:44:44.641428 3948 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:44:44.641635 kubelet[3948]: I0515 00:44:44.641602 3948 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:44:44.641729 kubelet[3948]: I0515 00:44:44.641718 3948 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:44:44.642474 kubelet[3948]: E0515 00:44:44.642458 3948 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.1-n-3631181341\" not found" May 15 00:44:44.687222 kubelet[3948]: I0515 00:44:44.687201 3948 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-3631181341" May 15 00:44:44.687487 kubelet[3948]: E0515 00:44:44.687465 3948 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.145.22:6443/api/v1/nodes\": dial tcp 147.28.145.22:6443: connect: connection refused" node="ci-4230.1.1-n-3631181341" May 15 00:44:44.701621 kubelet[3948]: I0515 00:44:44.701570 3948 topology_manager.go:215] "Topology Admit Handler" podUID="65fe15231909d9ed4ca9442bbef31da9" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.1-n-3631181341" May 15 00:44:44.702861 kubelet[3948]: I0515 00:44:44.702841 3948 topology_manager.go:215] "Topology Admit Handler" podUID="f1e4323eecf962d632b473693f04fd4b" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.1-n-3631181341" May 15 00:44:44.704401 kubelet[3948]: I0515 00:44:44.704376 3948 topology_manager.go:215] "Topology Admit Handler" podUID="40bd8746c87ef40d3f7239bb9ef77a1b" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.1-n-3631181341" May 15 00:44:44.707911 systemd[1]: Created slice kubepods-burstable-pod65fe15231909d9ed4ca9442bbef31da9.slice - libcontainer container kubepods-burstable-pod65fe15231909d9ed4ca9442bbef31da9.slice. May 15 00:44:44.738368 systemd[1]: Created slice kubepods-burstable-podf1e4323eecf962d632b473693f04fd4b.slice - libcontainer container kubepods-burstable-podf1e4323eecf962d632b473693f04fd4b.slice. May 15 00:44:44.752009 systemd[1]: Created slice kubepods-burstable-pod40bd8746c87ef40d3f7239bb9ef77a1b.slice - libcontainer container kubepods-burstable-pod40bd8746c87ef40d3f7239bb9ef77a1b.slice. 
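The three Topology Admit Handler entries above correspond to static pod manifests the kubelet found under its static pod path, and the per-pod burstable slices created right after are their cgroups. On a kubeadm-style layout the directory would typically look like this (assumed layout, not verified from this log):

  ls /etc/kubernetes/manifests
  # typically: kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
  # (plus etcd.yaml on stacked control planes)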
May 15 00:44:44.786444 kubelet[3948]: E0515 00:44:44.786410 3948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.145.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-3631181341?timeout=10s\": dial tcp 147.28.145.22:6443: connect: connection refused" interval="400ms" May 15 00:44:44.882937 kubelet[3948]: E0515 00:44:44.882791 3948 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.145.22:6443/api/v1/namespaces/default/events\": dial tcp 147.28.145.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-n-3631181341.183f8cb3dd5bb537 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-n-3631181341,UID:ci-4230.1.1-n-3631181341,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-n-3631181341,},FirstTimestamp:2025-05-15 00:44:44.583851319 +0000 UTC m=+0.701787241,LastTimestamp:2025-05-15 00:44:44.583851319 +0000 UTC m=+0.701787241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-n-3631181341,}" May 15 00:44:44.886929 kubelet[3948]: I0515 00:44:44.886901 3948 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1e4323eecf962d632b473693f04fd4b-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-3631181341\" (UID: \"f1e4323eecf962d632b473693f04fd4b\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-3631181341" May 15 00:44:44.887000 kubelet[3948]: I0515 00:44:44.886930 3948 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40bd8746c87ef40d3f7239bb9ef77a1b-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-3631181341\" (UID: \"40bd8746c87ef40d3f7239bb9ef77a1b\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-3631181341" May 15 00:44:44.887000 kubelet[3948]: I0515 00:44:44.886951 3948 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65fe15231909d9ed4ca9442bbef31da9-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-3631181341\" (UID: \"65fe15231909d9ed4ca9442bbef31da9\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-3631181341" May 15 00:44:44.887000 kubelet[3948]: I0515 00:44:44.886967 3948 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1e4323eecf962d632b473693f04fd4b-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-3631181341\" (UID: \"f1e4323eecf962d632b473693f04fd4b\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-3631181341" May 15 00:44:44.887095 kubelet[3948]: I0515 00:44:44.887018 3948 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1e4323eecf962d632b473693f04fd4b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-3631181341\" (UID: \"f1e4323eecf962d632b473693f04fd4b\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-3631181341" May 15 00:44:44.887095 kubelet[3948]: I0515 00:44:44.887073 3948 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40bd8746c87ef40d3f7239bb9ef77a1b-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-3631181341\" (UID: \"40bd8746c87ef40d3f7239bb9ef77a1b\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-3631181341" May 15 00:44:44.887137 kubelet[3948]: I0515 00:44:44.887108 3948 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/40bd8746c87ef40d3f7239bb9ef77a1b-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-3631181341\" (UID: \"40bd8746c87ef40d3f7239bb9ef77a1b\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-3631181341" May 15 00:44:44.887137 kubelet[3948]: I0515 00:44:44.887130 3948 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40bd8746c87ef40d3f7239bb9ef77a1b-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-3631181341\" (UID: \"40bd8746c87ef40d3f7239bb9ef77a1b\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-3631181341" May 15 00:44:44.887175 kubelet[3948]: I0515 00:44:44.887148 3948 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40bd8746c87ef40d3f7239bb9ef77a1b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-3631181341\" (UID: \"40bd8746c87ef40d3f7239bb9ef77a1b\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-3631181341" May 15 00:44:44.889443 kubelet[3948]: I0515 00:44:44.889420 3948 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-3631181341" May 15 00:44:44.889689 kubelet[3948]: E0515 00:44:44.889664 3948 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.145.22:6443/api/v1/nodes\": dial tcp 147.28.145.22:6443: connect: connection refused" node="ci-4230.1.1-n-3631181341" May 15 00:44:45.037871 containerd[2742]: time="2025-05-15T00:44:45.037836199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-3631181341,Uid:65fe15231909d9ed4ca9442bbef31da9,Namespace:kube-system,Attempt:0,}" May 15 00:44:45.051241 containerd[2742]: time="2025-05-15T00:44:45.051216359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-3631181341,Uid:f1e4323eecf962d632b473693f04fd4b,Namespace:kube-system,Attempt:0,}" May 15 00:44:45.053720 containerd[2742]: time="2025-05-15T00:44:45.053679239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-3631181341,Uid:40bd8746c87ef40d3f7239bb9ef77a1b,Namespace:kube-system,Attempt:0,}" May 15 00:44:45.187805 kubelet[3948]: E0515 00:44:45.187733 3948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.145.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-n-3631181341?timeout=10s\": dial tcp 147.28.145.22:6443: connect: connection refused" interval="800ms" May 15 00:44:45.291712 kubelet[3948]: I0515 00:44:45.291686 3948 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-3631181341" May 15 00:44:45.291955 kubelet[3948]: E0515 00:44:45.291926 3948 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.145.22:6443/api/v1/nodes\": dial tcp 147.28.145.22:6443: connect: connection refused" 
node="ci-4230.1.1-n-3631181341" May 15 00:44:45.370285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2586707030.mount: Deactivated successfully. May 15 00:44:45.370990 containerd[2742]: time="2025-05-15T00:44:45.370959719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:44:45.371089 containerd[2742]: time="2025-05-15T00:44:45.370962439Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 15 00:44:45.371779 containerd[2742]: time="2025-05-15T00:44:45.371757319Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:44:45.372158 containerd[2742]: time="2025-05-15T00:44:45.372138239Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 00:44:45.372425 containerd[2742]: time="2025-05-15T00:44:45.372399759Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 00:44:45.375743 containerd[2742]: time="2025-05-15T00:44:45.375712559Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:44:45.376598 containerd[2742]: time="2025-05-15T00:44:45.376576879Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 338.66752ms" May 15 00:44:45.377319 containerd[2742]: time="2025-05-15T00:44:45.377298199Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 326.03008ms" May 15 00:44:45.378187 containerd[2742]: time="2025-05-15T00:44:45.378163359Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:44:45.379306 containerd[2742]: time="2025-05-15T00:44:45.379284239Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 325.52056ms" May 15 00:44:45.379850 containerd[2742]: time="2025-05-15T00:44:45.379779319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:44:45.494558 containerd[2742]: time="2025-05-15T00:44:45.494460679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:44:45.494558 containerd[2742]: time="2025-05-15T00:44:45.494534959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:44:45.494558 containerd[2742]: time="2025-05-15T00:44:45.494546599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:44:45.494706 containerd[2742]: time="2025-05-15T00:44:45.494625959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:44:45.494706 containerd[2742]: time="2025-05-15T00:44:45.494602759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:44:45.494706 containerd[2742]: time="2025-05-15T00:44:45.494691319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:44:45.494766 containerd[2742]: time="2025-05-15T00:44:45.494724079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:44:45.494816 containerd[2742]: time="2025-05-15T00:44:45.494802759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:44:45.494953 containerd[2742]: time="2025-05-15T00:44:45.494906959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:44:45.494975 containerd[2742]: time="2025-05-15T00:44:45.494953119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:44:45.495009 containerd[2742]: time="2025-05-15T00:44:45.494965159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:44:45.495064 containerd[2742]: time="2025-05-15T00:44:45.495048359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:44:45.526176 systemd[1]: Started cri-containerd-64dc011d8c22b0b5ba07447f936fd627c633eb6fe81109de112aea53a682be65.scope - libcontainer container 64dc011d8c22b0b5ba07447f936fd627c633eb6fe81109de112aea53a682be65. May 15 00:44:45.527553 systemd[1]: Started cri-containerd-ca154ed4588ba9727cac58490b709c3b103af377f425254f0fb67936ae5c2c42.scope - libcontainer container ca154ed4588ba9727cac58490b709c3b103af377f425254f0fb67936ae5c2c42. May 15 00:44:45.528882 systemd[1]: Started cri-containerd-f915e8a27106c270b3c9325e137170d9f4d19c91508eb1775b9169983f098c34.scope - libcontainer container f915e8a27106c270b3c9325e137170d9f4d19c91508eb1775b9169983f098c34. 
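Each RunPodSandbox call above ends in a transient cri-containerd-<id>.scope unit wrapping a pause:3.8 sandbox. A hedged sketch of correlating those sandbox IDs back to pods and containers from the node (the <sandbox-id> below is a placeholder; the pod name is the one from this log):

  crictl pods --name kube-apiserver-ci-4230.1.1-n-3631181341
  crictl ps -a --pod <sandbox-id>
  systemctl list-units 'cri-containerd-*.scope' --no-pager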
May 15 00:44:45.550949 containerd[2742]: time="2025-05-15T00:44:45.550917639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-n-3631181341,Uid:40bd8746c87ef40d3f7239bb9ef77a1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"64dc011d8c22b0b5ba07447f936fd627c633eb6fe81109de112aea53a682be65\"" May 15 00:44:45.551044 containerd[2742]: time="2025-05-15T00:44:45.551018399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-n-3631181341,Uid:65fe15231909d9ed4ca9442bbef31da9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca154ed4588ba9727cac58490b709c3b103af377f425254f0fb67936ae5c2c42\"" May 15 00:44:45.551825 containerd[2742]: time="2025-05-15T00:44:45.551758919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-n-3631181341,Uid:f1e4323eecf962d632b473693f04fd4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f915e8a27106c270b3c9325e137170d9f4d19c91508eb1775b9169983f098c34\"" May 15 00:44:45.554199 containerd[2742]: time="2025-05-15T00:44:45.554175199Z" level=info msg="CreateContainer within sandbox \"64dc011d8c22b0b5ba07447f936fd627c633eb6fe81109de112aea53a682be65\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 00:44:45.554261 containerd[2742]: time="2025-05-15T00:44:45.554243719Z" level=info msg="CreateContainer within sandbox \"f915e8a27106c270b3c9325e137170d9f4d19c91508eb1775b9169983f098c34\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 00:44:45.554280 containerd[2742]: time="2025-05-15T00:44:45.554177599Z" level=info msg="CreateContainer within sandbox \"ca154ed4588ba9727cac58490b709c3b103af377f425254f0fb67936ae5c2c42\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 00:44:45.560537 containerd[2742]: time="2025-05-15T00:44:45.560511879Z" level=info msg="CreateContainer within sandbox \"64dc011d8c22b0b5ba07447f936fd627c633eb6fe81109de112aea53a682be65\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0be9d788aa924d9d410307ee68b2b2610e07173232be5b0edb65fb68a20031c7\"" May 15 00:44:45.560940 containerd[2742]: time="2025-05-15T00:44:45.560914319Z" level=info msg="CreateContainer within sandbox \"ca154ed4588ba9727cac58490b709c3b103af377f425254f0fb67936ae5c2c42\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"90ac7900dbaf19f75fc5bd1cf4d46ad8fe818251b977b23b2d060260be90367d\"" May 15 00:44:45.561017 containerd[2742]: time="2025-05-15T00:44:45.560996199Z" level=info msg="StartContainer for \"0be9d788aa924d9d410307ee68b2b2610e07173232be5b0edb65fb68a20031c7\"" May 15 00:44:45.561182 containerd[2742]: time="2025-05-15T00:44:45.561167039Z" level=info msg="StartContainer for \"90ac7900dbaf19f75fc5bd1cf4d46ad8fe818251b977b23b2d060260be90367d\"" May 15 00:44:45.561222 containerd[2742]: time="2025-05-15T00:44:45.561200759Z" level=info msg="CreateContainer within sandbox \"f915e8a27106c270b3c9325e137170d9f4d19c91508eb1775b9169983f098c34\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0b2708636e9cb32fe4605f0a1ba07411680ac13a9e5c22d8bacb286bd0838136\"" May 15 00:44:45.561478 containerd[2742]: time="2025-05-15T00:44:45.561453079Z" level=info msg="StartContainer for \"0b2708636e9cb32fe4605f0a1ba07411680ac13a9e5c22d8bacb286bd0838136\"" May 15 00:44:45.573953 kubelet[3948]: W0515 00:44:45.573902 3948 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: 
Get "https://147.28.145.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.145.22:6443: connect: connection refused May 15 00:44:45.574015 kubelet[3948]: E0515 00:44:45.573966 3948 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.28.145.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.145.22:6443: connect: connection refused May 15 00:44:45.595178 systemd[1]: Started cri-containerd-0b2708636e9cb32fe4605f0a1ba07411680ac13a9e5c22d8bacb286bd0838136.scope - libcontainer container 0b2708636e9cb32fe4605f0a1ba07411680ac13a9e5c22d8bacb286bd0838136. May 15 00:44:45.596328 systemd[1]: Started cri-containerd-0be9d788aa924d9d410307ee68b2b2610e07173232be5b0edb65fb68a20031c7.scope - libcontainer container 0be9d788aa924d9d410307ee68b2b2610e07173232be5b0edb65fb68a20031c7. May 15 00:44:45.597559 systemd[1]: Started cri-containerd-90ac7900dbaf19f75fc5bd1cf4d46ad8fe818251b977b23b2d060260be90367d.scope - libcontainer container 90ac7900dbaf19f75fc5bd1cf4d46ad8fe818251b977b23b2d060260be90367d. May 15 00:44:45.620009 containerd[2742]: time="2025-05-15T00:44:45.619907599Z" level=info msg="StartContainer for \"0b2708636e9cb32fe4605f0a1ba07411680ac13a9e5c22d8bacb286bd0838136\" returns successfully" May 15 00:44:45.621736 containerd[2742]: time="2025-05-15T00:44:45.621669039Z" level=info msg="StartContainer for \"90ac7900dbaf19f75fc5bd1cf4d46ad8fe818251b977b23b2d060260be90367d\" returns successfully" May 15 00:44:45.623359 containerd[2742]: time="2025-05-15T00:44:45.623332199Z" level=info msg="StartContainer for \"0be9d788aa924d9d410307ee68b2b2610e07173232be5b0edb65fb68a20031c7\" returns successfully" May 15 00:44:46.094448 kubelet[3948]: I0515 00:44:46.094426 3948 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-3631181341" May 15 00:44:46.855778 kubelet[3948]: E0515 00:44:46.855741 3948 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.1-n-3631181341\" not found" node="ci-4230.1.1-n-3631181341" May 15 00:44:46.962373 kubelet[3948]: I0515 00:44:46.962341 3948 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.1-n-3631181341" May 15 00:44:47.583407 kubelet[3948]: I0515 00:44:47.583386 3948 apiserver.go:52] "Watching apiserver" May 15 00:44:47.586296 kubelet[3948]: I0515 00:44:47.586277 3948 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 00:44:47.615938 kubelet[3948]: E0515 00:44:47.615914 3948 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.1.1-n-3631181341\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-3631181341" May 15 00:44:47.616015 kubelet[3948]: E0515 00:44:47.615913 3948 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230.1.1-n-3631181341\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.1.1-n-3631181341" May 15 00:44:47.616057 kubelet[3948]: E0515 00:44:47.615915 3948 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-n-3631181341\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.1.1-n-3631181341" May 15 00:44:48.614408 kubelet[3948]: W0515 
00:44:48.614388 3948 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 00:44:48.806717 systemd[1]: Reload requested from client PID 4368 ('systemctl') (unit session-9.scope)... May 15 00:44:48.806727 systemd[1]: Reloading... May 15 00:44:48.862001 zram_generator::config[4419]: No configuration found. May 15 00:44:48.950812 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:44:49.052657 systemd[1]: Reloading finished in 245 ms. May 15 00:44:49.070690 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:44:49.085283 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:44:49.085510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:44:49.085550 systemd[1]: kubelet.service: Consumed 1.144s CPU time, 138.6M memory peak. May 15 00:44:49.104272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:44:49.207159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:44:49.210612 (kubelet)[4479]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:44:49.243162 kubelet[4479]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:44:49.243162 kubelet[4479]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:44:49.243162 kubelet[4479]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:44:49.243394 kubelet[4479]: I0515 00:44:49.243209 4479 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:44:49.246843 kubelet[4479]: I0515 00:44:49.246824 4479 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 00:44:49.246863 kubelet[4479]: I0515 00:44:49.246844 4479 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:44:49.247010 kubelet[4479]: I0515 00:44:49.247001 4479 server.go:927] "Client rotation is on, will bootstrap in background" May 15 00:44:49.248194 kubelet[4479]: I0515 00:44:49.248180 4479 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 00:44:49.249204 kubelet[4479]: I0515 00:44:49.249185 4479 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:44:49.269339 kubelet[4479]: I0515 00:44:49.269313 4479 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 00:44:49.269506 kubelet[4479]: I0515 00:44:49.269484 4479 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:44:49.269649 kubelet[4479]: I0515 00:44:49.269506 4479 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-n-3631181341","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 00:44:49.269717 kubelet[4479]: I0515 00:44:49.269651 4479 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:44:49.269717 kubelet[4479]: I0515 00:44:49.269660 4479 container_manager_linux.go:301] "Creating device plugin manager" May 15 00:44:49.269717 kubelet[4479]: I0515 00:44:49.269689 4479 state_mem.go:36] "Initialized new in-memory state store" May 15 00:44:49.269783 kubelet[4479]: I0515 00:44:49.269773 4479 kubelet.go:400] "Attempting to sync node with API server" May 15 00:44:49.269803 kubelet[4479]: I0515 00:44:49.269784 4479 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:44:49.269820 kubelet[4479]: I0515 00:44:49.269808 4479 kubelet.go:312] "Adding apiserver pod source" May 15 00:44:49.269840 kubelet[4479]: I0515 00:44:49.269822 4479 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:44:49.270218 kubelet[4479]: I0515 00:44:49.270201 4479 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 00:44:49.270376 kubelet[4479]: I0515 00:44:49.270364 4479 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:44:49.271961 kubelet[4479]: I0515 00:44:49.271943 4479 server.go:1264] "Started kubelet" May 15 00:44:49.272046 kubelet[4479]: I0515 00:44:49.272007 4479 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:44:49.272070 kubelet[4479]: I0515 00:44:49.272014 4479 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:44:49.272445 kubelet[4479]: I0515 
00:44:49.272428 4479 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:44:49.272903 kubelet[4479]: I0515 00:44:49.272887 4479 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:44:49.273068 kubelet[4479]: I0515 00:44:49.273053 4479 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 00:44:49.273089 kubelet[4479]: E0515 00:44:49.273064 4479 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230.1.1-n-3631181341\" not found" May 15 00:44:49.273114 kubelet[4479]: I0515 00:44:49.273093 4479 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 00:44:49.273249 kubelet[4479]: I0515 00:44:49.273229 4479 reconciler.go:26] "Reconciler: start to sync state" May 15 00:44:49.273368 kubelet[4479]: E0515 00:44:49.273347 4479 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:44:49.273588 kubelet[4479]: I0515 00:44:49.273571 4479 factory.go:221] Registration of the systemd container factory successfully May 15 00:44:49.273684 kubelet[4479]: I0515 00:44:49.273665 4479 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:44:49.274058 kubelet[4479]: I0515 00:44:49.274041 4479 server.go:455] "Adding debug handlers to kubelet server" May 15 00:44:49.274322 kubelet[4479]: I0515 00:44:49.274309 4479 factory.go:221] Registration of the containerd container factory successfully May 15 00:44:49.280649 kubelet[4479]: I0515 00:44:49.280027 4479 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:44:49.281718 kubelet[4479]: I0515 00:44:49.281699 4479 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 00:44:49.281738 kubelet[4479]: I0515 00:44:49.281733 4479 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:44:49.281755 kubelet[4479]: I0515 00:44:49.281747 4479 kubelet.go:2337] "Starting kubelet main sync loop" May 15 00:44:49.281808 kubelet[4479]: E0515 00:44:49.281792 4479 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:44:49.304490 kubelet[4479]: I0515 00:44:49.304471 4479 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:44:49.304490 kubelet[4479]: I0515 00:44:49.304487 4479 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:44:49.304562 kubelet[4479]: I0515 00:44:49.304503 4479 state_mem.go:36] "Initialized new in-memory state store" May 15 00:44:49.304649 kubelet[4479]: I0515 00:44:49.304633 4479 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 00:44:49.304671 kubelet[4479]: I0515 00:44:49.304644 4479 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 00:44:49.304671 kubelet[4479]: I0515 00:44:49.304661 4479 policy_none.go:49] "None policy: Start" May 15 00:44:49.305100 kubelet[4479]: I0515 00:44:49.305089 4479 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:44:49.305121 kubelet[4479]: I0515 00:44:49.305105 4479 state_mem.go:35] "Initializing new in-memory state store" May 15 00:44:49.305255 kubelet[4479]: I0515 00:44:49.305245 4479 state_mem.go:75] "Updated machine memory state" May 15 00:44:49.308274 kubelet[4479]: I0515 00:44:49.308255 4479 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:44:49.308444 kubelet[4479]: I0515 00:44:49.308408 4479 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:44:49.308524 kubelet[4479]: I0515 00:44:49.308517 4479 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:44:49.376406 kubelet[4479]: I0515 00:44:49.376384 4479 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-n-3631181341" May 15 00:44:49.380878 kubelet[4479]: I0515 00:44:49.380854 4479 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230.1.1-n-3631181341" May 15 00:44:49.380936 kubelet[4479]: I0515 00:44:49.380923 4479 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.1-n-3631181341" May 15 00:44:49.382029 kubelet[4479]: I0515 00:44:49.381999 4479 topology_manager.go:215] "Topology Admit Handler" podUID="f1e4323eecf962d632b473693f04fd4b" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.1-n-3631181341" May 15 00:44:49.382104 kubelet[4479]: I0515 00:44:49.382092 4479 topology_manager.go:215] "Topology Admit Handler" podUID="40bd8746c87ef40d3f7239bb9ef77a1b" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.1-n-3631181341" May 15 00:44:49.382142 kubelet[4479]: I0515 00:44:49.382129 4479 topology_manager.go:215] "Topology Admit Handler" podUID="65fe15231909d9ed4ca9442bbef31da9" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.1-n-3631181341" May 15 00:44:49.389936 kubelet[4479]: W0515 00:44:49.389910 4479 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 00:44:49.390025 kubelet[4479]: W0515 00:44:49.390001 4479 warnings.go:70] metadata.name: 
this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 00:44:49.390025 kubelet[4479]: W0515 00:44:49.390003 4479 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 00:44:49.390114 kubelet[4479]: E0515 00:44:49.390047 4479 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-n-3631181341\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-n-3631181341" May 15 00:44:49.474584 kubelet[4479]: I0515 00:44:49.474510 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40bd8746c87ef40d3f7239bb9ef77a1b-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-3631181341\" (UID: \"40bd8746c87ef40d3f7239bb9ef77a1b\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-3631181341" May 15 00:44:49.474584 kubelet[4479]: I0515 00:44:49.474541 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/40bd8746c87ef40d3f7239bb9ef77a1b-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-n-3631181341\" (UID: \"40bd8746c87ef40d3f7239bb9ef77a1b\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-3631181341" May 15 00:44:49.474584 kubelet[4479]: I0515 00:44:49.474574 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40bd8746c87ef40d3f7239bb9ef77a1b-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-n-3631181341\" (UID: \"40bd8746c87ef40d3f7239bb9ef77a1b\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-3631181341" May 15 00:44:49.474714 kubelet[4479]: I0515 00:44:49.474615 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40bd8746c87ef40d3f7239bb9ef77a1b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-n-3631181341\" (UID: \"40bd8746c87ef40d3f7239bb9ef77a1b\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-3631181341" May 15 00:44:49.474714 kubelet[4479]: I0515 00:44:49.474659 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1e4323eecf962d632b473693f04fd4b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-n-3631181341\" (UID: \"f1e4323eecf962d632b473693f04fd4b\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-3631181341" May 15 00:44:49.474788 kubelet[4479]: I0515 00:44:49.474734 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40bd8746c87ef40d3f7239bb9ef77a1b-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-n-3631181341\" (UID: \"40bd8746c87ef40d3f7239bb9ef77a1b\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-n-3631181341" May 15 00:44:49.474826 kubelet[4479]: I0515 00:44:49.474804 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65fe15231909d9ed4ca9442bbef31da9-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-n-3631181341\" (UID: 
\"65fe15231909d9ed4ca9442bbef31da9\") " pod="kube-system/kube-scheduler-ci-4230.1.1-n-3631181341" May 15 00:44:49.474861 kubelet[4479]: I0515 00:44:49.474836 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1e4323eecf962d632b473693f04fd4b-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-n-3631181341\" (UID: \"f1e4323eecf962d632b473693f04fd4b\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-3631181341" May 15 00:44:49.474882 kubelet[4479]: I0515 00:44:49.474865 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1e4323eecf962d632b473693f04fd4b-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-n-3631181341\" (UID: \"f1e4323eecf962d632b473693f04fd4b\") " pod="kube-system/kube-apiserver-ci-4230.1.1-n-3631181341" May 15 00:44:50.270651 kubelet[4479]: I0515 00:44:50.270571 4479 apiserver.go:52] "Watching apiserver" May 15 00:44:50.273892 kubelet[4479]: I0515 00:44:50.273870 4479 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 00:44:50.292537 kubelet[4479]: W0515 00:44:50.292513 4479 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 00:44:50.292618 kubelet[4479]: E0515 00:44:50.292563 4479 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-n-3631181341\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-n-3631181341" May 15 00:44:50.292705 kubelet[4479]: W0515 00:44:50.292682 4479 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 00:44:50.292751 kubelet[4479]: E0515 00:44:50.292727 4479 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.1.1-n-3631181341\" already exists" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-3631181341" May 15 00:44:50.313980 kubelet[4479]: I0515 00:44:50.313939 4479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.1-n-3631181341" podStartSLOduration=1.313925319 podStartE2EDuration="1.313925319s" podCreationTimestamp="2025-05-15 00:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:44:50.313899599 +0000 UTC m=+1.100364961" watchObservedRunningTime="2025-05-15 00:44:50.313925319 +0000 UTC m=+1.100390681" May 15 00:44:50.325454 kubelet[4479]: I0515 00:44:50.325413 4479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.1-n-3631181341" podStartSLOduration=1.325402839 podStartE2EDuration="1.325402839s" podCreationTimestamp="2025-05-15 00:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:44:50.319560919 +0000 UTC m=+1.106026281" watchObservedRunningTime="2025-05-15 00:44:50.325402839 +0000 UTC m=+1.111868161" May 15 00:44:50.331950 kubelet[4479]: I0515 00:44:50.331909 4479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.1-n-3631181341" podStartSLOduration=2.331900759 podStartE2EDuration="2.331900759s" 
podCreationTimestamp="2025-05-15 00:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:44:50.325718879 +0000 UTC m=+1.112184241" watchObservedRunningTime="2025-05-15 00:44:50.331900759 +0000 UTC m=+1.118366121" May 15 00:44:53.785719 sudo[3000]: pam_unix(sudo:session): session closed for user root May 15 00:44:53.851028 sshd[2999]: Connection closed by 139.178.68.195 port 53898 May 15 00:44:53.850928 sshd-session[2997]: pam_unix(sshd:session): session closed for user core May 15 00:44:53.854688 systemd[1]: sshd@6-147.28.145.22:22-139.178.68.195:53898.service: Deactivated successfully. May 15 00:44:53.857175 systemd[1]: session-9.scope: Deactivated successfully. May 15 00:44:53.857385 systemd[1]: session-9.scope: Consumed 5.968s CPU time, 273.4M memory peak. May 15 00:44:53.858416 systemd-logind[2723]: Session 9 logged out. Waiting for processes to exit. May 15 00:44:53.858968 systemd-logind[2723]: Removed session 9. May 15 00:45:01.619779 update_engine[2736]: I20250515 00:45:01.619719 2736 update_attempter.cc:509] Updating boot flags... May 15 00:45:01.652001 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (4746) May 15 00:45:01.681003 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (4749) May 15 00:45:03.617969 kubelet[4479]: I0515 00:45:03.617909 4479 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 00:45:03.618327 containerd[2742]: time="2025-05-15T00:45:03.618203269Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 00:45:03.618488 kubelet[4479]: I0515 00:45:03.618357 4479 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 00:45:04.594482 kubelet[4479]: I0515 00:45:04.594406 4479 topology_manager.go:215] "Topology Admit Handler" podUID="2e876982-eea9-4ad0-8189-119cd7503790" podNamespace="kube-system" podName="kube-proxy-h9tlx" May 15 00:45:04.598892 systemd[1]: Created slice kubepods-besteffort-pod2e876982_eea9_4ad0_8189_119cd7503790.slice - libcontainer container kubepods-besteffort-pod2e876982_eea9_4ad0_8189_119cd7503790.slice. 
May 15 00:45:04.682806 kubelet[4479]: I0515 00:45:04.682757 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e876982-eea9-4ad0-8189-119cd7503790-xtables-lock\") pod \"kube-proxy-h9tlx\" (UID: \"2e876982-eea9-4ad0-8189-119cd7503790\") " pod="kube-system/kube-proxy-h9tlx" May 15 00:45:04.682806 kubelet[4479]: I0515 00:45:04.682805 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e876982-eea9-4ad0-8189-119cd7503790-lib-modules\") pod \"kube-proxy-h9tlx\" (UID: \"2e876982-eea9-4ad0-8189-119cd7503790\") " pod="kube-system/kube-proxy-h9tlx" May 15 00:45:04.683118 kubelet[4479]: I0515 00:45:04.682828 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29skh\" (UniqueName: \"kubernetes.io/projected/2e876982-eea9-4ad0-8189-119cd7503790-kube-api-access-29skh\") pod \"kube-proxy-h9tlx\" (UID: \"2e876982-eea9-4ad0-8189-119cd7503790\") " pod="kube-system/kube-proxy-h9tlx" May 15 00:45:04.683118 kubelet[4479]: I0515 00:45:04.682849 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e876982-eea9-4ad0-8189-119cd7503790-kube-proxy\") pod \"kube-proxy-h9tlx\" (UID: \"2e876982-eea9-4ad0-8189-119cd7503790\") " pod="kube-system/kube-proxy-h9tlx" May 15 00:45:04.750513 kubelet[4479]: I0515 00:45:04.750471 4479 topology_manager.go:215] "Topology Admit Handler" podUID="ddd2dc7e-86bc-4637-9eb1-ccc1895e39c9" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-4j874" May 15 00:45:04.754726 systemd[1]: Created slice kubepods-besteffort-podddd2dc7e_86bc_4637_9eb1_ccc1895e39c9.slice - libcontainer container kubepods-besteffort-podddd2dc7e_86bc_4637_9eb1_ccc1895e39c9.slice. May 15 00:45:04.783336 kubelet[4479]: I0515 00:45:04.783316 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ddd2dc7e-86bc-4637-9eb1-ccc1895e39c9-var-lib-calico\") pod \"tigera-operator-797db67f8-4j874\" (UID: \"ddd2dc7e-86bc-4637-9eb1-ccc1895e39c9\") " pod="tigera-operator/tigera-operator-797db67f8-4j874" May 15 00:45:04.783373 kubelet[4479]: I0515 00:45:04.783344 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmzp2\" (UniqueName: \"kubernetes.io/projected/ddd2dc7e-86bc-4637-9eb1-ccc1895e39c9-kube-api-access-vmzp2\") pod \"tigera-operator-797db67f8-4j874\" (UID: \"ddd2dc7e-86bc-4637-9eb1-ccc1895e39c9\") " pod="tigera-operator/tigera-operator-797db67f8-4j874" May 15 00:45:04.914230 containerd[2742]: time="2025-05-15T00:45:04.914150190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h9tlx,Uid:2e876982-eea9-4ad0-8189-119cd7503790,Namespace:kube-system,Attempt:0,}" May 15 00:45:04.926632 containerd[2742]: time="2025-05-15T00:45:04.926572925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:45:04.926676 containerd[2742]: time="2025-05-15T00:45:04.926632532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:45:04.926676 containerd[2742]: time="2025-05-15T00:45:04.926643493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:04.926734 containerd[2742]: time="2025-05-15T00:45:04.926720020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:04.948117 systemd[1]: Started cri-containerd-c7d66aa7ea5ed27372e82e350e9a922cd8469238117a4d77174b5ef66f6f5982.scope - libcontainer container c7d66aa7ea5ed27372e82e350e9a922cd8469238117a4d77174b5ef66f6f5982. May 15 00:45:04.963756 containerd[2742]: time="2025-05-15T00:45:04.963721359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h9tlx,Uid:2e876982-eea9-4ad0-8189-119cd7503790,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7d66aa7ea5ed27372e82e350e9a922cd8469238117a4d77174b5ef66f6f5982\"" May 15 00:45:04.965856 containerd[2742]: time="2025-05-15T00:45:04.965832932Z" level=info msg="CreateContainer within sandbox \"c7d66aa7ea5ed27372e82e350e9a922cd8469238117a4d77174b5ef66f6f5982\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 00:45:04.972706 containerd[2742]: time="2025-05-15T00:45:04.972674984Z" level=info msg="CreateContainer within sandbox \"c7d66aa7ea5ed27372e82e350e9a922cd8469238117a4d77174b5ef66f6f5982\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a41269d1951b2e3ec6aa3d469d0c948bdff1e9c17947f2f5a1a63843a2304896\"" May 15 00:45:04.973108 containerd[2742]: time="2025-05-15T00:45:04.973091346Z" level=info msg="StartContainer for \"a41269d1951b2e3ec6aa3d469d0c948bdff1e9c17947f2f5a1a63843a2304896\"" May 15 00:45:05.001164 systemd[1]: Started cri-containerd-a41269d1951b2e3ec6aa3d469d0c948bdff1e9c17947f2f5a1a63843a2304896.scope - libcontainer container a41269d1951b2e3ec6aa3d469d0c948bdff1e9c17947f2f5a1a63843a2304896. May 15 00:45:05.030962 containerd[2742]: time="2025-05-15T00:45:05.030930247Z" level=info msg="StartContainer for \"a41269d1951b2e3ec6aa3d469d0c948bdff1e9c17947f2f5a1a63843a2304896\" returns successfully" May 15 00:45:05.057573 containerd[2742]: time="2025-05-15T00:45:05.057550008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-4j874,Uid:ddd2dc7e-86bc-4637-9eb1-ccc1895e39c9,Namespace:tigera-operator,Attempt:0,}" May 15 00:45:05.070143 containerd[2742]: time="2025-05-15T00:45:05.070077475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:45:05.070168 containerd[2742]: time="2025-05-15T00:45:05.070138841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:45:05.070168 containerd[2742]: time="2025-05-15T00:45:05.070150442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:05.070247 containerd[2742]: time="2025-05-15T00:45:05.070228329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:05.091115 systemd[1]: Started cri-containerd-f04b00adce281876021a95c60b7a7b4f01ed3ee4a835d6cae9ff8d975cd92d92.scope - libcontainer container f04b00adce281876021a95c60b7a7b4f01ed3ee4a835d6cae9ff8d975cd92d92. 
May 15 00:45:05.114240 containerd[2742]: time="2025-05-15T00:45:05.114212496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-4j874,Uid:ddd2dc7e-86bc-4637-9eb1-ccc1895e39c9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f04b00adce281876021a95c60b7a7b4f01ed3ee4a835d6cae9ff8d975cd92d92\"" May 15 00:45:05.116941 containerd[2742]: time="2025-05-15T00:45:05.116918552Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 15 00:45:05.312157 kubelet[4479]: I0515 00:45:05.312095 4479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h9tlx" podStartSLOduration=1.312077839 podStartE2EDuration="1.312077839s" podCreationTimestamp="2025-05-15 00:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:45:05.311933345 +0000 UTC m=+16.098398707" watchObservedRunningTime="2025-05-15 00:45:05.312077839 +0000 UTC m=+16.098543201" May 15 00:45:06.199412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount89504461.mount: Deactivated successfully. May 15 00:45:07.184300 containerd[2742]: time="2025-05-15T00:45:07.184237717Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:07.184645 containerd[2742]: time="2025-05-15T00:45:07.184219715Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 15 00:45:07.185211 containerd[2742]: time="2025-05-15T00:45:07.184973298Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:07.187087 containerd[2742]: time="2025-05-15T00:45:07.187066712Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:07.187859 containerd[2742]: time="2025-05-15T00:45:07.187832096Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.070881941s" May 15 00:45:07.187901 containerd[2742]: time="2025-05-15T00:45:07.187861219Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 15 00:45:07.189617 containerd[2742]: time="2025-05-15T00:45:07.189592963Z" level=info msg="CreateContainer within sandbox \"f04b00adce281876021a95c60b7a7b4f01ed3ee4a835d6cae9ff8d975cd92d92\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 15 00:45:07.195620 containerd[2742]: time="2025-05-15T00:45:07.195589342Z" level=info msg="CreateContainer within sandbox \"f04b00adce281876021a95c60b7a7b4f01ed3ee4a835d6cae9ff8d975cd92d92\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7ec4569b6e82ebc46be298ba0759edec32168d3e76597625f0802624219df5b9\"" May 15 00:45:07.196109 containerd[2742]: time="2025-05-15T00:45:07.196001936Z" level=info msg="StartContainer for \"7ec4569b6e82ebc46be298ba0759edec32168d3e76597625f0802624219df5b9\"" May 
15 00:45:07.222175 systemd[1]: Started cri-containerd-7ec4569b6e82ebc46be298ba0759edec32168d3e76597625f0802624219df5b9.scope - libcontainer container 7ec4569b6e82ebc46be298ba0759edec32168d3e76597625f0802624219df5b9. May 15 00:45:07.238816 containerd[2742]: time="2025-05-15T00:45:07.238784338Z" level=info msg="StartContainer for \"7ec4569b6e82ebc46be298ba0759edec32168d3e76597625f0802624219df5b9\" returns successfully" May 15 00:45:07.315943 kubelet[4479]: I0515 00:45:07.315895 4479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-4j874" podStartSLOduration=1.243757706 podStartE2EDuration="3.315880837s" podCreationTimestamp="2025-05-15 00:45:04 +0000 UTC" firstStartedPulling="2025-05-15 00:45:05.116348858 +0000 UTC m=+15.902814220" lastFinishedPulling="2025-05-15 00:45:07.188472029 +0000 UTC m=+17.974937351" observedRunningTime="2025-05-15 00:45:07.315706022 +0000 UTC m=+18.102171384" watchObservedRunningTime="2025-05-15 00:45:07.315880837 +0000 UTC m=+18.102346199" May 15 00:45:10.716003 kubelet[4479]: I0515 00:45:10.715961 4479 topology_manager.go:215] "Topology Admit Handler" podUID="a79d223b-c5ea-42a8-8150-67a1378543ac" podNamespace="calico-system" podName="calico-typha-5bdb66bdfc-vlmcd" May 15 00:45:10.721610 systemd[1]: Created slice kubepods-besteffort-poda79d223b_c5ea_42a8_8150_67a1378543ac.slice - libcontainer container kubepods-besteffort-poda79d223b_c5ea_42a8_8150_67a1378543ac.slice. May 15 00:45:10.734161 kubelet[4479]: I0515 00:45:10.734136 4479 topology_manager.go:215] "Topology Admit Handler" podUID="1394c6ce-5b81-41af-8d1c-99f99079370b" podNamespace="calico-system" podName="calico-node-27pvq" May 15 00:45:10.739352 systemd[1]: Created slice kubepods-besteffort-pod1394c6ce_5b81_41af_8d1c_99f99079370b.slice - libcontainer container kubepods-besteffort-pod1394c6ce_5b81_41af_8d1c_99f99079370b.slice. 
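The pod_startup_latency_tracker entries above record per-pod startup timing (podStartSLOduration, podStartE2EDuration, pull timestamps). A small Python sketch, offered as an illustration only, for pulling the end-to-end durations out of kubelet journal lines shaped like these; the regex and the trimmed sample line are assumptions rather than a guaranteed log format.

import re

# Extract pod name and end-to-end startup duration from kubelet
# "Observed pod startup duration" entries like the ones above.
PATTERN = re.compile(r'pod="(?P<pod>[^"]+)".*?podStartE2EDuration="(?P<dur>[0-9.]+)s"')

def startup_durations(lines):
    for line in lines:
        m = PATTERN.search(line)
        if m:
            yield m.group("pod"), float(m.group("dur"))

sample = ('Observed pod startup duration" '
          'pod="tigera-operator/tigera-operator-797db67f8-4j874" '
          'podStartSLOduration=1.243757706 podStartE2EDuration="3.315880837s"')
print(dict(startup_durations([sample])))

Fed the output of journalctl -u kubelet (assuming the kubelet runs as that unit on this node), this would summarize every pod start recorded in the log.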
May 15 00:45:10.819175 kubelet[4479]: I0515 00:45:10.819140 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a79d223b-c5ea-42a8-8150-67a1378543ac-typha-certs\") pod \"calico-typha-5bdb66bdfc-vlmcd\" (UID: \"a79d223b-c5ea-42a8-8150-67a1378543ac\") " pod="calico-system/calico-typha-5bdb66bdfc-vlmcd" May 15 00:45:10.819175 kubelet[4479]: I0515 00:45:10.819175 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1394c6ce-5b81-41af-8d1c-99f99079370b-policysync\") pod \"calico-node-27pvq\" (UID: \"1394c6ce-5b81-41af-8d1c-99f99079370b\") " pod="calico-system/calico-node-27pvq" May 15 00:45:10.819330 kubelet[4479]: I0515 00:45:10.819191 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1394c6ce-5b81-41af-8d1c-99f99079370b-cni-bin-dir\") pod \"calico-node-27pvq\" (UID: \"1394c6ce-5b81-41af-8d1c-99f99079370b\") " pod="calico-system/calico-node-27pvq" May 15 00:45:10.819330 kubelet[4479]: I0515 00:45:10.819208 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1394c6ce-5b81-41af-8d1c-99f99079370b-lib-modules\") pod \"calico-node-27pvq\" (UID: \"1394c6ce-5b81-41af-8d1c-99f99079370b\") " pod="calico-system/calico-node-27pvq" May 15 00:45:10.819330 kubelet[4479]: I0515 00:45:10.819246 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1394c6ce-5b81-41af-8d1c-99f99079370b-tigera-ca-bundle\") pod \"calico-node-27pvq\" (UID: \"1394c6ce-5b81-41af-8d1c-99f99079370b\") " pod="calico-system/calico-node-27pvq" May 15 00:45:10.819330 kubelet[4479]: I0515 00:45:10.819262 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1394c6ce-5b81-41af-8d1c-99f99079370b-node-certs\") pod \"calico-node-27pvq\" (UID: \"1394c6ce-5b81-41af-8d1c-99f99079370b\") " pod="calico-system/calico-node-27pvq" May 15 00:45:10.819330 kubelet[4479]: I0515 00:45:10.819281 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1394c6ce-5b81-41af-8d1c-99f99079370b-flexvol-driver-host\") pod \"calico-node-27pvq\" (UID: \"1394c6ce-5b81-41af-8d1c-99f99079370b\") " pod="calico-system/calico-node-27pvq" May 15 00:45:10.819434 kubelet[4479]: I0515 00:45:10.819299 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1394c6ce-5b81-41af-8d1c-99f99079370b-xtables-lock\") pod \"calico-node-27pvq\" (UID: \"1394c6ce-5b81-41af-8d1c-99f99079370b\") " pod="calico-system/calico-node-27pvq" May 15 00:45:10.819434 kubelet[4479]: I0515 00:45:10.819313 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1394c6ce-5b81-41af-8d1c-99f99079370b-var-run-calico\") pod \"calico-node-27pvq\" (UID: \"1394c6ce-5b81-41af-8d1c-99f99079370b\") " pod="calico-system/calico-node-27pvq" May 15 00:45:10.819434 kubelet[4479]: I0515 00:45:10.819329 4479 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a79d223b-c5ea-42a8-8150-67a1378543ac-tigera-ca-bundle\") pod \"calico-typha-5bdb66bdfc-vlmcd\" (UID: \"a79d223b-c5ea-42a8-8150-67a1378543ac\") " pod="calico-system/calico-typha-5bdb66bdfc-vlmcd" May 15 00:45:10.819434 kubelet[4479]: I0515 00:45:10.819346 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln525\" (UniqueName: \"kubernetes.io/projected/a79d223b-c5ea-42a8-8150-67a1378543ac-kube-api-access-ln525\") pod \"calico-typha-5bdb66bdfc-vlmcd\" (UID: \"a79d223b-c5ea-42a8-8150-67a1378543ac\") " pod="calico-system/calico-typha-5bdb66bdfc-vlmcd" May 15 00:45:10.819434 kubelet[4479]: I0515 00:45:10.819360 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1394c6ce-5b81-41af-8d1c-99f99079370b-cni-net-dir\") pod \"calico-node-27pvq\" (UID: \"1394c6ce-5b81-41af-8d1c-99f99079370b\") " pod="calico-system/calico-node-27pvq" May 15 00:45:10.819528 kubelet[4479]: I0515 00:45:10.819378 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqgdp\" (UniqueName: \"kubernetes.io/projected/1394c6ce-5b81-41af-8d1c-99f99079370b-kube-api-access-qqgdp\") pod \"calico-node-27pvq\" (UID: \"1394c6ce-5b81-41af-8d1c-99f99079370b\") " pod="calico-system/calico-node-27pvq" May 15 00:45:10.819528 kubelet[4479]: I0515 00:45:10.819485 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1394c6ce-5b81-41af-8d1c-99f99079370b-var-lib-calico\") pod \"calico-node-27pvq\" (UID: \"1394c6ce-5b81-41af-8d1c-99f99079370b\") " pod="calico-system/calico-node-27pvq" May 15 00:45:10.819569 kubelet[4479]: I0515 00:45:10.819538 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1394c6ce-5b81-41af-8d1c-99f99079370b-cni-log-dir\") pod \"calico-node-27pvq\" (UID: \"1394c6ce-5b81-41af-8d1c-99f99079370b\") " pod="calico-system/calico-node-27pvq" May 15 00:45:10.841896 kubelet[4479]: I0515 00:45:10.841844 4479 topology_manager.go:215] "Topology Admit Handler" podUID="d4c8c3bf-81ae-482c-8ebe-086a362f11d9" podNamespace="calico-system" podName="csi-node-driver-lhgzs" May 15 00:45:10.842170 kubelet[4479]: E0515 00:45:10.842152 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lhgzs" podUID="d4c8c3bf-81ae-482c-8ebe-086a362f11d9" May 15 00:45:10.920183 kubelet[4479]: I0515 00:45:10.920151 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d4c8c3bf-81ae-482c-8ebe-086a362f11d9-registration-dir\") pod \"csi-node-driver-lhgzs\" (UID: \"d4c8c3bf-81ae-482c-8ebe-086a362f11d9\") " pod="calico-system/csi-node-driver-lhgzs" May 15 00:45:10.920183 kubelet[4479]: I0515 00:45:10.920394 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/d4c8c3bf-81ae-482c-8ebe-086a362f11d9-kubelet-dir\") pod \"csi-node-driver-lhgzs\" (UID: \"d4c8c3bf-81ae-482c-8ebe-086a362f11d9\") " pod="calico-system/csi-node-driver-lhgzs" May 15 00:45:10.920183 kubelet[4479]: I0515 00:45:10.920429 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d4c8c3bf-81ae-482c-8ebe-086a362f11d9-socket-dir\") pod \"csi-node-driver-lhgzs\" (UID: \"d4c8c3bf-81ae-482c-8ebe-086a362f11d9\") " pod="calico-system/csi-node-driver-lhgzs" May 15 00:45:10.920613 kubelet[4479]: I0515 00:45:10.920593 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d4c8c3bf-81ae-482c-8ebe-086a362f11d9-varrun\") pod \"csi-node-driver-lhgzs\" (UID: \"d4c8c3bf-81ae-482c-8ebe-086a362f11d9\") " pod="calico-system/csi-node-driver-lhgzs" May 15 00:45:10.920640 kubelet[4479]: I0515 00:45:10.920624 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59ntm\" (UniqueName: \"kubernetes.io/projected/d4c8c3bf-81ae-482c-8ebe-086a362f11d9-kube-api-access-59ntm\") pod \"csi-node-driver-lhgzs\" (UID: \"d4c8c3bf-81ae-482c-8ebe-086a362f11d9\") " pod="calico-system/csi-node-driver-lhgzs" May 15 00:45:10.921457 kubelet[4479]: E0515 00:45:10.921441 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:10.921504 kubelet[4479]: W0515 00:45:10.921457 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:10.921504 kubelet[4479]: E0515 00:45:10.921476 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:10.921704 kubelet[4479]: E0515 00:45:10.921695 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:10.921729 kubelet[4479]: W0515 00:45:10.921703 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:10.921729 kubelet[4479]: E0515 00:45:10.921711 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:10.922992 kubelet[4479]: E0515 00:45:10.922975 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:10.923034 kubelet[4479]: W0515 00:45:10.922994 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:10.923034 kubelet[4479]: E0515 00:45:10.923011 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:45:10.923248 kubelet[4479]: E0515 00:45:10.923238 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:10.923275 kubelet[4479]: W0515 00:45:10.923247 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:10.923275 kubelet[4479]: E0515 00:45:10.923256 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:10.928751 kubelet[4479]: E0515 00:45:10.928734 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:10.928751 kubelet[4479]: W0515 00:45:10.928747 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:10.928808 kubelet[4479]: E0515 00:45:10.928761 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:10.928990 kubelet[4479]: E0515 00:45:10.928981 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:10.929013 kubelet[4479]: W0515 00:45:10.928994 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:10.929013 kubelet[4479]: E0515 00:45:10.929002 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.021335 kubelet[4479]: E0515 00:45:11.021257 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.021335 kubelet[4479]: W0515 00:45:11.021276 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.021335 kubelet[4479]: E0515 00:45:11.021292 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.021581 kubelet[4479]: E0515 00:45:11.021570 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.021581 kubelet[4479]: W0515 00:45:11.021580 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.021623 kubelet[4479]: E0515 00:45:11.021591 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:45:11.021855 kubelet[4479]: E0515 00:45:11.021843 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.021879 kubelet[4479]: W0515 00:45:11.021855 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.021879 kubelet[4479]: E0515 00:45:11.021870 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.022037 kubelet[4479]: E0515 00:45:11.022026 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.022037 kubelet[4479]: W0515 00:45:11.022034 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.022076 kubelet[4479]: E0515 00:45:11.022046 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.022200 kubelet[4479]: E0515 00:45:11.022191 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.022200 kubelet[4479]: W0515 00:45:11.022199 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.022240 kubelet[4479]: E0515 00:45:11.022209 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.022485 kubelet[4479]: E0515 00:45:11.022474 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.022485 kubelet[4479]: W0515 00:45:11.022482 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.022536 kubelet[4479]: E0515 00:45:11.022494 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.022810 kubelet[4479]: E0515 00:45:11.022794 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.022834 kubelet[4479]: W0515 00:45:11.022810 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.022834 kubelet[4479]: E0515 00:45:11.022830 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:45:11.023030 kubelet[4479]: E0515 00:45:11.023017 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.023030 kubelet[4479]: W0515 00:45:11.023027 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.023084 kubelet[4479]: E0515 00:45:11.023065 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.023185 kubelet[4479]: E0515 00:45:11.023173 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.023185 kubelet[4479]: W0515 00:45:11.023182 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.023225 kubelet[4479]: E0515 00:45:11.023204 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.023341 kubelet[4479]: E0515 00:45:11.023330 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.023341 kubelet[4479]: W0515 00:45:11.023338 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.023382 kubelet[4479]: E0515 00:45:11.023350 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.023406 containerd[2742]: time="2025-05-15T00:45:11.023354104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bdb66bdfc-vlmcd,Uid:a79d223b-c5ea-42a8-8150-67a1378543ac,Namespace:calico-system,Attempt:0,}" May 15 00:45:11.023607 kubelet[4479]: E0515 00:45:11.023496 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.023607 kubelet[4479]: W0515 00:45:11.023504 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.023607 kubelet[4479]: E0515 00:45:11.023515 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:45:11.023732 kubelet[4479]: E0515 00:45:11.023721 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.023732 kubelet[4479]: W0515 00:45:11.023730 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.023775 kubelet[4479]: E0515 00:45:11.023742 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.024001 kubelet[4479]: E0515 00:45:11.023984 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.024001 kubelet[4479]: W0515 00:45:11.023996 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.024047 kubelet[4479]: E0515 00:45:11.024011 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.024209 kubelet[4479]: E0515 00:45:11.024200 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.024232 kubelet[4479]: W0515 00:45:11.024211 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.024232 kubelet[4479]: E0515 00:45:11.024223 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.024436 kubelet[4479]: E0515 00:45:11.024425 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.024458 kubelet[4479]: W0515 00:45:11.024436 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.024479 kubelet[4479]: E0515 00:45:11.024456 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.024659 kubelet[4479]: E0515 00:45:11.024650 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.024681 kubelet[4479]: W0515 00:45:11.024659 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.024705 kubelet[4479]: E0515 00:45:11.024680 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:45:11.024867 kubelet[4479]: E0515 00:45:11.024859 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.024886 kubelet[4479]: W0515 00:45:11.024867 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.024886 kubelet[4479]: E0515 00:45:11.024882 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.025082 kubelet[4479]: E0515 00:45:11.025073 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.025305 kubelet[4479]: W0515 00:45:11.025081 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.025305 kubelet[4479]: E0515 00:45:11.025093 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.025305 kubelet[4479]: E0515 00:45:11.025237 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.025305 kubelet[4479]: W0515 00:45:11.025244 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.025305 kubelet[4479]: E0515 00:45:11.025255 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.025438 kubelet[4479]: E0515 00:45:11.025429 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.025457 kubelet[4479]: W0515 00:45:11.025438 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.025457 kubelet[4479]: E0515 00:45:11.025450 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.025610 kubelet[4479]: E0515 00:45:11.025602 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.025629 kubelet[4479]: W0515 00:45:11.025611 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.025629 kubelet[4479]: E0515 00:45:11.025621 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:45:11.025831 kubelet[4479]: E0515 00:45:11.025823 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.025852 kubelet[4479]: W0515 00:45:11.025831 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.025852 kubelet[4479]: E0515 00:45:11.025842 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.026135 kubelet[4479]: E0515 00:45:11.026127 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.026160 kubelet[4479]: W0515 00:45:11.026135 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.026160 kubelet[4479]: E0515 00:45:11.026151 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.026281 kubelet[4479]: E0515 00:45:11.026273 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.026301 kubelet[4479]: W0515 00:45:11.026281 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.026301 kubelet[4479]: E0515 00:45:11.026290 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.026560 kubelet[4479]: E0515 00:45:11.026550 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.026581 kubelet[4479]: W0515 00:45:11.026561 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.026581 kubelet[4479]: E0515 00:45:11.026571 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:45:11.034892 kubelet[4479]: E0515 00:45:11.034879 4479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:45:11.034912 kubelet[4479]: W0515 00:45:11.034893 4479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:45:11.034938 kubelet[4479]: E0515 00:45:11.034908 4479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:45:11.036244 containerd[2742]: time="2025-05-15T00:45:11.036181769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:45:11.036268 containerd[2742]: time="2025-05-15T00:45:11.036238212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:45:11.036268 containerd[2742]: time="2025-05-15T00:45:11.036249213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:11.036339 containerd[2742]: time="2025-05-15T00:45:11.036324218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:11.041210 containerd[2742]: time="2025-05-15T00:45:11.041183250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-27pvq,Uid:1394c6ce-5b81-41af-8d1c-99f99079370b,Namespace:calico-system,Attempt:0,}" May 15 00:45:11.053396 containerd[2742]: time="2025-05-15T00:45:11.053331311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:45:11.053421 containerd[2742]: time="2025-05-15T00:45:11.053391475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:45:11.053421 containerd[2742]: time="2025-05-15T00:45:11.053403316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:11.053496 containerd[2742]: time="2025-05-15T00:45:11.053478201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:11.061104 systemd[1]: Started cri-containerd-dac78809995ed82e0ec94098c5cd5f7f6b066820de2003625fc03497d72b2885.scope - libcontainer container dac78809995ed82e0ec94098c5cd5f7f6b066820de2003625fc03497d72b2885. May 15 00:45:11.063385 systemd[1]: Started cri-containerd-5f4a79ceeba17cbb92465f087d4596824a2f94e9c56ab36af3cefcfb5e6078a9.scope - libcontainer container 5f4a79ceeba17cbb92465f087d4596824a2f94e9c56ab36af3cefcfb5e6078a9. 
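The long run of repeated driver-call.go / plugins.go errors above comes from the kubelet probing the FlexVolume directory nodeagent~uds for a driver executable named uds that is not installed on this node. A small Python sketch, under the usual <vendor>~<driver>/<driver> layout assumption, for listing FlexVolume entries whose driver binary is missing; the root path is taken from the log lines, everything else is illustrative.

import os

# The kubelet expects <root>/<vendor>~<driver>/<driver>; the errors above
# come from nodeagent~uds/uds being absent. Root path taken from the log.
PLUGIN_ROOT = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec"

def missing_flexvolume_drivers(root: str = PLUGIN_ROOT):
    if not os.path.isdir(root):
        return
    for entry in sorted(os.listdir(root)):
        driver = entry.split("~")[-1]              # "nodeagent~uds" -> "uds"
        exe = os.path.join(root, entry, driver)
        if not (os.path.isfile(exe) and os.access(exe, os.X_OK)):
            yield entry, exe

for entry, exe in missing_flexvolume_drivers():
    print(f"{entry}: driver executable missing or not executable: {exe}")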
May 15 00:45:11.078496 containerd[2742]: time="2025-05-15T00:45:11.078464128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-27pvq,Uid:1394c6ce-5b81-41af-8d1c-99f99079370b,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f4a79ceeba17cbb92465f087d4596824a2f94e9c56ab36af3cefcfb5e6078a9\"" May 15 00:45:11.079679 containerd[2742]: time="2025-05-15T00:45:11.079655725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 00:45:11.083868 containerd[2742]: time="2025-05-15T00:45:11.083844674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bdb66bdfc-vlmcd,Uid:a79d223b-c5ea-42a8-8150-67a1378543ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"dac78809995ed82e0ec94098c5cd5f7f6b066820de2003625fc03497d72b2885\"" May 15 00:45:11.502118 containerd[2742]: time="2025-05-15T00:45:11.502078292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:11.502222 containerd[2742]: time="2025-05-15T00:45:11.502135856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 15 00:45:11.503478 containerd[2742]: time="2025-05-15T00:45:11.503446460Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:11.506111 containerd[2742]: time="2025-05-15T00:45:11.506083510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:11.506859 containerd[2742]: time="2025-05-15T00:45:11.506828558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 427.141911ms" May 15 00:45:11.506888 containerd[2742]: time="2025-05-15T00:45:11.506861320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 15 00:45:11.507680 containerd[2742]: time="2025-05-15T00:45:11.507660531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 00:45:11.508581 containerd[2742]: time="2025-05-15T00:45:11.508558349Z" level=info msg="CreateContainer within sandbox \"5f4a79ceeba17cbb92465f087d4596824a2f94e9c56ab36af3cefcfb5e6078a9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 00:45:11.525346 containerd[2742]: time="2025-05-15T00:45:11.525316227Z" level=info msg="CreateContainer within sandbox \"5f4a79ceeba17cbb92465f087d4596824a2f94e9c56ab36af3cefcfb5e6078a9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a0dd099481cf68cca18a813278603b4d70b6f35c43106803fc353a25d1cdb60a\"" May 15 00:45:11.525652 containerd[2742]: time="2025-05-15T00:45:11.525628927Z" level=info msg="StartContainer for \"a0dd099481cf68cca18a813278603b4d70b6f35c43106803fc353a25d1cdb60a\"" May 15 00:45:11.549153 systemd[1]: Started 
cri-containerd-a0dd099481cf68cca18a813278603b4d70b6f35c43106803fc353a25d1cdb60a.scope - libcontainer container a0dd099481cf68cca18a813278603b4d70b6f35c43106803fc353a25d1cdb60a. May 15 00:45:11.568975 containerd[2742]: time="2025-05-15T00:45:11.568946993Z" level=info msg="StartContainer for \"a0dd099481cf68cca18a813278603b4d70b6f35c43106803fc353a25d1cdb60a\" returns successfully" May 15 00:45:11.581672 systemd[1]: cri-containerd-a0dd099481cf68cca18a813278603b4d70b6f35c43106803fc353a25d1cdb60a.scope: Deactivated successfully. May 15 00:45:11.676046 containerd[2742]: time="2025-05-15T00:45:11.675678497Z" level=info msg="shim disconnected" id=a0dd099481cf68cca18a813278603b4d70b6f35c43106803fc353a25d1cdb60a namespace=k8s.io May 15 00:45:11.676046 containerd[2742]: time="2025-05-15T00:45:11.675731420Z" level=warning msg="cleaning up after shim disconnected" id=a0dd099481cf68cca18a813278603b4d70b6f35c43106803fc353a25d1cdb60a namespace=k8s.io May 15 00:45:11.676046 containerd[2742]: time="2025-05-15T00:45:11.675739661Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:45:12.224068 containerd[2742]: time="2025-05-15T00:45:12.224032027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:12.224345 containerd[2742]: time="2025-05-15T00:45:12.224108592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 15 00:45:12.224767 containerd[2742]: time="2025-05-15T00:45:12.224750551Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:12.226506 containerd[2742]: time="2025-05-15T00:45:12.226482455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:12.227260 containerd[2742]: time="2025-05-15T00:45:12.227238301Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 719.551728ms" May 15 00:45:12.227287 containerd[2742]: time="2025-05-15T00:45:12.227265262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 15 00:45:12.232965 containerd[2742]: time="2025-05-15T00:45:12.232944605Z" level=info msg="CreateContainer within sandbox \"dac78809995ed82e0ec94098c5cd5f7f6b066820de2003625fc03497d72b2885\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 00:45:12.248879 containerd[2742]: time="2025-05-15T00:45:12.248840283Z" level=info msg="CreateContainer within sandbox \"dac78809995ed82e0ec94098c5cd5f7f6b066820de2003625fc03497d72b2885\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6046c96b053324a6d28e02ee2e2cb1869fc0672ef6652367afccd603112974b8\"" May 15 00:45:12.249243 containerd[2742]: time="2025-05-15T00:45:12.249214266Z" level=info msg="StartContainer for \"6046c96b053324a6d28e02ee2e2cb1869fc0672ef6652367afccd603112974b8\"" May 15 00:45:12.274155 systemd[1]: Started 
cri-containerd-6046c96b053324a6d28e02ee2e2cb1869fc0672ef6652367afccd603112974b8.scope - libcontainer container 6046c96b053324a6d28e02ee2e2cb1869fc0672ef6652367afccd603112974b8. May 15 00:45:12.282867 kubelet[4479]: E0515 00:45:12.282844 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lhgzs" podUID="d4c8c3bf-81ae-482c-8ebe-086a362f11d9" May 15 00:45:12.298541 containerd[2742]: time="2025-05-15T00:45:12.298511638Z" level=info msg="StartContainer for \"6046c96b053324a6d28e02ee2e2cb1869fc0672ef6652367afccd603112974b8\" returns successfully" May 15 00:45:12.317964 containerd[2742]: time="2025-05-15T00:45:12.317940489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 00:45:12.344729 kubelet[4479]: I0515 00:45:12.344680 4479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bdb66bdfc-vlmcd" podStartSLOduration=1.20131308 podStartE2EDuration="2.344664221s" podCreationTimestamp="2025-05-15 00:45:10 +0000 UTC" firstStartedPulling="2025-05-15 00:45:11.084547679 +0000 UTC m=+21.871013041" lastFinishedPulling="2025-05-15 00:45:12.22789882 +0000 UTC m=+23.014364182" observedRunningTime="2025-05-15 00:45:12.344571175 +0000 UTC m=+23.131036537" watchObservedRunningTime="2025-05-15 00:45:12.344664221 +0000 UTC m=+23.131129583" May 15 00:45:13.319976 kubelet[4479]: I0515 00:45:13.319951 4479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:45:13.813976 containerd[2742]: time="2025-05-15T00:45:13.813940227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:13.814375 containerd[2742]: time="2025-05-15T00:45:13.813978869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 15 00:45:13.816284 containerd[2742]: time="2025-05-15T00:45:13.816249077Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:13.817205 containerd[2742]: time="2025-05-15T00:45:13.817133327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 1.499162796s" May 15 00:45:13.817205 containerd[2742]: time="2025-05-15T00:45:13.817168289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 15 00:45:13.817606 containerd[2742]: time="2025-05-15T00:45:13.817553871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:13.818904 containerd[2742]: time="2025-05-15T00:45:13.818863545Z" level=info msg="CreateContainer within sandbox \"5f4a79ceeba17cbb92465f087d4596824a2f94e9c56ab36af3cefcfb5e6078a9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 
15 00:45:13.824596 containerd[2742]: time="2025-05-15T00:45:13.824572348Z" level=info msg="CreateContainer within sandbox \"5f4a79ceeba17cbb92465f087d4596824a2f94e9c56ab36af3cefcfb5e6078a9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e8f92d24e2a7968d2d20c2cbc417ad6a6b89f92931ced6931d9b17da7f466788\"" May 15 00:45:13.824948 containerd[2742]: time="2025-05-15T00:45:13.824927888Z" level=info msg="StartContainer for \"e8f92d24e2a7968d2d20c2cbc417ad6a6b89f92931ced6931d9b17da7f466788\"" May 15 00:45:13.859095 systemd[1]: Started cri-containerd-e8f92d24e2a7968d2d20c2cbc417ad6a6b89f92931ced6931d9b17da7f466788.scope - libcontainer container e8f92d24e2a7968d2d20c2cbc417ad6a6b89f92931ced6931d9b17da7f466788. May 15 00:45:13.879537 containerd[2742]: time="2025-05-15T00:45:13.879503293Z" level=info msg="StartContainer for \"e8f92d24e2a7968d2d20c2cbc417ad6a6b89f92931ced6931d9b17da7f466788\" returns successfully" May 15 00:45:14.229413 containerd[2742]: time="2025-05-15T00:45:14.229379023Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 00:45:14.230883 systemd[1]: cri-containerd-e8f92d24e2a7968d2d20c2cbc417ad6a6b89f92931ced6931d9b17da7f466788.scope: Deactivated successfully. May 15 00:45:14.231198 systemd[1]: cri-containerd-e8f92d24e2a7968d2d20c2cbc417ad6a6b89f92931ced6931d9b17da7f466788.scope: Consumed 875ms CPU time, 180.9M memory peak, 150.3M written to disk. May 15 00:45:14.245269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8f92d24e2a7968d2d20c2cbc417ad6a6b89f92931ced6931d9b17da7f466788-rootfs.mount: Deactivated successfully. May 15 00:45:14.256960 kubelet[4479]: I0515 00:45:14.256934 4479 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 15 00:45:14.270094 kubelet[4479]: I0515 00:45:14.269705 4479 topology_manager.go:215] "Topology Admit Handler" podUID="9172cd52-0d57-416c-ba87-816358bd2302" podNamespace="calico-system" podName="calico-kube-controllers-596b57fb45-8rtv2" May 15 00:45:14.273205 kubelet[4479]: I0515 00:45:14.270241 4479 topology_manager.go:215] "Topology Admit Handler" podUID="439669d8-aecd-4ab8-a6b4-fddb8f0d476b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nzhwf" May 15 00:45:14.273205 kubelet[4479]: I0515 00:45:14.270335 4479 topology_manager.go:215] "Topology Admit Handler" podUID="261e97f9-6a47-47b9-9d3b-8808cd9cee51" podNamespace="calico-apiserver" podName="calico-apiserver-8c899d4d9-gzqjf" May 15 00:45:14.273205 kubelet[4479]: I0515 00:45:14.270639 4479 topology_manager.go:215] "Topology Admit Handler" podUID="3c5cf155-23c9-4359-ad93-bccee4c59f03" podNamespace="calico-apiserver" podName="calico-apiserver-8c899d4d9-smfp6" May 15 00:45:14.273205 kubelet[4479]: I0515 00:45:14.270874 4479 topology_manager.go:215] "Topology Admit Handler" podUID="c07e4604-a033-42cb-84d7-c3f474ee1073" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vrcxb" May 15 00:45:14.274915 systemd[1]: Created slice kubepods-besteffort-pod261e97f9_6a47_47b9_9d3b_8808cd9cee51.slice - libcontainer container kubepods-besteffort-pod261e97f9_6a47_47b9_9d3b_8808cd9cee51.slice. May 15 00:45:14.278881 systemd[1]: Created slice kubepods-besteffort-pod9172cd52_0d57_416c_ba87_816358bd2302.slice - libcontainer container kubepods-besteffort-pod9172cd52_0d57_416c_ba87_816358bd2302.slice. 
May 15 00:45:14.283466 systemd[1]: Created slice kubepods-burstable-pod439669d8_aecd_4ab8_a6b4_fddb8f0d476b.slice - libcontainer container kubepods-burstable-pod439669d8_aecd_4ab8_a6b4_fddb8f0d476b.slice. May 15 00:45:14.287788 systemd[1]: Created slice kubepods-besteffort-pod3c5cf155_23c9_4359_ad93_bccee4c59f03.slice - libcontainer container kubepods-besteffort-pod3c5cf155_23c9_4359_ad93_bccee4c59f03.slice. May 15 00:45:14.291304 systemd[1]: Created slice kubepods-burstable-podc07e4604_a033_42cb_84d7_c3f474ee1073.slice - libcontainer container kubepods-burstable-podc07e4604_a033_42cb_84d7_c3f474ee1073.slice. May 15 00:45:14.294849 systemd[1]: Created slice kubepods-besteffort-podd4c8c3bf_81ae_482c_8ebe_086a362f11d9.slice - libcontainer container kubepods-besteffort-podd4c8c3bf_81ae_482c_8ebe_086a362f11d9.slice. May 15 00:45:14.296492 containerd[2742]: time="2025-05-15T00:45:14.296464098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lhgzs,Uid:d4c8c3bf-81ae-482c-8ebe-086a362f11d9,Namespace:calico-system,Attempt:0,}" May 15 00:45:14.344699 kubelet[4479]: I0515 00:45:14.344664 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/261e97f9-6a47-47b9-9d3b-8808cd9cee51-calico-apiserver-certs\") pod \"calico-apiserver-8c899d4d9-gzqjf\" (UID: \"261e97f9-6a47-47b9-9d3b-8808cd9cee51\") " pod="calico-apiserver/calico-apiserver-8c899d4d9-gzqjf" May 15 00:45:14.344699 kubelet[4479]: I0515 00:45:14.344704 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3c5cf155-23c9-4359-ad93-bccee4c59f03-calico-apiserver-certs\") pod \"calico-apiserver-8c899d4d9-smfp6\" (UID: \"3c5cf155-23c9-4359-ad93-bccee4c59f03\") " pod="calico-apiserver/calico-apiserver-8c899d4d9-smfp6" May 15 00:45:14.345103 kubelet[4479]: I0515 00:45:14.344757 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/439669d8-aecd-4ab8-a6b4-fddb8f0d476b-config-volume\") pod \"coredns-7db6d8ff4d-nzhwf\" (UID: \"439669d8-aecd-4ab8-a6b4-fddb8f0d476b\") " pod="kube-system/coredns-7db6d8ff4d-nzhwf" May 15 00:45:14.345103 kubelet[4479]: I0515 00:45:14.344775 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcqt7\" (UniqueName: \"kubernetes.io/projected/3c5cf155-23c9-4359-ad93-bccee4c59f03-kube-api-access-dcqt7\") pod \"calico-apiserver-8c899d4d9-smfp6\" (UID: \"3c5cf155-23c9-4359-ad93-bccee4c59f03\") " pod="calico-apiserver/calico-apiserver-8c899d4d9-smfp6" May 15 00:45:14.345103 kubelet[4479]: I0515 00:45:14.344794 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgl5\" (UniqueName: \"kubernetes.io/projected/261e97f9-6a47-47b9-9d3b-8808cd9cee51-kube-api-access-tfgl5\") pod \"calico-apiserver-8c899d4d9-gzqjf\" (UID: \"261e97f9-6a47-47b9-9d3b-8808cd9cee51\") " pod="calico-apiserver/calico-apiserver-8c899d4d9-gzqjf" May 15 00:45:14.345103 kubelet[4479]: I0515 00:45:14.344822 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm6q4\" (UniqueName: \"kubernetes.io/projected/439669d8-aecd-4ab8-a6b4-fddb8f0d476b-kube-api-access-cm6q4\") pod \"coredns-7db6d8ff4d-nzhwf\" (UID: 
\"439669d8-aecd-4ab8-a6b4-fddb8f0d476b\") " pod="kube-system/coredns-7db6d8ff4d-nzhwf" May 15 00:45:14.345103 kubelet[4479]: I0515 00:45:14.344839 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvdpm\" (UniqueName: \"kubernetes.io/projected/9172cd52-0d57-416c-ba87-816358bd2302-kube-api-access-hvdpm\") pod \"calico-kube-controllers-596b57fb45-8rtv2\" (UID: \"9172cd52-0d57-416c-ba87-816358bd2302\") " pod="calico-system/calico-kube-controllers-596b57fb45-8rtv2" May 15 00:45:14.345315 kubelet[4479]: I0515 00:45:14.344859 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9172cd52-0d57-416c-ba87-816358bd2302-tigera-ca-bundle\") pod \"calico-kube-controllers-596b57fb45-8rtv2\" (UID: \"9172cd52-0d57-416c-ba87-816358bd2302\") " pod="calico-system/calico-kube-controllers-596b57fb45-8rtv2" May 15 00:45:14.345315 kubelet[4479]: I0515 00:45:14.344874 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c07e4604-a033-42cb-84d7-c3f474ee1073-config-volume\") pod \"coredns-7db6d8ff4d-vrcxb\" (UID: \"c07e4604-a033-42cb-84d7-c3f474ee1073\") " pod="kube-system/coredns-7db6d8ff4d-vrcxb" May 15 00:45:14.345315 kubelet[4479]: I0515 00:45:14.344890 4479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbtkp\" (UniqueName: \"kubernetes.io/projected/c07e4604-a033-42cb-84d7-c3f474ee1073-kube-api-access-mbtkp\") pod \"coredns-7db6d8ff4d-vrcxb\" (UID: \"c07e4604-a033-42cb-84d7-c3f474ee1073\") " pod="kube-system/coredns-7db6d8ff4d-vrcxb" May 15 00:45:14.462220 containerd[2742]: time="2025-05-15T00:45:14.462161199Z" level=info msg="shim disconnected" id=e8f92d24e2a7968d2d20c2cbc417ad6a6b89f92931ced6931d9b17da7f466788 namespace=k8s.io May 15 00:45:14.462220 containerd[2742]: time="2025-05-15T00:45:14.462214282Z" level=warning msg="cleaning up after shim disconnected" id=e8f92d24e2a7968d2d20c2cbc417ad6a6b89f92931ced6931d9b17da7f466788 namespace=k8s.io May 15 00:45:14.462220 containerd[2742]: time="2025-05-15T00:45:14.462222162Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:45:14.514824 containerd[2742]: time="2025-05-15T00:45:14.514749986Z" level=error msg="Failed to destroy network for sandbox \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.515084 containerd[2742]: time="2025-05-15T00:45:14.515058442Z" level=error msg="encountered an error cleaning up failed sandbox \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.515136 containerd[2742]: time="2025-05-15T00:45:14.515116485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lhgzs,Uid:d4c8c3bf-81ae-482c-8ebe-086a362f11d9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.515329 kubelet[4479]: E0515 00:45:14.515293 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.515367 kubelet[4479]: E0515 00:45:14.515354 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lhgzs" May 15 00:45:14.515388 kubelet[4479]: E0515 00:45:14.515373 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lhgzs" May 15 00:45:14.515436 kubelet[4479]: E0515 00:45:14.515413 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lhgzs_calico-system(d4c8c3bf-81ae-482c-8ebe-086a362f11d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lhgzs_calico-system(d4c8c3bf-81ae-482c-8ebe-086a362f11d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lhgzs" podUID="d4c8c3bf-81ae-482c-8ebe-086a362f11d9" May 15 00:45:14.577377 containerd[2742]: time="2025-05-15T00:45:14.577340743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-gzqjf,Uid:261e97f9-6a47-47b9-9d3b-8808cd9cee51,Namespace:calico-apiserver,Attempt:0,}" May 15 00:45:14.580903 containerd[2742]: time="2025-05-15T00:45:14.580870330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596b57fb45-8rtv2,Uid:9172cd52-0d57-416c-ba87-816358bd2302,Namespace:calico-system,Attempt:0,}" May 15 00:45:14.586533 containerd[2742]: time="2025-05-15T00:45:14.586507829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhwf,Uid:439669d8-aecd-4ab8-a6b4-fddb8f0d476b,Namespace:kube-system,Attempt:0,}" May 15 00:45:14.590035 containerd[2742]: time="2025-05-15T00:45:14.590009814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-smfp6,Uid:3c5cf155-23c9-4359-ad93-bccee4c59f03,Namespace:calico-apiserver,Attempt:0,}" May 15 00:45:14.593598 containerd[2742]: time="2025-05-15T00:45:14.593571643Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-vrcxb,Uid:c07e4604-a033-42cb-84d7-c3f474ee1073,Namespace:kube-system,Attempt:0,}" May 15 00:45:14.626877 containerd[2742]: time="2025-05-15T00:45:14.626828765Z" level=error msg="Failed to destroy network for sandbox \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.627296 containerd[2742]: time="2025-05-15T00:45:14.627270109Z" level=error msg="encountered an error cleaning up failed sandbox \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.627354 containerd[2742]: time="2025-05-15T00:45:14.627338832Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596b57fb45-8rtv2,Uid:9172cd52-0d57-416c-ba87-816358bd2302,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.627421 containerd[2742]: time="2025-05-15T00:45:14.627388035Z" level=error msg="Failed to destroy network for sandbox \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.627559 kubelet[4479]: E0515 00:45:14.627527 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.627594 kubelet[4479]: E0515 00:45:14.627580 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596b57fb45-8rtv2" May 15 00:45:14.627617 kubelet[4479]: E0515 00:45:14.627598 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596b57fb45-8rtv2" May 15 00:45:14.627658 kubelet[4479]: E0515 00:45:14.627638 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-596b57fb45-8rtv2_calico-system(9172cd52-0d57-416c-ba87-816358bd2302)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-596b57fb45-8rtv2_calico-system(9172cd52-0d57-416c-ba87-816358bd2302)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596b57fb45-8rtv2" podUID="9172cd52-0d57-416c-ba87-816358bd2302" May 15 00:45:14.627756 containerd[2742]: time="2025-05-15T00:45:14.627730933Z" level=error msg="encountered an error cleaning up failed sandbox \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.627797 containerd[2742]: time="2025-05-15T00:45:14.627781736Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-gzqjf,Uid:261e97f9-6a47-47b9-9d3b-8808cd9cee51,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.627912 kubelet[4479]: E0515 00:45:14.627889 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.627937 kubelet[4479]: E0515 00:45:14.627927 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8c899d4d9-gzqjf" May 15 00:45:14.627956 kubelet[4479]: E0515 00:45:14.627944 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8c899d4d9-gzqjf" May 15 00:45:14.628003 kubelet[4479]: E0515 00:45:14.627977 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8c899d4d9-gzqjf_calico-apiserver(261e97f9-6a47-47b9-9d3b-8808cd9cee51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-8c899d4d9-gzqjf_calico-apiserver(261e97f9-6a47-47b9-9d3b-8808cd9cee51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8c899d4d9-gzqjf" podUID="261e97f9-6a47-47b9-9d3b-8808cd9cee51" May 15 00:45:14.631494 containerd[2742]: time="2025-05-15T00:45:14.631455090Z" level=error msg="Failed to destroy network for sandbox \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.631794 containerd[2742]: time="2025-05-15T00:45:14.631772347Z" level=error msg="encountered an error cleaning up failed sandbox \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.631832 containerd[2742]: time="2025-05-15T00:45:14.631817670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhwf,Uid:439669d8-aecd-4ab8-a6b4-fddb8f0d476b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.631947 kubelet[4479]: E0515 00:45:14.631926 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.631977 kubelet[4479]: E0515 00:45:14.631959 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhwf" May 15 00:45:14.632018 kubelet[4479]: E0515 00:45:14.631977 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhwf" May 15 00:45:14.632046 kubelet[4479]: E0515 00:45:14.632028 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-nzhwf_kube-system(439669d8-aecd-4ab8-a6b4-fddb8f0d476b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nzhwf_kube-system(439669d8-aecd-4ab8-a6b4-fddb8f0d476b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nzhwf" podUID="439669d8-aecd-4ab8-a6b4-fddb8f0d476b" May 15 00:45:14.635071 containerd[2742]: time="2025-05-15T00:45:14.635043841Z" level=error msg="Failed to destroy network for sandbox \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.635446 containerd[2742]: time="2025-05-15T00:45:14.635383779Z" level=error msg="encountered an error cleaning up failed sandbox \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.635446 containerd[2742]: time="2025-05-15T00:45:14.635429021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-smfp6,Uid:3c5cf155-23c9-4359-ad93-bccee4c59f03,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.635601 kubelet[4479]: E0515 00:45:14.635570 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.635639 kubelet[4479]: E0515 00:45:14.635610 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8c899d4d9-smfp6" May 15 00:45:14.635639 kubelet[4479]: E0515 00:45:14.635626 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8c899d4d9-smfp6" May 15 00:45:14.635684 kubelet[4479]: E0515 00:45:14.635655 
4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8c899d4d9-smfp6_calico-apiserver(3c5cf155-23c9-4359-ad93-bccee4c59f03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8c899d4d9-smfp6_calico-apiserver(3c5cf155-23c9-4359-ad93-bccee4c59f03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8c899d4d9-smfp6" podUID="3c5cf155-23c9-4359-ad93-bccee4c59f03" May 15 00:45:14.638835 containerd[2742]: time="2025-05-15T00:45:14.638805360Z" level=error msg="Failed to destroy network for sandbox \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.639297 containerd[2742]: time="2025-05-15T00:45:14.639246583Z" level=error msg="encountered an error cleaning up failed sandbox \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.639340 containerd[2742]: time="2025-05-15T00:45:14.639324107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vrcxb,Uid:c07e4604-a033-42cb-84d7-c3f474ee1073,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.639470 kubelet[4479]: E0515 00:45:14.639444 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:14.639500 kubelet[4479]: E0515 00:45:14.639488 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vrcxb" May 15 00:45:14.639526 kubelet[4479]: E0515 00:45:14.639505 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-vrcxb" May 15 00:45:14.639561 kubelet[4479]: E0515 00:45:14.639539 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vrcxb_kube-system(c07e4604-a033-42cb-84d7-c3f474ee1073)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vrcxb_kube-system(c07e4604-a033-42cb-84d7-c3f474ee1073)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vrcxb" podUID="c07e4604-a033-42cb-84d7-c3f474ee1073" May 15 00:45:14.929365 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3-shm.mount: Deactivated successfully. May 15 00:45:15.324641 kubelet[4479]: I0515 00:45:15.324613 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45" May 15 00:45:15.325301 kubelet[4479]: I0515 00:45:15.325283 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3" May 15 00:45:15.325664 containerd[2742]: time="2025-05-15T00:45:15.325631004Z" level=info msg="StopPodSandbox for \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\"" May 15 00:45:15.325881 containerd[2742]: time="2025-05-15T00:45:15.325649484Z" level=info msg="StopPodSandbox for \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\"" May 15 00:45:15.325881 containerd[2742]: time="2025-05-15T00:45:15.325809132Z" level=info msg="Ensure that sandbox 6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45 in task-service has been cleanup successfully" May 15 00:45:15.325881 containerd[2742]: time="2025-05-15T00:45:15.325866295Z" level=info msg="Ensure that sandbox 0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3 in task-service has been cleanup successfully" May 15 00:45:15.326049 containerd[2742]: time="2025-05-15T00:45:15.326033944Z" level=info msg="TearDown network for sandbox \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\" successfully" May 15 00:45:15.326072 containerd[2742]: time="2025-05-15T00:45:15.326049184Z" level=info msg="StopPodSandbox for \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\" returns successfully" May 15 00:45:15.326090 containerd[2742]: time="2025-05-15T00:45:15.326073145Z" level=info msg="TearDown network for sandbox \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\" successfully" May 15 00:45:15.326107 containerd[2742]: time="2025-05-15T00:45:15.326087546Z" level=info msg="StopPodSandbox for \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\" returns successfully" May 15 00:45:15.326487 containerd[2742]: time="2025-05-15T00:45:15.326468805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-gzqjf,Uid:261e97f9-6a47-47b9-9d3b-8808cd9cee51,Namespace:calico-apiserver,Attempt:1,}" May 15 00:45:15.326596 containerd[2742]: time="2025-05-15T00:45:15.326574250Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-lhgzs,Uid:d4c8c3bf-81ae-482c-8ebe-086a362f11d9,Namespace:calico-system,Attempt:1,}" May 15 00:45:15.327350 containerd[2742]: time="2025-05-15T00:45:15.327335368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 00:45:15.327455 systemd[1]: run-netns-cni\x2d2d87ebf2\x2d121c\x2d46be\x2da438\x2d93562e183659.mount: Deactivated successfully. May 15 00:45:15.327506 kubelet[4479]: I0515 00:45:15.327494 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3" May 15 00:45:15.327531 systemd[1]: run-netns-cni\x2db977a4e4\x2de963\x2d36c7\x2d0a88\x2d74e6f2803e5b.mount: Deactivated successfully. May 15 00:45:15.327865 containerd[2742]: time="2025-05-15T00:45:15.327846714Z" level=info msg="StopPodSandbox for \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\"" May 15 00:45:15.328004 containerd[2742]: time="2025-05-15T00:45:15.327984920Z" level=info msg="Ensure that sandbox f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3 in task-service has been cleanup successfully" May 15 00:45:15.328149 containerd[2742]: time="2025-05-15T00:45:15.328136208Z" level=info msg="TearDown network for sandbox \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\" successfully" May 15 00:45:15.328169 containerd[2742]: time="2025-05-15T00:45:15.328149609Z" level=info msg="StopPodSandbox for \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\" returns successfully" May 15 00:45:15.328228 kubelet[4479]: I0515 00:45:15.328213 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da" May 15 00:45:15.328470 containerd[2742]: time="2025-05-15T00:45:15.328451184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vrcxb,Uid:c07e4604-a033-42cb-84d7-c3f474ee1073,Namespace:kube-system,Attempt:1,}" May 15 00:45:15.328612 containerd[2742]: time="2025-05-15T00:45:15.328592471Z" level=info msg="StopPodSandbox for \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\"" May 15 00:45:15.328750 containerd[2742]: time="2025-05-15T00:45:15.328737678Z" level=info msg="Ensure that sandbox 0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da in task-service has been cleanup successfully" May 15 00:45:15.328895 containerd[2742]: time="2025-05-15T00:45:15.328881285Z" level=info msg="TearDown network for sandbox \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\" successfully" May 15 00:45:15.328916 containerd[2742]: time="2025-05-15T00:45:15.328894886Z" level=info msg="StopPodSandbox for \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\" returns successfully" May 15 00:45:15.329027 kubelet[4479]: I0515 00:45:15.329014 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141" May 15 00:45:15.329225 containerd[2742]: time="2025-05-15T00:45:15.329208221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-smfp6,Uid:3c5cf155-23c9-4359-ad93-bccee4c59f03,Namespace:calico-apiserver,Attempt:1,}" May 15 00:45:15.329388 containerd[2742]: time="2025-05-15T00:45:15.329372709Z" level=info msg="StopPodSandbox for \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\"" May 15 00:45:15.329505 containerd[2742]: 
time="2025-05-15T00:45:15.329493035Z" level=info msg="Ensure that sandbox 8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141 in task-service has been cleanup successfully" May 15 00:45:15.329502 systemd[1]: run-netns-cni\x2dca8bf18e\x2dd4f8\x2d91e9\x2d5bb8\x2d9b1b06246a25.mount: Deactivated successfully. May 15 00:45:15.329635 kubelet[4479]: I0515 00:45:15.329618 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af" May 15 00:45:15.329663 containerd[2742]: time="2025-05-15T00:45:15.329637563Z" level=info msg="TearDown network for sandbox \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\" successfully" May 15 00:45:15.329663 containerd[2742]: time="2025-05-15T00:45:15.329651283Z" level=info msg="StopPodSandbox for \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\" returns successfully" May 15 00:45:15.330003 containerd[2742]: time="2025-05-15T00:45:15.329980700Z" level=info msg="StopPodSandbox for \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\"" May 15 00:45:15.330022 containerd[2742]: time="2025-05-15T00:45:15.330001581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhwf,Uid:439669d8-aecd-4ab8-a6b4-fddb8f0d476b,Namespace:kube-system,Attempt:1,}" May 15 00:45:15.330125 containerd[2742]: time="2025-05-15T00:45:15.330108986Z" level=info msg="Ensure that sandbox eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af in task-service has been cleanup successfully" May 15 00:45:15.330256 containerd[2742]: time="2025-05-15T00:45:15.330244513Z" level=info msg="TearDown network for sandbox \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\" successfully" May 15 00:45:15.330282 containerd[2742]: time="2025-05-15T00:45:15.330256793Z" level=info msg="StopPodSandbox for \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\" returns successfully" May 15 00:45:15.330604 containerd[2742]: time="2025-05-15T00:45:15.330584810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596b57fb45-8rtv2,Uid:9172cd52-0d57-416c-ba87-816358bd2302,Namespace:calico-system,Attempt:1,}" May 15 00:45:15.331263 systemd[1]: run-netns-cni\x2d29edbb2e\x2d3359\x2dd1f5\x2dc39e\x2df53453a38abc.mount: Deactivated successfully. May 15 00:45:15.331332 systemd[1]: run-netns-cni\x2df817ee44\x2d575b\x2d0c15\x2dc210\x2d880898c03198.mount: Deactivated successfully. May 15 00:45:15.331377 systemd[1]: run-netns-cni\x2d71df1fdf\x2dec1d\x2dc75f\x2d6119\x2dc7fd92464f0a.mount: Deactivated successfully. 
May 15 00:45:15.375610 containerd[2742]: time="2025-05-15T00:45:15.375562084Z" level=error msg="Failed to destroy network for sandbox \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.375981 containerd[2742]: time="2025-05-15T00:45:15.375950463Z" level=error msg="encountered an error cleaning up failed sandbox \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.376049 containerd[2742]: time="2025-05-15T00:45:15.376031227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-gzqjf,Uid:261e97f9-6a47-47b9-9d3b-8808cd9cee51,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.376263 kubelet[4479]: E0515 00:45:15.376232 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.376500 kubelet[4479]: E0515 00:45:15.376285 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8c899d4d9-gzqjf" May 15 00:45:15.376500 kubelet[4479]: E0515 00:45:15.376305 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8c899d4d9-gzqjf" May 15 00:45:15.376500 kubelet[4479]: E0515 00:45:15.376341 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8c899d4d9-gzqjf_calico-apiserver(261e97f9-6a47-47b9-9d3b-8808cd9cee51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8c899d4d9-gzqjf_calico-apiserver(261e97f9-6a47-47b9-9d3b-8808cd9cee51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8c899d4d9-gzqjf" podUID="261e97f9-6a47-47b9-9d3b-8808cd9cee51" May 15 00:45:15.376588 containerd[2742]: time="2025-05-15T00:45:15.376555214Z" level=error msg="Failed to destroy network for sandbox \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.376897 containerd[2742]: time="2025-05-15T00:45:15.376874589Z" level=error msg="encountered an error cleaning up failed sandbox \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.376940 containerd[2742]: time="2025-05-15T00:45:15.376924472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lhgzs,Uid:d4c8c3bf-81ae-482c-8ebe-086a362f11d9,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.377056 kubelet[4479]: E0515 00:45:15.377031 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.377088 kubelet[4479]: E0515 00:45:15.377072 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lhgzs" May 15 00:45:15.377111 kubelet[4479]: E0515 00:45:15.377089 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lhgzs" May 15 00:45:15.377144 kubelet[4479]: E0515 00:45:15.377121 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lhgzs_calico-system(d4c8c3bf-81ae-482c-8ebe-086a362f11d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lhgzs_calico-system(d4c8c3bf-81ae-482c-8ebe-086a362f11d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lhgzs" podUID="d4c8c3bf-81ae-482c-8ebe-086a362f11d9" May 15 00:45:15.378489 containerd[2742]: time="2025-05-15T00:45:15.378459508Z" level=error msg="Failed to destroy network for sandbox \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.378773 containerd[2742]: time="2025-05-15T00:45:15.378751123Z" level=error msg="encountered an error cleaning up failed sandbox \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.378813 containerd[2742]: time="2025-05-15T00:45:15.378798925Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vrcxb,Uid:c07e4604-a033-42cb-84d7-c3f474ee1073,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.379365 kubelet[4479]: E0515 00:45:15.379328 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.379425 kubelet[4479]: E0515 00:45:15.379377 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vrcxb" May 15 00:45:15.380361 containerd[2742]: time="2025-05-15T00:45:15.379704050Z" level=error msg="Failed to destroy network for sandbox \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.380361 containerd[2742]: time="2025-05-15T00:45:15.380163153Z" level=error msg="encountered an error cleaning up failed sandbox \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.380361 containerd[2742]: time="2025-05-15T00:45:15.380209835Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-smfp6,Uid:3c5cf155-23c9-4359-ad93-bccee4c59f03,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.380505 kubelet[4479]: E0515 00:45:15.379395 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vrcxb" May 15 00:45:15.380505 kubelet[4479]: E0515 00:45:15.379862 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vrcxb_kube-system(c07e4604-a033-42cb-84d7-c3f474ee1073)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vrcxb_kube-system(c07e4604-a033-42cb-84d7-c3f474ee1073)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vrcxb" podUID="c07e4604-a033-42cb-84d7-c3f474ee1073" May 15 00:45:15.380505 kubelet[4479]: E0515 00:45:15.380390 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.380641 containerd[2742]: time="2025-05-15T00:45:15.380607135Z" level=error msg="Failed to destroy network for sandbox \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.380939 containerd[2742]: time="2025-05-15T00:45:15.380914910Z" level=error msg="encountered an error cleaning up failed sandbox \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.381000 containerd[2742]: time="2025-05-15T00:45:15.380965753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596b57fb45-8rtv2,Uid:9172cd52-0d57-416c-ba87-816358bd2302,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" May 15 00:45:15.381110 kubelet[4479]: E0515 00:45:15.381014 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8c899d4d9-smfp6" May 15 00:45:15.381110 kubelet[4479]: E0515 00:45:15.381042 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8c899d4d9-smfp6" May 15 00:45:15.381110 kubelet[4479]: E0515 00:45:15.381074 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8c899d4d9-smfp6_calico-apiserver(3c5cf155-23c9-4359-ad93-bccee4c59f03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8c899d4d9-smfp6_calico-apiserver(3c5cf155-23c9-4359-ad93-bccee4c59f03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8c899d4d9-smfp6" podUID="3c5cf155-23c9-4359-ad93-bccee4c59f03" May 15 00:45:15.381192 kubelet[4479]: E0515 00:45:15.381092 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.381192 kubelet[4479]: E0515 00:45:15.381125 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596b57fb45-8rtv2" May 15 00:45:15.381192 kubelet[4479]: E0515 00:45:15.381141 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596b57fb45-8rtv2" May 15 00:45:15.381250 kubelet[4479]: E0515 00:45:15.381171 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-596b57fb45-8rtv2_calico-system(9172cd52-0d57-416c-ba87-816358bd2302)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-596b57fb45-8rtv2_calico-system(9172cd52-0d57-416c-ba87-816358bd2302)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596b57fb45-8rtv2" podUID="9172cd52-0d57-416c-ba87-816358bd2302" May 15 00:45:15.381687 containerd[2742]: time="2025-05-15T00:45:15.381654627Z" level=error msg="Failed to destroy network for sandbox \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.381951 containerd[2742]: time="2025-05-15T00:45:15.381929481Z" level=error msg="encountered an error cleaning up failed sandbox \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.381991 containerd[2742]: time="2025-05-15T00:45:15.381973923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhwf,Uid:439669d8-aecd-4ab8-a6b4-fddb8f0d476b,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.382127 kubelet[4479]: E0515 00:45:15.382104 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:15.382156 kubelet[4479]: E0515 00:45:15.382144 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhwf" May 15 00:45:15.382184 kubelet[4479]: E0515 00:45:15.382161 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhwf" May 15 00:45:15.382214 kubelet[4479]: E0515 
00:45:15.382191 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nzhwf_kube-system(439669d8-aecd-4ab8-a6b4-fddb8f0d476b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nzhwf_kube-system(439669d8-aecd-4ab8-a6b4-fddb8f0d476b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nzhwf" podUID="439669d8-aecd-4ab8-a6b4-fddb8f0d476b" May 15 00:45:15.925397 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d-shm.mount: Deactivated successfully. May 15 00:45:16.332831 kubelet[4479]: I0515 00:45:16.332803 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d" May 15 00:45:16.333262 containerd[2742]: time="2025-05-15T00:45:16.333235671Z" level=info msg="StopPodSandbox for \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\"" May 15 00:45:16.333501 containerd[2742]: time="2025-05-15T00:45:16.333394879Z" level=info msg="Ensure that sandbox 844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d in task-service has been cleanup successfully" May 15 00:45:16.333593 containerd[2742]: time="2025-05-15T00:45:16.333577607Z" level=info msg="TearDown network for sandbox \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\" successfully" May 15 00:45:16.333616 containerd[2742]: time="2025-05-15T00:45:16.333591888Z" level=info msg="StopPodSandbox for \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\" returns successfully" May 15 00:45:16.333646 kubelet[4479]: I0515 00:45:16.333631 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059" May 15 00:45:16.333791 containerd[2742]: time="2025-05-15T00:45:16.333772936Z" level=info msg="StopPodSandbox for \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\"" May 15 00:45:16.333852 containerd[2742]: time="2025-05-15T00:45:16.333842500Z" level=info msg="TearDown network for sandbox \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\" successfully" May 15 00:45:16.333875 containerd[2742]: time="2025-05-15T00:45:16.333852580Z" level=info msg="StopPodSandbox for \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\" returns successfully" May 15 00:45:16.334014 containerd[2742]: time="2025-05-15T00:45:16.333996627Z" level=info msg="StopPodSandbox for \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\"" May 15 00:45:16.334150 containerd[2742]: time="2025-05-15T00:45:16.334137753Z" level=info msg="Ensure that sandbox 72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059 in task-service has been cleanup successfully" May 15 00:45:16.334178 containerd[2742]: time="2025-05-15T00:45:16.334161394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-gzqjf,Uid:261e97f9-6a47-47b9-9d3b-8808cd9cee51,Namespace:calico-apiserver,Attempt:2,}" May 15 00:45:16.334310 containerd[2742]: time="2025-05-15T00:45:16.334293121Z" level=info 
msg="TearDown network for sandbox \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\" successfully" May 15 00:45:16.334330 containerd[2742]: time="2025-05-15T00:45:16.334310121Z" level=info msg="StopPodSandbox for \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\" returns successfully" May 15 00:45:16.334515 containerd[2742]: time="2025-05-15T00:45:16.334500850Z" level=info msg="StopPodSandbox for \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\"" May 15 00:45:16.334572 kubelet[4479]: I0515 00:45:16.334559 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79" May 15 00:45:16.334596 containerd[2742]: time="2025-05-15T00:45:16.334567093Z" level=info msg="TearDown network for sandbox \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\" successfully" May 15 00:45:16.334596 containerd[2742]: time="2025-05-15T00:45:16.334576454Z" level=info msg="StopPodSandbox for \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\" returns successfully" May 15 00:45:16.334890 containerd[2742]: time="2025-05-15T00:45:16.334872028Z" level=info msg="StopPodSandbox for \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\"" May 15 00:45:16.334929 containerd[2742]: time="2025-05-15T00:45:16.334917030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lhgzs,Uid:d4c8c3bf-81ae-482c-8ebe-086a362f11d9,Namespace:calico-system,Attempt:2,}" May 15 00:45:16.335025 containerd[2742]: time="2025-05-15T00:45:16.335012074Z" level=info msg="Ensure that sandbox d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79 in task-service has been cleanup successfully" May 15 00:45:16.335092 systemd[1]: run-netns-cni\x2dbb00e29f\x2d823d\x2dbca8\x2d5e88\x2d73e69856b30c.mount: Deactivated successfully. 
May 15 00:45:16.335252 containerd[2742]: time="2025-05-15T00:45:16.335176442Z" level=info msg="TearDown network for sandbox \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\" successfully" May 15 00:45:16.335252 containerd[2742]: time="2025-05-15T00:45:16.335192082Z" level=info msg="StopPodSandbox for \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\" returns successfully" May 15 00:45:16.335437 containerd[2742]: time="2025-05-15T00:45:16.335416293Z" level=info msg="StopPodSandbox for \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\"" May 15 00:45:16.335512 containerd[2742]: time="2025-05-15T00:45:16.335501297Z" level=info msg="TearDown network for sandbox \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\" successfully" May 15 00:45:16.335539 containerd[2742]: time="2025-05-15T00:45:16.335511977Z" level=info msg="StopPodSandbox for \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\" returns successfully" May 15 00:45:16.335757 kubelet[4479]: I0515 00:45:16.335562 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5" May 15 00:45:16.335972 containerd[2742]: time="2025-05-15T00:45:16.335952678Z" level=info msg="StopPodSandbox for \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\"" May 15 00:45:16.336024 containerd[2742]: time="2025-05-15T00:45:16.336001400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596b57fb45-8rtv2,Uid:9172cd52-0d57-416c-ba87-816358bd2302,Namespace:calico-system,Attempt:2,}" May 15 00:45:16.336097 containerd[2742]: time="2025-05-15T00:45:16.336084484Z" level=info msg="Ensure that sandbox e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5 in task-service has been cleanup successfully" May 15 00:45:16.336232 containerd[2742]: time="2025-05-15T00:45:16.336219770Z" level=info msg="TearDown network for sandbox \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\" successfully" May 15 00:45:16.336271 containerd[2742]: time="2025-05-15T00:45:16.336232171Z" level=info msg="StopPodSandbox for \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\" returns successfully" May 15 00:45:16.336435 containerd[2742]: time="2025-05-15T00:45:16.336421420Z" level=info msg="StopPodSandbox for \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\"" May 15 00:45:16.336502 containerd[2742]: time="2025-05-15T00:45:16.336482623Z" level=info msg="TearDown network for sandbox \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\" successfully" May 15 00:45:16.336502 containerd[2742]: time="2025-05-15T00:45:16.336493143Z" level=info msg="StopPodSandbox for \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\" returns successfully" May 15 00:45:16.336740 kubelet[4479]: I0515 00:45:16.336599 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538" May 15 00:45:16.336808 containerd[2742]: time="2025-05-15T00:45:16.336789477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vrcxb,Uid:c07e4604-a033-42cb-84d7-c3f474ee1073,Namespace:kube-system,Attempt:2,}" May 15 00:45:16.336933 systemd[1]: run-netns-cni\x2d9d0683cb\x2db053\x2d4e78\x2d0de3\x2dcb91b545bd6a.mount: Deactivated successfully. 
May 15 00:45:16.337003 containerd[2742]: time="2025-05-15T00:45:16.336967005Z" level=info msg="StopPodSandbox for \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\"" May 15 00:45:16.337010 systemd[1]: run-netns-cni\x2d55470ca8\x2d017c\x2d10da\x2d1ef5\x2df27531acb0f2.mount: Deactivated successfully. May 15 00:45:16.337324 containerd[2742]: time="2025-05-15T00:45:16.337110852Z" level=info msg="Ensure that sandbox 60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538 in task-service has been cleanup successfully" May 15 00:45:16.337324 containerd[2742]: time="2025-05-15T00:45:16.337318382Z" level=info msg="TearDown network for sandbox \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\" successfully" May 15 00:45:16.337391 containerd[2742]: time="2025-05-15T00:45:16.337332742Z" level=info msg="StopPodSandbox for \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\" returns successfully" May 15 00:45:16.337516 containerd[2742]: time="2025-05-15T00:45:16.337499030Z" level=info msg="StopPodSandbox for \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\"" May 15 00:45:16.337589 containerd[2742]: time="2025-05-15T00:45:16.337564673Z" level=info msg="TearDown network for sandbox \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\" successfully" May 15 00:45:16.337589 containerd[2742]: time="2025-05-15T00:45:16.337575353Z" level=info msg="StopPodSandbox for \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\" returns successfully" May 15 00:45:16.337816 kubelet[4479]: I0515 00:45:16.337800 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c" May 15 00:45:16.337890 containerd[2742]: time="2025-05-15T00:45:16.337870407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-smfp6,Uid:3c5cf155-23c9-4359-ad93-bccee4c59f03,Namespace:calico-apiserver,Attempt:2,}" May 15 00:45:16.338163 containerd[2742]: time="2025-05-15T00:45:16.338147100Z" level=info msg="StopPodSandbox for \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\"" May 15 00:45:16.338280 containerd[2742]: time="2025-05-15T00:45:16.338266906Z" level=info msg="Ensure that sandbox a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c in task-service has been cleanup successfully" May 15 00:45:16.338424 containerd[2742]: time="2025-05-15T00:45:16.338411152Z" level=info msg="TearDown network for sandbox \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\" successfully" May 15 00:45:16.338444 containerd[2742]: time="2025-05-15T00:45:16.338423753Z" level=info msg="StopPodSandbox for \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\" returns successfully" May 15 00:45:16.338657 containerd[2742]: time="2025-05-15T00:45:16.338642003Z" level=info msg="StopPodSandbox for \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\"" May 15 00:45:16.338721 containerd[2742]: time="2025-05-15T00:45:16.338710766Z" level=info msg="TearDown network for sandbox \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\" successfully" May 15 00:45:16.338745 containerd[2742]: time="2025-05-15T00:45:16.338720327Z" level=info msg="StopPodSandbox for \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\" returns successfully" May 15 00:45:16.338935 systemd[1]: run-netns-cni\x2d1ed09f43\x2dc90b\x2d1424\x2da0c9\x2d861c68356bb6.mount: 
Deactivated successfully. May 15 00:45:16.339008 systemd[1]: run-netns-cni\x2dcb7eeb30\x2d3123\x2d36ff\x2d0337\x2da86c46d1f29f.mount: Deactivated successfully. May 15 00:45:16.339090 containerd[2742]: time="2025-05-15T00:45:16.339070703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhwf,Uid:439669d8-aecd-4ab8-a6b4-fddb8f0d476b,Namespace:kube-system,Attempt:2,}" May 15 00:45:16.342637 systemd[1]: run-netns-cni\x2d82dba4a8\x2d0362\x2d13da\x2db2f7\x2d39830e843db7.mount: Deactivated successfully. May 15 00:45:16.383951 containerd[2742]: time="2025-05-15T00:45:16.383898351Z" level=error msg="Failed to destroy network for sandbox \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.384289 containerd[2742]: time="2025-05-15T00:45:16.384264528Z" level=error msg="encountered an error cleaning up failed sandbox \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.384340 containerd[2742]: time="2025-05-15T00:45:16.384324211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-gzqjf,Uid:261e97f9-6a47-47b9-9d3b-8808cd9cee51,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.384415 containerd[2742]: time="2025-05-15T00:45:16.384383054Z" level=error msg="Failed to destroy network for sandbox \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.384570 kubelet[4479]: E0515 00:45:16.384537 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.384804 kubelet[4479]: E0515 00:45:16.384599 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8c899d4d9-gzqjf" May 15 00:45:16.384804 kubelet[4479]: E0515 00:45:16.384619 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8c899d4d9-gzqjf" May 15 00:45:16.384804 kubelet[4479]: E0515 00:45:16.384660 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8c899d4d9-gzqjf_calico-apiserver(261e97f9-6a47-47b9-9d3b-8808cd9cee51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8c899d4d9-gzqjf_calico-apiserver(261e97f9-6a47-47b9-9d3b-8808cd9cee51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8c899d4d9-gzqjf" podUID="261e97f9-6a47-47b9-9d3b-8808cd9cee51" May 15 00:45:16.384890 containerd[2742]: time="2025-05-15T00:45:16.384728390Z" level=error msg="encountered an error cleaning up failed sandbox \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.384890 containerd[2742]: time="2025-05-15T00:45:16.384777792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lhgzs,Uid:d4c8c3bf-81ae-482c-8ebe-086a362f11d9,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.384976 kubelet[4479]: E0515 00:45:16.384895 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.384976 kubelet[4479]: E0515 00:45:16.384937 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lhgzs" May 15 00:45:16.384976 kubelet[4479]: E0515 00:45:16.384953 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lhgzs" May 15 00:45:16.385046 kubelet[4479]: E0515 00:45:16.384991 4479 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lhgzs_calico-system(d4c8c3bf-81ae-482c-8ebe-086a362f11d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lhgzs_calico-system(d4c8c3bf-81ae-482c-8ebe-086a362f11d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lhgzs" podUID="d4c8c3bf-81ae-482c-8ebe-086a362f11d9" May 15 00:45:16.385658 containerd[2742]: time="2025-05-15T00:45:16.385627312Z" level=error msg="Failed to destroy network for sandbox \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.386077 containerd[2742]: time="2025-05-15T00:45:16.386053051Z" level=error msg="encountered an error cleaning up failed sandbox \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.386118 containerd[2742]: time="2025-05-15T00:45:16.386102414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596b57fb45-8rtv2,Uid:9172cd52-0d57-416c-ba87-816358bd2302,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.386238 kubelet[4479]: E0515 00:45:16.386222 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.386270 kubelet[4479]: E0515 00:45:16.386247 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596b57fb45-8rtv2" May 15 00:45:16.386270 kubelet[4479]: E0515 00:45:16.386262 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-596b57fb45-8rtv2" May 15 00:45:16.386309 kubelet[4479]: E0515 00:45:16.386288 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-596b57fb45-8rtv2_calico-system(9172cd52-0d57-416c-ba87-816358bd2302)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-596b57fb45-8rtv2_calico-system(9172cd52-0d57-416c-ba87-816358bd2302)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596b57fb45-8rtv2" podUID="9172cd52-0d57-416c-ba87-816358bd2302" May 15 00:45:16.386863 containerd[2742]: time="2025-05-15T00:45:16.386832848Z" level=error msg="Failed to destroy network for sandbox \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.387196 containerd[2742]: time="2025-05-15T00:45:16.387173584Z" level=error msg="encountered an error cleaning up failed sandbox \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.387240 containerd[2742]: time="2025-05-15T00:45:16.387215466Z" level=error msg="Failed to destroy network for sandbox \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.387299 containerd[2742]: time="2025-05-15T00:45:16.387219866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhwf,Uid:439669d8-aecd-4ab8-a6b4-fddb8f0d476b,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.387404 kubelet[4479]: E0515 00:45:16.387389 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.387426 kubelet[4479]: E0515 00:45:16.387411 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhwf" May 15 00:45:16.387455 kubelet[4479]: E0515 00:45:16.387424 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhwf" May 15 00:45:16.387475 kubelet[4479]: E0515 00:45:16.387451 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nzhwf_kube-system(439669d8-aecd-4ab8-a6b4-fddb8f0d476b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nzhwf_kube-system(439669d8-aecd-4ab8-a6b4-fddb8f0d476b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nzhwf" podUID="439669d8-aecd-4ab8-a6b4-fddb8f0d476b" May 15 00:45:16.387560 containerd[2742]: time="2025-05-15T00:45:16.387538321Z" level=error msg="encountered an error cleaning up failed sandbox \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.387601 containerd[2742]: time="2025-05-15T00:45:16.387587443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vrcxb,Uid:c07e4604-a033-42cb-84d7-c3f474ee1073,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.387728 kubelet[4479]: E0515 00:45:16.387704 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.387757 kubelet[4479]: E0515 00:45:16.387744 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vrcxb" May 15 00:45:16.387780 kubelet[4479]: E0515 00:45:16.387765 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vrcxb" May 15 00:45:16.387814 kubelet[4479]: E0515 00:45:16.387797 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vrcxb_kube-system(c07e4604-a033-42cb-84d7-c3f474ee1073)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vrcxb_kube-system(c07e4604-a033-42cb-84d7-c3f474ee1073)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vrcxb" podUID="c07e4604-a033-42cb-84d7-c3f474ee1073" May 15 00:45:16.388449 containerd[2742]: time="2025-05-15T00:45:16.388425162Z" level=error msg="Failed to destroy network for sandbox \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.388718 containerd[2742]: time="2025-05-15T00:45:16.388699215Z" level=error msg="encountered an error cleaning up failed sandbox \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.388755 containerd[2742]: time="2025-05-15T00:45:16.388740657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-smfp6,Uid:3c5cf155-23c9-4359-ad93-bccee4c59f03,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.388874 kubelet[4479]: E0515 00:45:16.388853 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:16.388900 kubelet[4479]: E0515 00:45:16.388889 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8c899d4d9-smfp6" May 15 00:45:16.388928 kubelet[4479]: E0515 00:45:16.388905 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8c899d4d9-smfp6" May 15 00:45:16.388952 kubelet[4479]: E0515 00:45:16.388933 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8c899d4d9-smfp6_calico-apiserver(3c5cf155-23c9-4359-ad93-bccee4c59f03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8c899d4d9-smfp6_calico-apiserver(3c5cf155-23c9-4359-ad93-bccee4c59f03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8c899d4d9-smfp6" podUID="3c5cf155-23c9-4359-ad93-bccee4c59f03" May 15 00:45:16.925933 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b-shm.mount: Deactivated successfully. May 15 00:45:16.926019 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8-shm.mount: Deactivated successfully. May 15 00:45:17.339993 kubelet[4479]: I0515 00:45:17.339964 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288" May 15 00:45:17.340447 containerd[2742]: time="2025-05-15T00:45:17.340414794Z" level=info msg="StopPodSandbox for \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\"" May 15 00:45:17.340667 containerd[2742]: time="2025-05-15T00:45:17.340578722Z" level=info msg="Ensure that sandbox fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288 in task-service has been cleanup successfully" May 15 00:45:17.340787 containerd[2742]: time="2025-05-15T00:45:17.340772410Z" level=info msg="TearDown network for sandbox \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\" successfully" May 15 00:45:17.340824 containerd[2742]: time="2025-05-15T00:45:17.340787651Z" level=info msg="StopPodSandbox for \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\" returns successfully" May 15 00:45:17.340969 containerd[2742]: time="2025-05-15T00:45:17.340956378Z" level=info msg="StopPodSandbox for \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\"" May 15 00:45:17.341037 containerd[2742]: time="2025-05-15T00:45:17.341026421Z" level=info msg="TearDown network for sandbox \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\" successfully" May 15 00:45:17.341059 containerd[2742]: time="2025-05-15T00:45:17.341036462Z" level=info msg="StopPodSandbox for \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\" returns successfully" May 15 00:45:17.341107 kubelet[4479]: I0515 00:45:17.341093 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc" May 15 00:45:17.341304 containerd[2742]: time="2025-05-15T00:45:17.341282792Z" level=info 
msg="StopPodSandbox for \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\"" May 15 00:45:17.341380 containerd[2742]: time="2025-05-15T00:45:17.341370316Z" level=info msg="TearDown network for sandbox \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\" successfully" May 15 00:45:17.341399 containerd[2742]: time="2025-05-15T00:45:17.341381077Z" level=info msg="StopPodSandbox for \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\" returns successfully" May 15 00:45:17.341469 containerd[2742]: time="2025-05-15T00:45:17.341448240Z" level=info msg="StopPodSandbox for \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\"" May 15 00:45:17.341604 containerd[2742]: time="2025-05-15T00:45:17.341590406Z" level=info msg="Ensure that sandbox 1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc in task-service has been cleanup successfully" May 15 00:45:17.341729 containerd[2742]: time="2025-05-15T00:45:17.341709611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vrcxb,Uid:c07e4604-a033-42cb-84d7-c3f474ee1073,Namespace:kube-system,Attempt:3,}" May 15 00:45:17.341757 containerd[2742]: time="2025-05-15T00:45:17.341740252Z" level=info msg="TearDown network for sandbox \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\" successfully" May 15 00:45:17.341776 containerd[2742]: time="2025-05-15T00:45:17.341754773Z" level=info msg="StopPodSandbox for \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\" returns successfully" May 15 00:45:17.341945 containerd[2742]: time="2025-05-15T00:45:17.341925140Z" level=info msg="StopPodSandbox for \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\"" May 15 00:45:17.342021 containerd[2742]: time="2025-05-15T00:45:17.342009384Z" level=info msg="TearDown network for sandbox \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\" successfully" May 15 00:45:17.342042 containerd[2742]: time="2025-05-15T00:45:17.342021745Z" level=info msg="StopPodSandbox for \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\" returns successfully" May 15 00:45:17.342244 containerd[2742]: time="2025-05-15T00:45:17.342227874Z" level=info msg="StopPodSandbox for \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\"" May 15 00:45:17.342317 containerd[2742]: time="2025-05-15T00:45:17.342306677Z" level=info msg="TearDown network for sandbox \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\" successfully" May 15 00:45:17.342337 containerd[2742]: time="2025-05-15T00:45:17.342317637Z" level=info msg="StopPodSandbox for \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\" returns successfully" May 15 00:45:17.342362 kubelet[4479]: I0515 00:45:17.342348 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038" May 15 00:45:17.342369 systemd[1]: run-netns-cni\x2dece30198\x2de845\x2d45ed\x2db9b1\x2d326ccd1be8e4.mount: Deactivated successfully. 
May 15 00:45:17.342686 containerd[2742]: time="2025-05-15T00:45:17.342665093Z" level=info msg="StopPodSandbox for \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\"" May 15 00:45:17.342715 containerd[2742]: time="2025-05-15T00:45:17.342696694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-smfp6,Uid:3c5cf155-23c9-4359-ad93-bccee4c59f03,Namespace:calico-apiserver,Attempt:3,}" May 15 00:45:17.342817 containerd[2742]: time="2025-05-15T00:45:17.342803539Z" level=info msg="Ensure that sandbox e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038 in task-service has been cleanup successfully" May 15 00:45:17.343058 containerd[2742]: time="2025-05-15T00:45:17.343044069Z" level=info msg="TearDown network for sandbox \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\" successfully" May 15 00:45:17.343078 containerd[2742]: time="2025-05-15T00:45:17.343059230Z" level=info msg="StopPodSandbox for \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\" returns successfully" May 15 00:45:17.343287 containerd[2742]: time="2025-05-15T00:45:17.343267119Z" level=info msg="StopPodSandbox for \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\"" May 15 00:45:17.343358 containerd[2742]: time="2025-05-15T00:45:17.343347202Z" level=info msg="TearDown network for sandbox \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\" successfully" May 15 00:45:17.343381 containerd[2742]: time="2025-05-15T00:45:17.343358603Z" level=info msg="StopPodSandbox for \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\" returns successfully" May 15 00:45:17.343509 kubelet[4479]: I0515 00:45:17.343498 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035" May 15 00:45:17.343560 containerd[2742]: time="2025-05-15T00:45:17.343544691Z" level=info msg="StopPodSandbox for \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\"" May 15 00:45:17.343629 containerd[2742]: time="2025-05-15T00:45:17.343618374Z" level=info msg="TearDown network for sandbox \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\" successfully" May 15 00:45:17.343651 containerd[2742]: time="2025-05-15T00:45:17.343629055Z" level=info msg="StopPodSandbox for \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\" returns successfully" May 15 00:45:17.343862 containerd[2742]: time="2025-05-15T00:45:17.343844504Z" level=info msg="StopPodSandbox for \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\"" May 15 00:45:17.343986 containerd[2742]: time="2025-05-15T00:45:17.343971910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhwf,Uid:439669d8-aecd-4ab8-a6b4-fddb8f0d476b,Namespace:kube-system,Attempt:3,}" May 15 00:45:17.344065 containerd[2742]: time="2025-05-15T00:45:17.343982190Z" level=info msg="Ensure that sandbox 50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035 in task-service has been cleanup successfully" May 15 00:45:17.344221 containerd[2742]: time="2025-05-15T00:45:17.344208120Z" level=info msg="TearDown network for sandbox \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\" successfully" May 15 00:45:17.344240 containerd[2742]: time="2025-05-15T00:45:17.344222201Z" level=info msg="StopPodSandbox for \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\" returns successfully" May 15 
00:45:17.344244 systemd[1]: run-netns-cni\x2d75e4198b\x2d9b08\x2da738\x2d8aad\x2d8ba1d9931d5f.mount: Deactivated successfully. May 15 00:45:17.344315 systemd[1]: run-netns-cni\x2d12349d4a\x2dbca2\x2d14ae\x2df42a\x2db08284f9d692.mount: Deactivated successfully. May 15 00:45:17.344413 containerd[2742]: time="2025-05-15T00:45:17.344396848Z" level=info msg="StopPodSandbox for \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\"" May 15 00:45:17.344482 containerd[2742]: time="2025-05-15T00:45:17.344471772Z" level=info msg="TearDown network for sandbox \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\" successfully" May 15 00:45:17.344502 containerd[2742]: time="2025-05-15T00:45:17.344482372Z" level=info msg="StopPodSandbox for \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\" returns successfully" May 15 00:45:17.344695 containerd[2742]: time="2025-05-15T00:45:17.344684061Z" level=info msg="StopPodSandbox for \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\"" May 15 00:45:17.344749 containerd[2742]: time="2025-05-15T00:45:17.344739463Z" level=info msg="TearDown network for sandbox \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\" successfully" May 15 00:45:17.344769 containerd[2742]: time="2025-05-15T00:45:17.344749344Z" level=info msg="StopPodSandbox for \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\" returns successfully" May 15 00:45:17.344823 kubelet[4479]: I0515 00:45:17.344812 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8" May 15 00:45:17.345119 containerd[2742]: time="2025-05-15T00:45:17.345102639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596b57fb45-8rtv2,Uid:9172cd52-0d57-416c-ba87-816358bd2302,Namespace:calico-system,Attempt:3,}" May 15 00:45:17.345162 containerd[2742]: time="2025-05-15T00:45:17.345145921Z" level=info msg="StopPodSandbox for \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\"" May 15 00:45:17.345320 containerd[2742]: time="2025-05-15T00:45:17.345307008Z" level=info msg="Ensure that sandbox 7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8 in task-service has been cleanup successfully" May 15 00:45:17.345491 containerd[2742]: time="2025-05-15T00:45:17.345477895Z" level=info msg="TearDown network for sandbox \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\" successfully" May 15 00:45:17.345510 containerd[2742]: time="2025-05-15T00:45:17.345492216Z" level=info msg="StopPodSandbox for \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\" returns successfully" May 15 00:45:17.345689 containerd[2742]: time="2025-05-15T00:45:17.345673624Z" level=info msg="StopPodSandbox for \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\"" May 15 00:45:17.345751 containerd[2742]: time="2025-05-15T00:45:17.345742347Z" level=info msg="TearDown network for sandbox \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\" successfully" May 15 00:45:17.345777 containerd[2742]: time="2025-05-15T00:45:17.345752387Z" level=info msg="StopPodSandbox for \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\" returns successfully" May 15 00:45:17.345908 kubelet[4479]: I0515 00:45:17.345897 4479 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b" May 15 00:45:17.345929 containerd[2742]: time="2025-05-15T00:45:17.345909234Z" level=info msg="StopPodSandbox for \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\"" May 15 00:45:17.345994 containerd[2742]: time="2025-05-15T00:45:17.345979397Z" level=info msg="TearDown network for sandbox \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\" successfully" May 15 00:45:17.346013 containerd[2742]: time="2025-05-15T00:45:17.345994358Z" level=info msg="StopPodSandbox for \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\" returns successfully" May 15 00:45:17.346269 containerd[2742]: time="2025-05-15T00:45:17.346254089Z" level=info msg="StopPodSandbox for \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\"" May 15 00:45:17.346318 systemd[1]: run-netns-cni\x2d8a6024ed\x2d36b7\x2dabea\x2dcbd3\x2dad52cd412c6e.mount: Deactivated successfully. May 15 00:45:17.346358 containerd[2742]: time="2025-05-15T00:45:17.346263570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-gzqjf,Uid:261e97f9-6a47-47b9-9d3b-8808cd9cee51,Namespace:calico-apiserver,Attempt:3,}" May 15 00:45:17.346391 containerd[2742]: time="2025-05-15T00:45:17.346378415Z" level=info msg="Ensure that sandbox 54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b in task-service has been cleanup successfully" May 15 00:45:17.346552 containerd[2742]: time="2025-05-15T00:45:17.346538542Z" level=info msg="TearDown network for sandbox \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\" successfully" May 15 00:45:17.346571 containerd[2742]: time="2025-05-15T00:45:17.346552222Z" level=info msg="StopPodSandbox for \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\" returns successfully" May 15 00:45:17.346793 containerd[2742]: time="2025-05-15T00:45:17.346776632Z" level=info msg="StopPodSandbox for \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\"" May 15 00:45:17.346858 containerd[2742]: time="2025-05-15T00:45:17.346850435Z" level=info msg="TearDown network for sandbox \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\" successfully" May 15 00:45:17.346879 containerd[2742]: time="2025-05-15T00:45:17.346859156Z" level=info msg="StopPodSandbox for \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\" returns successfully" May 15 00:45:17.347045 containerd[2742]: time="2025-05-15T00:45:17.347027483Z" level=info msg="StopPodSandbox for \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\"" May 15 00:45:17.347112 containerd[2742]: time="2025-05-15T00:45:17.347099646Z" level=info msg="TearDown network for sandbox \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\" successfully" May 15 00:45:17.347139 containerd[2742]: time="2025-05-15T00:45:17.347112167Z" level=info msg="StopPodSandbox for \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\" returns successfully" May 15 00:45:17.347428 containerd[2742]: time="2025-05-15T00:45:17.347407980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lhgzs,Uid:d4c8c3bf-81ae-482c-8ebe-086a362f11d9,Namespace:calico-system,Attempt:3,}" May 15 00:45:17.348844 systemd[1]: run-netns-cni\x2d1f332cdb\x2d29c2\x2d7502\x2d4fa3\x2d161ceaa7df21.mount: Deactivated successfully. 
May 15 00:45:17.348913 systemd[1]: run-netns-cni\x2d926af6de\x2dc5ff\x2d41e2\x2db5b0\x2d9a1a271d53bf.mount: Deactivated successfully. May 15 00:45:17.390608 containerd[2742]: time="2025-05-15T00:45:17.390560544Z" level=error msg="Failed to destroy network for sandbox \"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.391116 containerd[2742]: time="2025-05-15T00:45:17.391090287Z" level=error msg="encountered an error cleaning up failed sandbox \"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.391167 containerd[2742]: time="2025-05-15T00:45:17.391152010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vrcxb,Uid:c07e4604-a033-42cb-84d7-c3f474ee1073,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.391661 containerd[2742]: time="2025-05-15T00:45:17.391308217Z" level=error msg="Failed to destroy network for sandbox \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.391661 containerd[2742]: time="2025-05-15T00:45:17.391638671Z" level=error msg="encountered an error cleaning up failed sandbox \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.391731 kubelet[4479]: E0515 00:45:17.391329 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.391731 kubelet[4479]: E0515 00:45:17.391382 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vrcxb" May 15 00:45:17.391731 kubelet[4479]: E0515 00:45:17.391405 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vrcxb" May 15 00:45:17.392036 containerd[2742]: time="2025-05-15T00:45:17.391694834Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-smfp6,Uid:3c5cf155-23c9-4359-ad93-bccee4c59f03,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.392090 kubelet[4479]: E0515 00:45:17.391447 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vrcxb_kube-system(c07e4604-a033-42cb-84d7-c3f474ee1073)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vrcxb_kube-system(c07e4604-a033-42cb-84d7-c3f474ee1073)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vrcxb" podUID="c07e4604-a033-42cb-84d7-c3f474ee1073" May 15 00:45:17.392090 kubelet[4479]: E0515 00:45:17.391820 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.392090 kubelet[4479]: E0515 00:45:17.391856 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8c899d4d9-smfp6" May 15 00:45:17.392165 kubelet[4479]: E0515 00:45:17.391873 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8c899d4d9-smfp6" May 15 00:45:17.392165 kubelet[4479]: E0515 00:45:17.391906 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8c899d4d9-smfp6_calico-apiserver(3c5cf155-23c9-4359-ad93-bccee4c59f03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8c899d4d9-smfp6_calico-apiserver(3c5cf155-23c9-4359-ad93-bccee4c59f03)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8c899d4d9-smfp6" podUID="3c5cf155-23c9-4359-ad93-bccee4c59f03" May 15 00:45:17.392385 containerd[2742]: time="2025-05-15T00:45:17.392353022Z" level=error msg="Failed to destroy network for sandbox \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.392691 containerd[2742]: time="2025-05-15T00:45:17.392668716Z" level=error msg="encountered an error cleaning up failed sandbox \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.392735 containerd[2742]: time="2025-05-15T00:45:17.392720558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhwf,Uid:439669d8-aecd-4ab8-a6b4-fddb8f0d476b,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.392839 kubelet[4479]: E0515 00:45:17.392823 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.392863 kubelet[4479]: E0515 00:45:17.392850 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhwf" May 15 00:45:17.392884 kubelet[4479]: E0515 00:45:17.392867 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nzhwf" May 15 00:45:17.392920 kubelet[4479]: E0515 00:45:17.392901 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nzhwf_kube-system(439669d8-aecd-4ab8-a6b4-fddb8f0d476b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-nzhwf_kube-system(439669d8-aecd-4ab8-a6b4-fddb8f0d476b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nzhwf" podUID="439669d8-aecd-4ab8-a6b4-fddb8f0d476b" May 15 00:45:17.396058 containerd[2742]: time="2025-05-15T00:45:17.396021182Z" level=error msg="Failed to destroy network for sandbox \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.396202 containerd[2742]: time="2025-05-15T00:45:17.396175069Z" level=error msg="Failed to destroy network for sandbox \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.396379 containerd[2742]: time="2025-05-15T00:45:17.396357117Z" level=error msg="encountered an error cleaning up failed sandbox \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.396429 containerd[2742]: time="2025-05-15T00:45:17.396413560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596b57fb45-8rtv2,Uid:9172cd52-0d57-416c-ba87-816358bd2302,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.396507 containerd[2742]: time="2025-05-15T00:45:17.396486323Z" level=error msg="encountered an error cleaning up failed sandbox \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.396567 kubelet[4479]: E0515 00:45:17.396546 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.396595 kubelet[4479]: E0515 00:45:17.396579 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596b57fb45-8rtv2" May 15 00:45:17.396616 containerd[2742]: time="2025-05-15T00:45:17.396564006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-gzqjf,Uid:261e97f9-6a47-47b9-9d3b-8808cd9cee51,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.396657 kubelet[4479]: E0515 00:45:17.396594 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596b57fb45-8rtv2" May 15 00:45:17.396657 kubelet[4479]: E0515 00:45:17.396623 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-596b57fb45-8rtv2_calico-system(9172cd52-0d57-416c-ba87-816358bd2302)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-596b57fb45-8rtv2_calico-system(9172cd52-0d57-416c-ba87-816358bd2302)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596b57fb45-8rtv2" podUID="9172cd52-0d57-416c-ba87-816358bd2302" May 15 00:45:17.396709 kubelet[4479]: E0515 00:45:17.396661 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.396709 kubelet[4479]: E0515 00:45:17.396678 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8c899d4d9-gzqjf" May 15 00:45:17.396709 kubelet[4479]: E0515 00:45:17.396690 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8c899d4d9-gzqjf" May 15 00:45:17.396765 kubelet[4479]: 
E0515 00:45:17.396712 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8c899d4d9-gzqjf_calico-apiserver(261e97f9-6a47-47b9-9d3b-8808cd9cee51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8c899d4d9-gzqjf_calico-apiserver(261e97f9-6a47-47b9-9d3b-8808cd9cee51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8c899d4d9-gzqjf" podUID="261e97f9-6a47-47b9-9d3b-8808cd9cee51" May 15 00:45:17.399745 containerd[2742]: time="2025-05-15T00:45:17.399715464Z" level=error msg="Failed to destroy network for sandbox \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.400035 containerd[2742]: time="2025-05-15T00:45:17.400012517Z" level=error msg="encountered an error cleaning up failed sandbox \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.400076 containerd[2742]: time="2025-05-15T00:45:17.400061159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lhgzs,Uid:d4c8c3bf-81ae-482c-8ebe-086a362f11d9,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.400199 kubelet[4479]: E0515 00:45:17.400174 4479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:45:17.400224 kubelet[4479]: E0515 00:45:17.400215 4479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lhgzs" May 15 00:45:17.400244 kubelet[4479]: E0515 00:45:17.400230 4479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-lhgzs" May 15 00:45:17.400277 kubelet[4479]: E0515 00:45:17.400260 4479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lhgzs_calico-system(d4c8c3bf-81ae-482c-8ebe-086a362f11d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lhgzs_calico-system(d4c8c3bf-81ae-482c-8ebe-086a362f11d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lhgzs" podUID="d4c8c3bf-81ae-482c-8ebe-086a362f11d9" May 15 00:45:17.925716 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04-shm.mount: Deactivated successfully. May 15 00:45:17.938625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4144160345.mount: Deactivated successfully. May 15 00:45:17.957735 containerd[2742]: time="2025-05-15T00:45:17.957695868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:17.957800 containerd[2742]: time="2025-05-15T00:45:17.957696468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 15 00:45:17.958366 containerd[2742]: time="2025-05-15T00:45:17.958344577Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:17.959957 containerd[2742]: time="2025-05-15T00:45:17.959925686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:17.960536 containerd[2742]: time="2025-05-15T00:45:17.960513431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 2.633153902s" May 15 00:45:17.960560 containerd[2742]: time="2025-05-15T00:45:17.960540593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 15 00:45:17.966333 containerd[2742]: time="2025-05-15T00:45:17.966310125Z" level=info msg="CreateContainer within sandbox \"5f4a79ceeba17cbb92465f087d4596824a2f94e9c56ab36af3cefcfb5e6078a9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 00:45:17.974894 containerd[2742]: time="2025-05-15T00:45:17.974859218Z" level=info msg="CreateContainer within sandbox \"5f4a79ceeba17cbb92465f087d4596824a2f94e9c56ab36af3cefcfb5e6078a9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b605a7c4a7dd45e45de1e4e927e8b339b0d9e2efea2438ce03ffc85645738db1\"" May 15 00:45:17.975203 containerd[2742]: time="2025-05-15T00:45:17.975180832Z" level=info msg="StartContainer for 
\"b605a7c4a7dd45e45de1e4e927e8b339b0d9e2efea2438ce03ffc85645738db1\"" May 15 00:45:18.007092 systemd[1]: Started cri-containerd-b605a7c4a7dd45e45de1e4e927e8b339b0d9e2efea2438ce03ffc85645738db1.scope - libcontainer container b605a7c4a7dd45e45de1e4e927e8b339b0d9e2efea2438ce03ffc85645738db1. May 15 00:45:18.028977 containerd[2742]: time="2025-05-15T00:45:18.028951384Z" level=info msg="StartContainer for \"b605a7c4a7dd45e45de1e4e927e8b339b0d9e2efea2438ce03ffc85645738db1\" returns successfully" May 15 00:45:18.152342 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 00:45:18.152405 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 15 00:45:18.349424 kubelet[4479]: I0515 00:45:18.349392 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1" May 15 00:45:18.349800 containerd[2742]: time="2025-05-15T00:45:18.349774757Z" level=info msg="StopPodSandbox for \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\"" May 15 00:45:18.350060 containerd[2742]: time="2025-05-15T00:45:18.349930323Z" level=info msg="Ensure that sandbox cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1 in task-service has been cleanup successfully" May 15 00:45:18.350136 containerd[2742]: time="2025-05-15T00:45:18.350122931Z" level=info msg="TearDown network for sandbox \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\" successfully" May 15 00:45:18.350159 containerd[2742]: time="2025-05-15T00:45:18.350136332Z" level=info msg="StopPodSandbox for \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\" returns successfully" May 15 00:45:18.350782 containerd[2742]: time="2025-05-15T00:45:18.350756637Z" level=info msg="StopPodSandbox for \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\"" May 15 00:45:18.350861 containerd[2742]: time="2025-05-15T00:45:18.350849361Z" level=info msg="TearDown network for sandbox \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\" successfully" May 15 00:45:18.350881 containerd[2742]: time="2025-05-15T00:45:18.350861882Z" level=info msg="StopPodSandbox for \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\" returns successfully" May 15 00:45:18.351059 containerd[2742]: time="2025-05-15T00:45:18.351039529Z" level=info msg="StopPodSandbox for \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\"" May 15 00:45:18.351127 containerd[2742]: time="2025-05-15T00:45:18.351115252Z" level=info msg="TearDown network for sandbox \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\" successfully" May 15 00:45:18.351147 containerd[2742]: time="2025-05-15T00:45:18.351127332Z" level=info msg="StopPodSandbox for \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\" returns successfully" May 15 00:45:18.351298 kubelet[4479]: I0515 00:45:18.351284 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28" May 15 00:45:18.351345 containerd[2742]: time="2025-05-15T00:45:18.351326461Z" level=info msg="StopPodSandbox for \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\"" May 15 00:45:18.351420 containerd[2742]: time="2025-05-15T00:45:18.351410384Z" level=info msg="TearDown network for sandbox \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\" successfully" May 15 
00:45:18.351440 containerd[2742]: time="2025-05-15T00:45:18.351420744Z" level=info msg="StopPodSandbox for \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\" returns successfully" May 15 00:45:18.351659 containerd[2742]: time="2025-05-15T00:45:18.351646474Z" level=info msg="StopPodSandbox for \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\"" May 15 00:45:18.351772 containerd[2742]: time="2025-05-15T00:45:18.351753798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-smfp6,Uid:3c5cf155-23c9-4359-ad93-bccee4c59f03,Namespace:calico-apiserver,Attempt:4,}" May 15 00:45:18.351799 containerd[2742]: time="2025-05-15T00:45:18.351777359Z" level=info msg="Ensure that sandbox d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28 in task-service has been cleanup successfully" May 15 00:45:18.351943 containerd[2742]: time="2025-05-15T00:45:18.351930285Z" level=info msg="TearDown network for sandbox \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\" successfully" May 15 00:45:18.351962 containerd[2742]: time="2025-05-15T00:45:18.351943406Z" level=info msg="StopPodSandbox for \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\" returns successfully" May 15 00:45:18.352192 containerd[2742]: time="2025-05-15T00:45:18.352174815Z" level=info msg="StopPodSandbox for \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\"" May 15 00:45:18.352261 containerd[2742]: time="2025-05-15T00:45:18.352248458Z" level=info msg="TearDown network for sandbox \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\" successfully" May 15 00:45:18.352279 containerd[2742]: time="2025-05-15T00:45:18.352261499Z" level=info msg="StopPodSandbox for \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\" returns successfully" May 15 00:45:18.352434 containerd[2742]: time="2025-05-15T00:45:18.352421385Z" level=info msg="StopPodSandbox for \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\"" May 15 00:45:18.352486 containerd[2742]: time="2025-05-15T00:45:18.352477028Z" level=info msg="TearDown network for sandbox \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\" successfully" May 15 00:45:18.352505 containerd[2742]: time="2025-05-15T00:45:18.352486868Z" level=info msg="StopPodSandbox for \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\" returns successfully" May 15 00:45:18.352608 kubelet[4479]: I0515 00:45:18.352595 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05" May 15 00:45:18.352687 containerd[2742]: time="2025-05-15T00:45:18.352670476Z" level=info msg="StopPodSandbox for \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\"" May 15 00:45:18.352754 containerd[2742]: time="2025-05-15T00:45:18.352743279Z" level=info msg="TearDown network for sandbox \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\" successfully" May 15 00:45:18.352773 containerd[2742]: time="2025-05-15T00:45:18.352754359Z" level=info msg="StopPodSandbox for \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\" returns successfully" May 15 00:45:18.352973 containerd[2742]: time="2025-05-15T00:45:18.352957047Z" level=info msg="StopPodSandbox for \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\"" May 15 00:45:18.353074 containerd[2742]: time="2025-05-15T00:45:18.353060332Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhwf,Uid:439669d8-aecd-4ab8-a6b4-fddb8f0d476b,Namespace:kube-system,Attempt:4,}" May 15 00:45:18.353100 containerd[2742]: time="2025-05-15T00:45:18.353087813Z" level=info msg="Ensure that sandbox 3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05 in task-service has been cleanup successfully" May 15 00:45:18.353244 containerd[2742]: time="2025-05-15T00:45:18.353230699Z" level=info msg="TearDown network for sandbox \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\" successfully" May 15 00:45:18.353263 containerd[2742]: time="2025-05-15T00:45:18.353244379Z" level=info msg="StopPodSandbox for \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\" returns successfully" May 15 00:45:18.353598 containerd[2742]: time="2025-05-15T00:45:18.353579993Z" level=info msg="StopPodSandbox for \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\"" May 15 00:45:18.353673 containerd[2742]: time="2025-05-15T00:45:18.353660996Z" level=info msg="TearDown network for sandbox \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\" successfully" May 15 00:45:18.353693 containerd[2742]: time="2025-05-15T00:45:18.353672877Z" level=info msg="StopPodSandbox for \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\" returns successfully" May 15 00:45:18.354030 containerd[2742]: time="2025-05-15T00:45:18.354011450Z" level=info msg="StopPodSandbox for \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\"" May 15 00:45:18.354106 containerd[2742]: time="2025-05-15T00:45:18.354096134Z" level=info msg="TearDown network for sandbox \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\" successfully" May 15 00:45:18.354132 containerd[2742]: time="2025-05-15T00:45:18.354106774Z" level=info msg="StopPodSandbox for \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\" returns successfully" May 15 00:45:18.354270 containerd[2742]: time="2025-05-15T00:45:18.354257981Z" level=info msg="StopPodSandbox for \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\"" May 15 00:45:18.354323 kubelet[4479]: I0515 00:45:18.354307 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83" May 15 00:45:18.354345 containerd[2742]: time="2025-05-15T00:45:18.354320303Z" level=info msg="TearDown network for sandbox \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\" successfully" May 15 00:45:18.354345 containerd[2742]: time="2025-05-15T00:45:18.354329183Z" level=info msg="StopPodSandbox for \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\" returns successfully" May 15 00:45:18.354658 containerd[2742]: time="2025-05-15T00:45:18.354640356Z" level=info msg="StopPodSandbox for \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\"" May 15 00:45:18.354784 containerd[2742]: time="2025-05-15T00:45:18.354772282Z" level=info msg="Ensure that sandbox f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83 in task-service has been cleanup successfully" May 15 00:45:18.354821 containerd[2742]: time="2025-05-15T00:45:18.354800003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596b57fb45-8rtv2,Uid:9172cd52-0d57-416c-ba87-816358bd2302,Namespace:calico-system,Attempt:4,}" May 15 00:45:18.354957 containerd[2742]: time="2025-05-15T00:45:18.354942129Z" level=info 
msg="TearDown network for sandbox \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\" successfully" May 15 00:45:18.354978 containerd[2742]: time="2025-05-15T00:45:18.354957769Z" level=info msg="StopPodSandbox for \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\" returns successfully" May 15 00:45:18.355961 containerd[2742]: time="2025-05-15T00:45:18.355933129Z" level=info msg="StopPodSandbox for \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\"" May 15 00:45:18.356090 containerd[2742]: time="2025-05-15T00:45:18.356067255Z" level=info msg="TearDown network for sandbox \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\" successfully" May 15 00:45:18.356111 containerd[2742]: time="2025-05-15T00:45:18.356090736Z" level=info msg="StopPodSandbox for \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\" returns successfully" May 15 00:45:18.356567 containerd[2742]: time="2025-05-15T00:45:18.356542394Z" level=info msg="StopPodSandbox for \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\"" May 15 00:45:18.357474 containerd[2742]: time="2025-05-15T00:45:18.357228422Z" level=info msg="TearDown network for sandbox \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\" successfully" May 15 00:45:18.357474 containerd[2742]: time="2025-05-15T00:45:18.357252743Z" level=info msg="StopPodSandbox for \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\" returns successfully" May 15 00:45:18.357799 containerd[2742]: time="2025-05-15T00:45:18.357777045Z" level=info msg="StopPodSandbox for \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\"" May 15 00:45:18.357836 kubelet[4479]: I0515 00:45:18.357820 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c" May 15 00:45:18.357872 containerd[2742]: time="2025-05-15T00:45:18.357859928Z" level=info msg="TearDown network for sandbox \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\" successfully" May 15 00:45:18.357892 containerd[2742]: time="2025-05-15T00:45:18.357872049Z" level=info msg="StopPodSandbox for \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\" returns successfully" May 15 00:45:18.358212 containerd[2742]: time="2025-05-15T00:45:18.358191502Z" level=info msg="StopPodSandbox for \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\"" May 15 00:45:18.358257 containerd[2742]: time="2025-05-15T00:45:18.358204262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-gzqjf,Uid:261e97f9-6a47-47b9-9d3b-8808cd9cee51,Namespace:calico-apiserver,Attempt:4,}" May 15 00:45:18.358530 containerd[2742]: time="2025-05-15T00:45:18.358335867Z" level=info msg="Ensure that sandbox 5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c in task-service has been cleanup successfully" May 15 00:45:18.358530 containerd[2742]: time="2025-05-15T00:45:18.358489514Z" level=info msg="TearDown network for sandbox \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\" successfully" May 15 00:45:18.358530 containerd[2742]: time="2025-05-15T00:45:18.358505234Z" level=info msg="StopPodSandbox for \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\" returns successfully" May 15 00:45:18.358705 containerd[2742]: time="2025-05-15T00:45:18.358688682Z" level=info msg="StopPodSandbox for 
\"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\"" May 15 00:45:18.358771 containerd[2742]: time="2025-05-15T00:45:18.358759285Z" level=info msg="TearDown network for sandbox \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\" successfully" May 15 00:45:18.358771 containerd[2742]: time="2025-05-15T00:45:18.358769925Z" level=info msg="StopPodSandbox for \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\" returns successfully" May 15 00:45:18.359457 containerd[2742]: time="2025-05-15T00:45:18.359434912Z" level=info msg="StopPodSandbox for \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\"" May 15 00:45:18.359529 containerd[2742]: time="2025-05-15T00:45:18.359517876Z" level=info msg="TearDown network for sandbox \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\" successfully" May 15 00:45:18.359529 containerd[2742]: time="2025-05-15T00:45:18.359528516Z" level=info msg="StopPodSandbox for \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\" returns successfully" May 15 00:45:18.359773 containerd[2742]: time="2025-05-15T00:45:18.359753646Z" level=info msg="StopPodSandbox for \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\"" May 15 00:45:18.359837 containerd[2742]: time="2025-05-15T00:45:18.359824208Z" level=info msg="TearDown network for sandbox \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\" successfully" May 15 00:45:18.359884 containerd[2742]: time="2025-05-15T00:45:18.359835729Z" level=info msg="StopPodSandbox for \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\" returns successfully" May 15 00:45:18.360182 containerd[2742]: time="2025-05-15T00:45:18.360161062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lhgzs,Uid:d4c8c3bf-81ae-482c-8ebe-086a362f11d9,Namespace:calico-system,Attempt:4,}" May 15 00:45:18.361508 kubelet[4479]: I0515 00:45:18.361494 4479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04" May 15 00:45:18.362037 containerd[2742]: time="2025-05-15T00:45:18.361993217Z" level=info msg="StopPodSandbox for \"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\"" May 15 00:45:18.362171 containerd[2742]: time="2025-05-15T00:45:18.362157424Z" level=info msg="Ensure that sandbox 1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04 in task-service has been cleanup successfully" May 15 00:45:18.362355 containerd[2742]: time="2025-05-15T00:45:18.362341071Z" level=info msg="TearDown network for sandbox \"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\" successfully" May 15 00:45:18.362375 containerd[2742]: time="2025-05-15T00:45:18.362356272Z" level=info msg="StopPodSandbox for \"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\" returns successfully" May 15 00:45:18.362623 containerd[2742]: time="2025-05-15T00:45:18.362603402Z" level=info msg="StopPodSandbox for \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\"" May 15 00:45:18.362706 containerd[2742]: time="2025-05-15T00:45:18.362696006Z" level=info msg="TearDown network for sandbox \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\" successfully" May 15 00:45:18.362727 containerd[2742]: time="2025-05-15T00:45:18.362707406Z" level=info msg="StopPodSandbox for \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\" returns successfully" May 15 
00:45:18.362944 containerd[2742]: time="2025-05-15T00:45:18.362926335Z" level=info msg="StopPodSandbox for \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\"" May 15 00:45:18.363051 containerd[2742]: time="2025-05-15T00:45:18.363037660Z" level=info msg="TearDown network for sandbox \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\" successfully" May 15 00:45:18.363081 containerd[2742]: time="2025-05-15T00:45:18.363051021Z" level=info msg="StopPodSandbox for \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\" returns successfully" May 15 00:45:18.363290 containerd[2742]: time="2025-05-15T00:45:18.363273230Z" level=info msg="StopPodSandbox for \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\"" May 15 00:45:18.363363 containerd[2742]: time="2025-05-15T00:45:18.363352393Z" level=info msg="TearDown network for sandbox \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\" successfully" May 15 00:45:18.363383 containerd[2742]: time="2025-05-15T00:45:18.363363033Z" level=info msg="StopPodSandbox for \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\" returns successfully" May 15 00:45:18.363736 containerd[2742]: time="2025-05-15T00:45:18.363714408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vrcxb,Uid:c07e4604-a033-42cb-84d7-c3f474ee1073,Namespace:kube-system,Attempt:4,}" May 15 00:45:18.370457 kubelet[4479]: I0515 00:45:18.370405 4479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-27pvq" podStartSLOduration=1.488693732 podStartE2EDuration="8.370391801s" podCreationTimestamp="2025-05-15 00:45:10 +0000 UTC" firstStartedPulling="2025-05-15 00:45:11.079376107 +0000 UTC m=+21.865841469" lastFinishedPulling="2025-05-15 00:45:17.961074176 +0000 UTC m=+28.747539538" observedRunningTime="2025-05-15 00:45:18.370179192 +0000 UTC m=+29.156644554" watchObservedRunningTime="2025-05-15 00:45:18.370391801 +0000 UTC m=+29.156857123" May 15 00:45:18.469037 systemd-networkd[2647]: cali482e590df80: Link UP May 15 00:45:18.469280 systemd-networkd[2647]: cali482e590df80: Gained carrier May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.378 [INFO][7008] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.390 [INFO][7008] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--nzhwf-eth0 coredns-7db6d8ff4d- kube-system 439669d8-aecd-4ab8-a6b4-fddb8f0d476b 651 0 2025-05-15 00:45:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230.1.1-n-3631181341 coredns-7db6d8ff4d-nzhwf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali482e590df80 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzhwf" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--nzhwf-" May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.390 [INFO][7008] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzhwf" 
WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--nzhwf-eth0" May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.424 [INFO][7156] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" HandleID="k8s-pod-network.e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" Workload="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--nzhwf-eth0" May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.433 [INFO][7156] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" HandleID="k8s-pod-network.e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" Workload="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--nzhwf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330b50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230.1.1-n-3631181341", "pod":"coredns-7db6d8ff4d-nzhwf", "timestamp":"2025-05-15 00:45:18.424671903 +0000 UTC"}, Hostname:"ci-4230.1.1-n-3631181341", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.434 [INFO][7156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.434 [INFO][7156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.434 [INFO][7156] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-3631181341' May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.435 [INFO][7156] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.438 [INFO][7156] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-3631181341" May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.441 [INFO][7156] ipam/ipam.go 489: Trying affinity for 192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.442 [INFO][7156] ipam/ipam.go 155: Attempting to load block cidr=192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.443 [INFO][7156] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.443 [INFO][7156] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.27.0/26 handle="k8s-pod-network.e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.444 [INFO][7156] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339 May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.451 [INFO][7156] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.27.0/26 handle="k8s-pod-network.e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.476108 containerd[2742]: 2025-05-15 
00:45:18.461 [INFO][7156] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.27.1/26] block=192.168.27.0/26 handle="k8s-pod-network.e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.462 [INFO][7156] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.27.1/26] handle="k8s-pod-network.e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.462 [INFO][7156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:45:18.476108 containerd[2742]: 2025-05-15 00:45:18.462 [INFO][7156] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.1/26] IPv6=[] ContainerID="e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" HandleID="k8s-pod-network.e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" Workload="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--nzhwf-eth0" May 15 00:45:18.476572 containerd[2742]: 2025-05-15 00:45:18.463 [INFO][7008] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzhwf" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--nzhwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--nzhwf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"439669d8-aecd-4ab8-a6b4-fddb8f0d476b", ResourceVersion:"651", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 45, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-3631181341", ContainerID:"", Pod:"coredns-7db6d8ff4d-nzhwf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali482e590df80", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:45:18.476572 containerd[2742]: 2025-05-15 00:45:18.464 [INFO][7008] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.27.1/32] ContainerID="e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzhwf" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--nzhwf-eth0" May 15 00:45:18.476572 containerd[2742]: 2025-05-15 
00:45:18.464 [INFO][7008] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali482e590df80 ContainerID="e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzhwf" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--nzhwf-eth0" May 15 00:45:18.476572 containerd[2742]: 2025-05-15 00:45:18.469 [INFO][7008] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzhwf" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--nzhwf-eth0" May 15 00:45:18.476572 containerd[2742]: 2025-05-15 00:45:18.469 [INFO][7008] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzhwf" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--nzhwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--nzhwf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"439669d8-aecd-4ab8-a6b4-fddb8f0d476b", ResourceVersion:"651", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 45, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-3631181341", ContainerID:"e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339", Pod:"coredns-7db6d8ff4d-nzhwf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali482e590df80", MAC:"6e:4f:e5:31:86:2b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:45:18.476572 containerd[2742]: 2025-05-15 00:45:18.474 [INFO][7008] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nzhwf" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--nzhwf-eth0" May 15 00:45:18.484046 systemd-networkd[2647]: cali1f96e695249: Link UP May 15 00:45:18.484209 systemd-networkd[2647]: cali1f96e695249: Gained carrier May 15 00:45:18.490119 containerd[2742]: time="2025-05-15T00:45:18.489744807Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:45:18.490119 containerd[2742]: time="2025-05-15T00:45:18.490109582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:45:18.490176 containerd[2742]: time="2025-05-15T00:45:18.490122422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:18.490206 containerd[2742]: time="2025-05-15T00:45:18.490189585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.379 [INFO][7014] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.390 [INFO][7014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--3631181341-k8s-calico--kube--controllers--596b57fb45--8rtv2-eth0 calico-kube-controllers-596b57fb45- calico-system 9172cd52-0d57-416c-ba87-816358bd2302 648 0 2025-05-15 00:45:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:596b57fb45 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4230.1.1-n-3631181341 calico-kube-controllers-596b57fb45-8rtv2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1f96e695249 [] []}} ContainerID="050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" Namespace="calico-system" Pod="calico-kube-controllers-596b57fb45-8rtv2" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--kube--controllers--596b57fb45--8rtv2-" May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.390 [INFO][7014] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" Namespace="calico-system" Pod="calico-kube-controllers-596b57fb45-8rtv2" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--kube--controllers--596b57fb45--8rtv2-eth0" May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.424 [INFO][7153] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" HandleID="k8s-pod-network.050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" Workload="ci--4230.1.1--n--3631181341-k8s-calico--kube--controllers--596b57fb45--8rtv2-eth0" May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.434 [INFO][7153] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" HandleID="k8s-pod-network.050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" Workload="ci--4230.1.1--n--3631181341-k8s-calico--kube--controllers--596b57fb45--8rtv2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000502c20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230.1.1-n-3631181341", "pod":"calico-kube-controllers-596b57fb45-8rtv2", "timestamp":"2025-05-15 00:45:18.424670863 +0000 UTC"}, Hostname:"ci-4230.1.1-n-3631181341", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.434 [INFO][7153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.462 [INFO][7153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.462 [INFO][7153] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-3631181341' May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.464 [INFO][7153] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.467 [INFO][7153] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-3631181341" May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.470 [INFO][7153] ipam/ipam.go 489: Trying affinity for 192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.472 [INFO][7153] ipam/ipam.go 155: Attempting to load block cidr=192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.473 [INFO][7153] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.473 [INFO][7153] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.27.0/26 handle="k8s-pod-network.050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.474 [INFO][7153] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.477 [INFO][7153] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.27.0/26 handle="k8s-pod-network.050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.481 [INFO][7153] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.27.2/26] block=192.168.27.0/26 handle="k8s-pod-network.050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.481 [INFO][7153] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.27.2/26] handle="k8s-pod-network.050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.481 [INFO][7153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
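In the IPAM entries above, Calico confirms this host's affinity for the block 192.168.27.0/26 and assigns 192.168.27.1/26 to coredns-7db6d8ff4d-nzhwf and, just below, 192.168.27.2/26 to calico-kube-controllers-596b57fb45-8rtv2. The short, self-contained Go sketch that follows is illustrative only, not Calico's own IPAM code; it simply checks that those two addresses fall inside the /26 block reported in the log.

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Block and addresses taken from the IPAM log lines above.
        _, block, err := net.ParseCIDR("192.168.27.0/26")
        if err != nil {
            panic(err)
        }
        for _, addr := range []string{"192.168.27.1", "192.168.27.2"} {
            ip := net.ParseIP(addr)
            fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
        }
    }

A /26 covers 192.168.27.0 through 192.168.27.63, so handing out .1 and .2 sequentially on the same node is consistent with the affinity messages recorded here.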
May 15 00:45:18.490600 containerd[2742]: 2025-05-15 00:45:18.481 [INFO][7153] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.2/26] IPv6=[] ContainerID="050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" HandleID="k8s-pod-network.050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" Workload="ci--4230.1.1--n--3631181341-k8s-calico--kube--controllers--596b57fb45--8rtv2-eth0" May 15 00:45:18.490973 containerd[2742]: 2025-05-15 00:45:18.482 [INFO][7014] cni-plugin/k8s.go 386: Populated endpoint ContainerID="050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" Namespace="calico-system" Pod="calico-kube-controllers-596b57fb45-8rtv2" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--kube--controllers--596b57fb45--8rtv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--3631181341-k8s-calico--kube--controllers--596b57fb45--8rtv2-eth0", GenerateName:"calico-kube-controllers-596b57fb45-", Namespace:"calico-system", SelfLink:"", UID:"9172cd52-0d57-416c-ba87-816358bd2302", ResourceVersion:"648", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596b57fb45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-3631181341", ContainerID:"", Pod:"calico-kube-controllers-596b57fb45-8rtv2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.27.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f96e695249", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:45:18.490973 containerd[2742]: 2025-05-15 00:45:18.483 [INFO][7014] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.27.2/32] ContainerID="050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" Namespace="calico-system" Pod="calico-kube-controllers-596b57fb45-8rtv2" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--kube--controllers--596b57fb45--8rtv2-eth0" May 15 00:45:18.490973 containerd[2742]: 2025-05-15 00:45:18.483 [INFO][7014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f96e695249 ContainerID="050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" Namespace="calico-system" Pod="calico-kube-controllers-596b57fb45-8rtv2" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--kube--controllers--596b57fb45--8rtv2-eth0" May 15 00:45:18.490973 containerd[2742]: 2025-05-15 00:45:18.484 [INFO][7014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" Namespace="calico-system" Pod="calico-kube-controllers-596b57fb45-8rtv2" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--kube--controllers--596b57fb45--8rtv2-eth0" May 15 00:45:18.490973 
containerd[2742]: 2025-05-15 00:45:18.484 [INFO][7014] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" Namespace="calico-system" Pod="calico-kube-controllers-596b57fb45-8rtv2" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--kube--controllers--596b57fb45--8rtv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--3631181341-k8s-calico--kube--controllers--596b57fb45--8rtv2-eth0", GenerateName:"calico-kube-controllers-596b57fb45-", Namespace:"calico-system", SelfLink:"", UID:"9172cd52-0d57-416c-ba87-816358bd2302", ResourceVersion:"648", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596b57fb45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-3631181341", ContainerID:"050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c", Pod:"calico-kube-controllers-596b57fb45-8rtv2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.27.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f96e695249", MAC:"ae:10:30:d5:58:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:45:18.490973 containerd[2742]: 2025-05-15 00:45:18.489 [INFO][7014] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c" Namespace="calico-system" Pod="calico-kube-controllers-596b57fb45-8rtv2" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--kube--controllers--596b57fb45--8rtv2-eth0" May 15 00:45:18.502584 systemd-networkd[2647]: cali2922153b683: Link UP May 15 00:45:18.502764 systemd-networkd[2647]: cali2922153b683: Gained carrier May 15 00:45:18.505731 containerd[2742]: time="2025-05-15T00:45:18.505662939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:45:18.505731 containerd[2742]: time="2025-05-15T00:45:18.505722501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:45:18.505788 containerd[2742]: time="2025-05-15T00:45:18.505733341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:18.505827 containerd[2742]: time="2025-05-15T00:45:18.505808105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.385 [INFO][7067] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.395 [INFO][7067] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--vrcxb-eth0 coredns-7db6d8ff4d- kube-system c07e4604-a033-42cb-84d7-c3f474ee1073 649 0 2025-05-15 00:45:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230.1.1-n-3631181341 coredns-7db6d8ff4d-vrcxb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2922153b683 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vrcxb" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--vrcxb-" May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.395 [INFO][7067] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vrcxb" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--vrcxb-eth0" May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.424 [INFO][7186] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" HandleID="k8s-pod-network.a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" Workload="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--vrcxb-eth0" May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.434 [INFO][7186] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" HandleID="k8s-pod-network.a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" Workload="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--vrcxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40007b2c30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230.1.1-n-3631181341", "pod":"coredns-7db6d8ff4d-vrcxb", "timestamp":"2025-05-15 00:45:18.424669543 +0000 UTC"}, Hostname:"ci-4230.1.1-n-3631181341", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.434 [INFO][7186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.481 [INFO][7186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.481 [INFO][7186] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-3631181341' May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.483 [INFO][7186] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.486 [INFO][7186] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-3631181341" May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.489 [INFO][7186] ipam/ipam.go 489: Trying affinity for 192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.490 [INFO][7186] ipam/ipam.go 155: Attempting to load block cidr=192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.492 [INFO][7186] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.492 [INFO][7186] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.27.0/26 handle="k8s-pod-network.a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.493 [INFO][7186] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.496 [INFO][7186] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.27.0/26 handle="k8s-pod-network.a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.499 [INFO][7186] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.27.3/26] block=192.168.27.0/26 handle="k8s-pod-network.a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.499 [INFO][7186] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.27.3/26] handle="k8s-pod-network.a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.499 [INFO][7186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
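The systemd-networkd messages above ("cali2922153b683: Link UP ... Gained carrier") report the host-side veth created for the coredns pod that was just assigned 192.168.27.3. A minimal standard-library check of such an interface from Go is sketched below; the interface name is copied from the log, and the program is assumed to run on the node itself, where that veth exists.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Interface name taken from the systemd-networkd log entry above.
	iface, err := net.InterfaceByName("cali2922153b683")
	if err != nil {
		fmt.Println("lookup failed (not running on that node?):", err)
		return
	}
	// Report basic link state; "up" corresponds to the "Gained carrier" message.
	fmt.Printf("index=%d mtu=%d mac=%s up=%v\n",
		iface.Index, iface.MTU, iface.HardwareAddr,
		iface.Flags&net.FlagUp != 0)
}
```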
May 15 00:45:18.509704 containerd[2742]: 2025-05-15 00:45:18.500 [INFO][7186] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.3/26] IPv6=[] ContainerID="a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" HandleID="k8s-pod-network.a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" Workload="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--vrcxb-eth0" May 15 00:45:18.510082 containerd[2742]: 2025-05-15 00:45:18.501 [INFO][7067] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vrcxb" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--vrcxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--vrcxb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c07e4604-a033-42cb-84d7-c3f474ee1073", ResourceVersion:"649", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 45, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-3631181341", ContainerID:"", Pod:"coredns-7db6d8ff4d-vrcxb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2922153b683", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:45:18.510082 containerd[2742]: 2025-05-15 00:45:18.501 [INFO][7067] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.27.3/32] ContainerID="a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vrcxb" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--vrcxb-eth0" May 15 00:45:18.510082 containerd[2742]: 2025-05-15 00:45:18.501 [INFO][7067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2922153b683 ContainerID="a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vrcxb" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--vrcxb-eth0" May 15 00:45:18.510082 containerd[2742]: 2025-05-15 00:45:18.502 [INFO][7067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vrcxb" 
WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--vrcxb-eth0" May 15 00:45:18.510082 containerd[2742]: 2025-05-15 00:45:18.503 [INFO][7067] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vrcxb" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--vrcxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--vrcxb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c07e4604-a033-42cb-84d7-c3f474ee1073", ResourceVersion:"649", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 45, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-3631181341", ContainerID:"a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa", Pod:"coredns-7db6d8ff4d-vrcxb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2922153b683", MAC:"96:ae:3e:20:64:f5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:45:18.510082 containerd[2742]: 2025-05-15 00:45:18.508 [INFO][7067] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vrcxb" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-coredns--7db6d8ff4d--vrcxb-eth0" May 15 00:45:18.519131 systemd[1]: Started cri-containerd-e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339.scope - libcontainer container e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339. May 15 00:45:18.521978 systemd[1]: Started cri-containerd-050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c.scope - libcontainer container 050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c. May 15 00:45:18.522894 containerd[2742]: time="2025-05-15T00:45:18.522831361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:45:18.522894 containerd[2742]: time="2025-05-15T00:45:18.522880363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:45:18.522894 containerd[2742]: time="2025-05-15T00:45:18.522891124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:18.523010 containerd[2742]: time="2025-05-15T00:45:18.522967727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:18.523126 systemd-networkd[2647]: cali66691eff83d: Link UP May 15 00:45:18.523265 systemd-networkd[2647]: cali66691eff83d: Gained carrier May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.381 [INFO][7037] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.393 [INFO][7037] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--3631181341-k8s-csi--node--driver--lhgzs-eth0 csi-node-driver- calico-system d4c8c3bf-81ae-482c-8ebe-086a362f11d9 597 0 2025-05-15 00:45:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4230.1.1-n-3631181341 csi-node-driver-lhgzs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali66691eff83d [] []}} ContainerID="f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" Namespace="calico-system" Pod="csi-node-driver-lhgzs" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-csi--node--driver--lhgzs-" May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.393 [INFO][7037] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" Namespace="calico-system" Pod="csi-node-driver-lhgzs" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-csi--node--driver--lhgzs-eth0" May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.424 [INFO][7176] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" HandleID="k8s-pod-network.f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" Workload="ci--4230.1.1--n--3631181341-k8s-csi--node--driver--lhgzs-eth0" May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.434 [INFO][7176] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" HandleID="k8s-pod-network.f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" Workload="ci--4230.1.1--n--3631181341-k8s-csi--node--driver--lhgzs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000372d30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230.1.1-n-3631181341", "pod":"csi-node-driver-lhgzs", "timestamp":"2025-05-15 00:45:18.424674863 +0000 UTC"}, Hostname:"ci-4230.1.1-n-3631181341", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.434 [INFO][7176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.499 [INFO][7176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.500 [INFO][7176] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-3631181341' May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.502 [INFO][7176] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.505 [INFO][7176] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-3631181341" May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.507 [INFO][7176] ipam/ipam.go 489: Trying affinity for 192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.509 [INFO][7176] ipam/ipam.go 155: Attempting to load block cidr=192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.511 [INFO][7176] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.511 [INFO][7176] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.27.0/26 handle="k8s-pod-network.f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.514 [INFO][7176] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87 May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.516 [INFO][7176] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.27.0/26 handle="k8s-pod-network.f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.520 [INFO][7176] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.27.4/26] block=192.168.27.0/26 handle="k8s-pod-network.f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.520 [INFO][7176] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.27.4/26] handle="k8s-pod-network.f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.520 [INFO][7176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
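By this point several concurrent CNI ADD requests ([7153], [7186], [7176], ...) have each gone through the same acquire/assign/release cycle on the "host-wide IPAM lock", which is why the pods receive consecutive addresses (.2, .3, .4, ...). The toy model below illustrates only that serialization pattern with a mutex; the type, field names, and starting offset are assumptions, not the real ipam_plugin code, and which pod ends up with which address varies run to run just as it depends on scheduling here.

```go
package main

import (
	"fmt"
	"sync"
)

type hostIPAM struct {
	mu   sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	next int        // next free host offset inside 192.168.27.0/26
}

func (h *hostIPAM) autoAssign(pod string) string {
	h.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.27.%d/26", h.next)
	h.next++
	return ip
}

func main() {
	ipam := &hostIPAM{next: 2} // .0 and .1 assumed already in use
	pods := []string{
		"calico-kube-controllers-596b57fb45-8rtv2",
		"coredns-7db6d8ff4d-vrcxb",
		"csi-node-driver-lhgzs",
		"calico-apiserver-8c899d4d9-gzqjf",
	}
	var wg sync.WaitGroup
	for _, p := range pods {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			// Addresses are unique and sequential; pod-to-address pairing
			// depends on goroutine scheduling, as it does in the log.
			fmt.Println(p, "=>", ipam.autoAssign(p))
		}(p)
	}
	wg.Wait()
}
```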
May 15 00:45:18.530649 containerd[2742]: 2025-05-15 00:45:18.520 [INFO][7176] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.4/26] IPv6=[] ContainerID="f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" HandleID="k8s-pod-network.f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" Workload="ci--4230.1.1--n--3631181341-k8s-csi--node--driver--lhgzs-eth0" May 15 00:45:18.531114 containerd[2742]: 2025-05-15 00:45:18.521 [INFO][7037] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" Namespace="calico-system" Pod="csi-node-driver-lhgzs" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-csi--node--driver--lhgzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--3631181341-k8s-csi--node--driver--lhgzs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d4c8c3bf-81ae-482c-8ebe-086a362f11d9", ResourceVersion:"597", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-3631181341", ContainerID:"", Pod:"csi-node-driver-lhgzs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.27.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66691eff83d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:45:18.531114 containerd[2742]: 2025-05-15 00:45:18.522 [INFO][7037] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.27.4/32] ContainerID="f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" Namespace="calico-system" Pod="csi-node-driver-lhgzs" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-csi--node--driver--lhgzs-eth0" May 15 00:45:18.531114 containerd[2742]: 2025-05-15 00:45:18.522 [INFO][7037] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66691eff83d ContainerID="f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" Namespace="calico-system" Pod="csi-node-driver-lhgzs" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-csi--node--driver--lhgzs-eth0" May 15 00:45:18.531114 containerd[2742]: 2025-05-15 00:45:18.523 [INFO][7037] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" Namespace="calico-system" Pod="csi-node-driver-lhgzs" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-csi--node--driver--lhgzs-eth0" May 15 00:45:18.531114 containerd[2742]: 2025-05-15 00:45:18.523 [INFO][7037] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" 
Namespace="calico-system" Pod="csi-node-driver-lhgzs" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-csi--node--driver--lhgzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--3631181341-k8s-csi--node--driver--lhgzs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d4c8c3bf-81ae-482c-8ebe-086a362f11d9", ResourceVersion:"597", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-3631181341", ContainerID:"f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87", Pod:"csi-node-driver-lhgzs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.27.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66691eff83d", MAC:"82:bf:17:ad:2a:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:45:18.531114 containerd[2742]: 2025-05-15 00:45:18.529 [INFO][7037] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87" Namespace="calico-system" Pod="csi-node-driver-lhgzs" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-csi--node--driver--lhgzs-eth0" May 15 00:45:18.532360 systemd[1]: Started cri-containerd-a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa.scope - libcontainer container a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa. May 15 00:45:18.543764 containerd[2742]: time="2025-05-15T00:45:18.543735177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nzhwf,Uid:439669d8-aecd-4ab8-a6b4-fddb8f0d476b,Namespace:kube-system,Attempt:4,} returns sandbox id \"e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339\"" May 15 00:45:18.544753 containerd[2742]: time="2025-05-15T00:45:18.544582612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:45:18.544753 containerd[2742]: time="2025-05-15T00:45:18.544627374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:45:18.544753 containerd[2742]: time="2025-05-15T00:45:18.544638094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:18.544753 containerd[2742]: time="2025-05-15T00:45:18.544704137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:18.545424 containerd[2742]: time="2025-05-15T00:45:18.545402765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596b57fb45-8rtv2,Uid:9172cd52-0d57-416c-ba87-816358bd2302,Namespace:calico-system,Attempt:4,} returns sandbox id \"050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c\"" May 15 00:45:18.546121 containerd[2742]: time="2025-05-15T00:45:18.546100834Z" level=info msg="CreateContainer within sandbox \"e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:45:18.546446 containerd[2742]: time="2025-05-15T00:45:18.546427847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 00:45:18.550390 systemd-networkd[2647]: calie278aad26e4: Link UP May 15 00:45:18.550571 systemd-networkd[2647]: calie278aad26e4: Gained carrier May 15 00:45:18.553241 containerd[2742]: time="2025-05-15T00:45:18.553214325Z" level=info msg="CreateContainer within sandbox \"e52674d7226e07cdad015074fadcb6e8f56d588c59214eb31bc4faf150451339\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"734921c08a20ccf14cf5a31df73cdf3a522160ae76f4cb5324f75f2681a56e5d\"" May 15 00:45:18.553604 containerd[2742]: time="2025-05-15T00:45:18.553583620Z" level=info msg="StartContainer for \"734921c08a20ccf14cf5a31df73cdf3a522160ae76f4cb5324f75f2681a56e5d\"" May 15 00:45:18.554779 systemd[1]: Started cri-containerd-f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87.scope - libcontainer container f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87. May 15 00:45:18.556895 containerd[2742]: time="2025-05-15T00:45:18.556858554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vrcxb,Uid:c07e4604-a033-42cb-84d7-c3f474ee1073,Namespace:kube-system,Attempt:4,} returns sandbox id \"a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa\"" May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.381 [INFO][7035] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.393 [INFO][7035] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--gzqjf-eth0 calico-apiserver-8c899d4d9- calico-apiserver 261e97f9-6a47-47b9-9d3b-8808cd9cee51 645 0 2025-05-15 00:45:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8c899d4d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230.1.1-n-3631181341 calico-apiserver-8c899d4d9-gzqjf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie278aad26e4 [] []}} ContainerID="ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" Namespace="calico-apiserver" Pod="calico-apiserver-8c899d4d9-gzqjf" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--gzqjf-" May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.393 [INFO][7035] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" Namespace="calico-apiserver" Pod="calico-apiserver-8c899d4d9-gzqjf" 
WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--gzqjf-eth0" May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.424 [INFO][7170] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" HandleID="k8s-pod-network.ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" Workload="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--gzqjf-eth0" May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.434 [INFO][7170] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" HandleID="k8s-pod-network.ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" Workload="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--gzqjf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000503730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230.1.1-n-3631181341", "pod":"calico-apiserver-8c899d4d9-gzqjf", "timestamp":"2025-05-15 00:45:18.424669463 +0000 UTC"}, Hostname:"ci-4230.1.1-n-3631181341", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.434 [INFO][7170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.520 [INFO][7170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.520 [INFO][7170] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-3631181341' May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.522 [INFO][7170] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.525 [INFO][7170] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-3631181341" May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.529 [INFO][7170] ipam/ipam.go 489: Trying affinity for 192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.531 [INFO][7170] ipam/ipam.go 155: Attempting to load block cidr=192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.533 [INFO][7170] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.533 [INFO][7170] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.27.0/26 handle="k8s-pod-network.ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.534 [INFO][7170] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6 May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.543 [INFO][7170] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.27.0/26 handle="k8s-pod-network.ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" host="ci-4230.1.1-n-3631181341" May 15 
00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.547 [INFO][7170] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.27.5/26] block=192.168.27.0/26 handle="k8s-pod-network.ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.547 [INFO][7170] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.27.5/26] handle="k8s-pod-network.ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.548 [INFO][7170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:45:18.556942 containerd[2742]: 2025-05-15 00:45:18.548 [INFO][7170] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.5/26] IPv6=[] ContainerID="ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" HandleID="k8s-pod-network.ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" Workload="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--gzqjf-eth0" May 15 00:45:18.557361 containerd[2742]: 2025-05-15 00:45:18.549 [INFO][7035] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" Namespace="calico-apiserver" Pod="calico-apiserver-8c899d4d9-gzqjf" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--gzqjf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--gzqjf-eth0", GenerateName:"calico-apiserver-8c899d4d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"261e97f9-6a47-47b9-9d3b-8808cd9cee51", ResourceVersion:"645", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8c899d4d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-3631181341", ContainerID:"", Pod:"calico-apiserver-8c899d4d9-gzqjf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie278aad26e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:45:18.557361 containerd[2742]: 2025-05-15 00:45:18.549 [INFO][7035] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.27.5/32] ContainerID="ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" Namespace="calico-apiserver" Pod="calico-apiserver-8c899d4d9-gzqjf" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--gzqjf-eth0" May 15 00:45:18.557361 containerd[2742]: 2025-05-15 00:45:18.549 [INFO][7035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie278aad26e4 
ContainerID="ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" Namespace="calico-apiserver" Pod="calico-apiserver-8c899d4d9-gzqjf" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--gzqjf-eth0" May 15 00:45:18.557361 containerd[2742]: 2025-05-15 00:45:18.550 [INFO][7035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" Namespace="calico-apiserver" Pod="calico-apiserver-8c899d4d9-gzqjf" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--gzqjf-eth0" May 15 00:45:18.557361 containerd[2742]: 2025-05-15 00:45:18.550 [INFO][7035] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" Namespace="calico-apiserver" Pod="calico-apiserver-8c899d4d9-gzqjf" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--gzqjf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--gzqjf-eth0", GenerateName:"calico-apiserver-8c899d4d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"261e97f9-6a47-47b9-9d3b-8808cd9cee51", ResourceVersion:"645", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8c899d4d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-3631181341", ContainerID:"ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6", Pod:"calico-apiserver-8c899d4d9-gzqjf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie278aad26e4", MAC:"4a:51:eb:8a:92:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:45:18.557361 containerd[2742]: 2025-05-15 00:45:18.555 [INFO][7035] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6" Namespace="calico-apiserver" Pod="calico-apiserver-8c899d4d9-gzqjf" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--gzqjf-eth0" May 15 00:45:18.560092 containerd[2742]: time="2025-05-15T00:45:18.560062286Z" level=info msg="CreateContainer within sandbox \"a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:45:18.566119 containerd[2742]: time="2025-05-15T00:45:18.566088932Z" level=info msg="CreateContainer within sandbox \"a29dbd00ac5768c6bd152d31784636a54042846db4ba1cdbbfb9cbf71c3351aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"780efa7dd593e4c184bf5538f5d526b26da22a325ff5633f71086df297c91bbd\"" May 15 
00:45:18.566460 containerd[2742]: time="2025-05-15T00:45:18.566444307Z" level=info msg="StartContainer for \"780efa7dd593e4c184bf5538f5d526b26da22a325ff5633f71086df297c91bbd\"" May 15 00:45:18.568960 systemd[1]: Started cri-containerd-734921c08a20ccf14cf5a31df73cdf3a522160ae76f4cb5324f75f2681a56e5d.scope - libcontainer container 734921c08a20ccf14cf5a31df73cdf3a522160ae76f4cb5324f75f2681a56e5d. May 15 00:45:18.571581 containerd[2742]: time="2025-05-15T00:45:18.571524275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:45:18.571606 containerd[2742]: time="2025-05-15T00:45:18.571581157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:45:18.571606 containerd[2742]: time="2025-05-15T00:45:18.571593838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:18.572169 containerd[2742]: time="2025-05-15T00:45:18.571667641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:18.572517 systemd-networkd[2647]: calie26fb50928a: Link UP May 15 00:45:18.572688 systemd-networkd[2647]: calie26fb50928a: Gained carrier May 15 00:45:18.573025 containerd[2742]: time="2025-05-15T00:45:18.572995895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lhgzs,Uid:d4c8c3bf-81ae-482c-8ebe-086a362f11d9,Namespace:calico-system,Attempt:4,} returns sandbox id \"f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87\"" May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.376 [INFO][6997] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.390 [INFO][6997] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--smfp6-eth0 calico-apiserver-8c899d4d9- calico-apiserver 3c5cf155-23c9-4359-ad93-bccee4c59f03 650 0 2025-05-15 00:45:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8c899d4d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230.1.1-n-3631181341 calico-apiserver-8c899d4d9-smfp6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie26fb50928a [] []}} ContainerID="e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" Namespace="calico-apiserver" Pod="calico-apiserver-8c899d4d9-smfp6" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--smfp6-" May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.390 [INFO][6997] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" Namespace="calico-apiserver" Pod="calico-apiserver-8c899d4d9-smfp6" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--smfp6-eth0" May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.424 [INFO][7154] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" 
HandleID="k8s-pod-network.e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" Workload="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--smfp6-eth0" May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.434 [INFO][7154] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" HandleID="k8s-pod-network.e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" Workload="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--smfp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000502ca0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230.1.1-n-3631181341", "pod":"calico-apiserver-8c899d4d9-smfp6", "timestamp":"2025-05-15 00:45:18.424670943 +0000 UTC"}, Hostname:"ci-4230.1.1-n-3631181341", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.434 [INFO][7154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.548 [INFO][7154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.548 [INFO][7154] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-n-3631181341' May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.549 [INFO][7154] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.552 [INFO][7154] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-n-3631181341" May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.555 [INFO][7154] ipam/ipam.go 489: Trying affinity for 192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.557 [INFO][7154] ipam/ipam.go 155: Attempting to load block cidr=192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.560 [INFO][7154] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.27.0/26 host="ci-4230.1.1-n-3631181341" May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.560 [INFO][7154] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.27.0/26 handle="k8s-pod-network.e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.561 [INFO][7154] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813 May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.564 [INFO][7154] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.27.0/26 handle="k8s-pod-network.e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.569 [INFO][7154] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.27.6/26] block=192.168.27.0/26 handle="k8s-pod-network.e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.580388 containerd[2742]: 
2025-05-15 00:45:18.569 [INFO][7154] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.27.6/26] handle="k8s-pod-network.e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" host="ci-4230.1.1-n-3631181341" May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.569 [INFO][7154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:45:18.580388 containerd[2742]: 2025-05-15 00:45:18.569 [INFO][7154] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.6/26] IPv6=[] ContainerID="e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" HandleID="k8s-pod-network.e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" Workload="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--smfp6-eth0" May 15 00:45:18.580777 containerd[2742]: 2025-05-15 00:45:18.571 [INFO][6997] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" Namespace="calico-apiserver" Pod="calico-apiserver-8c899d4d9-smfp6" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--smfp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--smfp6-eth0", GenerateName:"calico-apiserver-8c899d4d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c5cf155-23c9-4359-ad93-bccee4c59f03", ResourceVersion:"650", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8c899d4d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-3631181341", ContainerID:"", Pod:"calico-apiserver-8c899d4d9-smfp6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie26fb50928a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:45:18.580777 containerd[2742]: 2025-05-15 00:45:18.571 [INFO][6997] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.27.6/32] ContainerID="e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" Namespace="calico-apiserver" Pod="calico-apiserver-8c899d4d9-smfp6" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--smfp6-eth0" May 15 00:45:18.580777 containerd[2742]: 2025-05-15 00:45:18.571 [INFO][6997] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie26fb50928a ContainerID="e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" Namespace="calico-apiserver" Pod="calico-apiserver-8c899d4d9-smfp6" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--smfp6-eth0" May 15 00:45:18.580777 containerd[2742]: 2025-05-15 00:45:18.572 [INFO][6997] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" Namespace="calico-apiserver" Pod="calico-apiserver-8c899d4d9-smfp6" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--smfp6-eth0" May 15 00:45:18.580777 containerd[2742]: 2025-05-15 00:45:18.573 [INFO][6997] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" Namespace="calico-apiserver" Pod="calico-apiserver-8c899d4d9-smfp6" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--smfp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--smfp6-eth0", GenerateName:"calico-apiserver-8c899d4d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c5cf155-23c9-4359-ad93-bccee4c59f03", ResourceVersion:"650", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8c899d4d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-n-3631181341", ContainerID:"e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813", Pod:"calico-apiserver-8c899d4d9-smfp6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie26fb50928a", MAC:"ea:9c:c8:b0:d1:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:45:18.580777 containerd[2742]: 2025-05-15 00:45:18.579 [INFO][6997] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813" Namespace="calico-apiserver" Pod="calico-apiserver-8c899d4d9-smfp6" WorkloadEndpoint="ci--4230.1.1--n--3631181341-k8s-calico--apiserver--8c899d4d9--smfp6-eth0" May 15 00:45:18.581835 systemd[1]: Started cri-containerd-780efa7dd593e4c184bf5538f5d526b26da22a325ff5633f71086df297c91bbd.scope - libcontainer container 780efa7dd593e4c184bf5538f5d526b26da22a325ff5633f71086df297c91bbd. May 15 00:45:18.583132 systemd[1]: Started cri-containerd-ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6.scope - libcontainer container ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6. May 15 00:45:18.587312 containerd[2742]: time="2025-05-15T00:45:18.587283200Z" level=info msg="StartContainer for \"734921c08a20ccf14cf5a31df73cdf3a522160ae76f4cb5324f75f2681a56e5d\" returns successfully" May 15 00:45:18.595294 containerd[2742]: time="2025-05-15T00:45:18.595083999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:45:18.595294 containerd[2742]: time="2025-05-15T00:45:18.595139921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:45:18.595294 containerd[2742]: time="2025-05-15T00:45:18.595151202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:18.595294 containerd[2742]: time="2025-05-15T00:45:18.595226605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:45:18.600064 containerd[2742]: time="2025-05-15T00:45:18.600005601Z" level=info msg="StartContainer for \"780efa7dd593e4c184bf5538f5d526b26da22a325ff5633f71086df297c91bbd\" returns successfully" May 15 00:45:18.604739 systemd[1]: Started cri-containerd-e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813.scope - libcontainer container e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813. May 15 00:45:18.607107 containerd[2742]: time="2025-05-15T00:45:18.607081530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-gzqjf,Uid:261e97f9-6a47-47b9-9d3b-8808cd9cee51,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6\"" May 15 00:45:18.629620 containerd[2742]: time="2025-05-15T00:45:18.629590772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8c899d4d9-smfp6,Uid:3c5cf155-23c9-4359-ad93-bccee4c59f03,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813\"" May 15 00:45:18.935545 systemd[1]: run-netns-cni\x2d3f2ef371\x2dbe2c\x2d7805\x2d8cbd\x2d3f7e0ec9b68a.mount: Deactivated successfully. May 15 00:45:18.935624 systemd[1]: run-netns-cni\x2d51a5e041\x2ddf1e\x2d68a0\x2d349a\x2dec78220dc8f1.mount: Deactivated successfully. May 15 00:45:18.935666 systemd[1]: run-netns-cni\x2d6faf72fc\x2df39c\x2d826d\x2d41f1\x2d26bab35931c0.mount: Deactivated successfully. May 15 00:45:18.935707 systemd[1]: run-netns-cni\x2d3e464738\x2d203b\x2d9df3\x2d861a\x2d6cba01910410.mount: Deactivated successfully. May 15 00:45:18.935748 systemd[1]: run-netns-cni\x2ded349b43\x2d51fd\x2de962\x2d94b3\x2d1a4509071694.mount: Deactivated successfully. May 15 00:45:18.935787 systemd[1]: run-netns-cni\x2d382f81ea\x2db311\x2d3109\x2dc2ee\x2dea445605d4e4.mount: Deactivated successfully. 
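The IPAM exchange above shows Calico's block-affinity allocation: the plugin takes the host-wide IPAM lock, confirms this node's affinity for the 192.168.27.0/26 block, claims the next free address (192.168.27.6 here) under a per-pod handle, then writes the block back and releases the lock. The Go sketch below is a minimal, self-contained illustration of the "next free address in an affine block" step; it uses only the standard library, with a plain mutex standing in for the host-wide lock, and it is not Calico's actual ipam package.

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models a single affine IPAM block (e.g. 192.168.27.0/26) on one host.
// Calico's real implementation persists this state in the datastore; here it
// lives in memory, guarded by a mutex standing in for the host-wide IPAM lock.
type block struct {
	mu    sync.Mutex
	cidr  netip.Prefix
	inUse map[netip.Addr]string // addr -> handle, e.g. "k8s-pod-network.<containerID>"
}

func newBlock(cidr string) *block {
	return &block{cidr: netip.MustParsePrefix(cidr), inUse: map[netip.Addr]string{}}
}

// assign claims the lowest free address in the block for the given handle,
// mirroring the "Attempting to assign 1 addresses from block" step in the log.
func (b *block) assign(handle string) (netip.Addr, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.inUse[a]; !taken {
			b.inUse[a] = handle
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := newBlock("192.168.27.0/26")
	// Pretend .0 through .5 were already handed out to earlier pods on this node.
	for i := 0; i < 6; i++ {
		b.assign(fmt.Sprintf("earlier-pod-%d", i))
	}
	ip, _ := b.assign("k8s-pod-network.e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813")
	fmt.Println(ip) // 192.168.27.6, matching the address assigned in the log
}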
May 15 00:45:19.281568 containerd[2742]: time="2025-05-15T00:45:19.281515822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 15 00:45:19.281568 containerd[2742]: time="2025-05-15T00:45:19.281517503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:19.282315 containerd[2742]: time="2025-05-15T00:45:19.282290132Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:19.284059 containerd[2742]: time="2025-05-15T00:45:19.284035599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:19.284727 containerd[2742]: time="2025-05-15T00:45:19.284702065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 738.247817ms" May 15 00:45:19.284752 containerd[2742]: time="2025-05-15T00:45:19.284729626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 15 00:45:19.285506 containerd[2742]: time="2025-05-15T00:45:19.285487895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 00:45:19.290383 containerd[2742]: time="2025-05-15T00:45:19.290139673Z" level=info msg="CreateContainer within sandbox \"050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 00:45:19.297283 containerd[2742]: time="2025-05-15T00:45:19.297252346Z" level=info msg="CreateContainer within sandbox \"050f1dd7d44f69451d4f992ed02c7f8ca0dea350dd4aba08bc46f30f3f89e05c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"da6daeaa81822df4c71c66ac81c6b674d23b293e5876410ce1722d34b71f30ad\"" May 15 00:45:19.297623 containerd[2742]: time="2025-05-15T00:45:19.297604000Z" level=info msg="StartContainer for \"da6daeaa81822df4c71c66ac81c6b674d23b293e5876410ce1722d34b71f30ad\"" May 15 00:45:19.327102 systemd[1]: Started cri-containerd-da6daeaa81822df4c71c66ac81c6b674d23b293e5876410ce1722d34b71f30ad.scope - libcontainer container da6daeaa81822df4c71c66ac81c6b674d23b293e5876410ce1722d34b71f30ad. 
May 15 00:45:19.352475 containerd[2742]: time="2025-05-15T00:45:19.352442705Z" level=info msg="StartContainer for \"da6daeaa81822df4c71c66ac81c6b674d23b293e5876410ce1722d34b71f30ad\" returns successfully" May 15 00:45:19.374337 kubelet[4479]: I0515 00:45:19.374289 4479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-596b57fb45-8rtv2" podStartSLOduration=8.63511753 podStartE2EDuration="9.374273262s" podCreationTimestamp="2025-05-15 00:45:10 +0000 UTC" firstStartedPulling="2025-05-15 00:45:18.546217919 +0000 UTC m=+29.332683281" lastFinishedPulling="2025-05-15 00:45:19.285373651 +0000 UTC m=+30.071839013" observedRunningTime="2025-05-15 00:45:19.37368732 +0000 UTC m=+30.160152682" watchObservedRunningTime="2025-05-15 00:45:19.374273262 +0000 UTC m=+30.160738624" May 15 00:45:19.376312 kubelet[4479]: I0515 00:45:19.376295 4479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:45:19.382643 kubelet[4479]: I0515 00:45:19.382590 4479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vrcxb" podStartSLOduration=15.382572061 podStartE2EDuration="15.382572061s" podCreationTimestamp="2025-05-15 00:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:45:19.381998879 +0000 UTC m=+30.168464241" watchObservedRunningTime="2025-05-15 00:45:19.382572061 +0000 UTC m=+30.169037423" May 15 00:45:19.626087 systemd-networkd[2647]: cali482e590df80: Gained IPv6LL May 15 00:45:19.690261 containerd[2742]: time="2025-05-15T00:45:19.690226708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:19.690438 containerd[2742]: time="2025-05-15T00:45:19.690397995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 15 00:45:19.691564 containerd[2742]: time="2025-05-15T00:45:19.691529838Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:19.693103 containerd[2742]: time="2025-05-15T00:45:19.693076538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:19.693788 containerd[2742]: time="2025-05-15T00:45:19.693763364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 408.251308ms" May 15 00:45:19.693815 containerd[2742]: time="2025-05-15T00:45:19.693791325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 15 00:45:19.694555 containerd[2742]: time="2025-05-15T00:45:19.694538914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 00:45:19.695422 containerd[2742]: time="2025-05-15T00:45:19.695400467Z" level=info msg="CreateContainer within sandbox 
\"f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 00:45:19.714082 containerd[2742]: time="2025-05-15T00:45:19.714051542Z" level=info msg="CreateContainer within sandbox \"f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9d5f85dbc53087583a40023c22cb4a996cc062d808125f6ba213971134c40907\"" May 15 00:45:19.714446 containerd[2742]: time="2025-05-15T00:45:19.714417597Z" level=info msg="StartContainer for \"9d5f85dbc53087583a40023c22cb4a996cc062d808125f6ba213971134c40907\"" May 15 00:45:19.740157 systemd[1]: Started cri-containerd-9d5f85dbc53087583a40023c22cb4a996cc062d808125f6ba213971134c40907.scope - libcontainer container 9d5f85dbc53087583a40023c22cb4a996cc062d808125f6ba213971134c40907. May 15 00:45:19.761468 containerd[2742]: time="2025-05-15T00:45:19.761434881Z" level=info msg="StartContainer for \"9d5f85dbc53087583a40023c22cb4a996cc062d808125f6ba213971134c40907\" returns successfully" May 15 00:45:19.818167 systemd-networkd[2647]: cali2922153b683: Gained IPv6LL May 15 00:45:20.074123 systemd-networkd[2647]: cali1f96e695249: Gained IPv6LL May 15 00:45:20.138129 systemd-networkd[2647]: calie26fb50928a: Gained IPv6LL May 15 00:45:20.202072 systemd-networkd[2647]: cali66691eff83d: Gained IPv6LL May 15 00:45:20.266072 systemd-networkd[2647]: calie278aad26e4: Gained IPv6LL May 15 00:45:20.379464 kubelet[4479]: I0515 00:45:20.379424 4479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:45:20.389010 kubelet[4479]: I0515 00:45:20.388891 4479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nzhwf" podStartSLOduration=16.388872871 podStartE2EDuration="16.388872871s" podCreationTimestamp="2025-05-15 00:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:45:19.389671653 +0000 UTC m=+30.176137015" watchObservedRunningTime="2025-05-15 00:45:20.388872871 +0000 UTC m=+31.175338233" May 15 00:45:20.656193 containerd[2742]: time="2025-05-15T00:45:20.656082045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 15 00:45:20.656193 containerd[2742]: time="2025-05-15T00:45:20.656086085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:20.656914 containerd[2742]: time="2025-05-15T00:45:20.656881153Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:20.658780 containerd[2742]: time="2025-05-15T00:45:20.658729860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:20.659534 containerd[2742]: time="2025-05-15T00:45:20.659452006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size 
\"41616801\" in 964.890052ms" May 15 00:45:20.659534 containerd[2742]: time="2025-05-15T00:45:20.659484327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 00:45:20.660266 containerd[2742]: time="2025-05-15T00:45:20.660242114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 00:45:20.661296 containerd[2742]: time="2025-05-15T00:45:20.661267871Z" level=info msg="CreateContainer within sandbox \"ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 00:45:20.666023 containerd[2742]: time="2025-05-15T00:45:20.665996921Z" level=info msg="CreateContainer within sandbox \"ee869821746723c1df664676c14f1d38f370e60cd254e57512c2f2e10e0f2ba6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"83834451244fe3326603eb287c73a25dc1068325bf8ab19c6332847db6e54f8e\"" May 15 00:45:20.666376 containerd[2742]: time="2025-05-15T00:45:20.666358414Z" level=info msg="StartContainer for \"83834451244fe3326603eb287c73a25dc1068325bf8ab19c6332847db6e54f8e\"" May 15 00:45:20.695096 systemd[1]: Started cri-containerd-83834451244fe3326603eb287c73a25dc1068325bf8ab19c6332847db6e54f8e.scope - libcontainer container 83834451244fe3326603eb287c73a25dc1068325bf8ab19c6332847db6e54f8e. May 15 00:45:20.719793 containerd[2742]: time="2025-05-15T00:45:20.719764696Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:20.719835 containerd[2742]: time="2025-05-15T00:45:20.719803017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 15 00:45:20.719991 containerd[2742]: time="2025-05-15T00:45:20.719966663Z" level=info msg="StartContainer for \"83834451244fe3326603eb287c73a25dc1068325bf8ab19c6332847db6e54f8e\" returns successfully" May 15 00:45:20.722228 containerd[2742]: time="2025-05-15T00:45:20.722203744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 61.929068ms" May 15 00:45:20.722260 containerd[2742]: time="2025-05-15T00:45:20.722230305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 00:45:20.722879 containerd[2742]: time="2025-05-15T00:45:20.722862527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 00:45:20.723785 containerd[2742]: time="2025-05-15T00:45:20.723760960Z" level=info msg="CreateContainer within sandbox \"e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 00:45:20.728665 containerd[2742]: time="2025-05-15T00:45:20.728636015Z" level=info msg="CreateContainer within sandbox \"e48aa3c39736eb0f32033be5f1e9696058cab55802f6498dbfa848c523500813\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"628639a7e0eb90f4e4eb62c9b97e39dcbe9dcb7f9782ad4762ebf21d856bb5de\"" May 15 
00:45:20.728990 containerd[2742]: time="2025-05-15T00:45:20.728961547Z" level=info msg="StartContainer for \"628639a7e0eb90f4e4eb62c9b97e39dcbe9dcb7f9782ad4762ebf21d856bb5de\"" May 15 00:45:20.756103 systemd[1]: Started cri-containerd-628639a7e0eb90f4e4eb62c9b97e39dcbe9dcb7f9782ad4762ebf21d856bb5de.scope - libcontainer container 628639a7e0eb90f4e4eb62c9b97e39dcbe9dcb7f9782ad4762ebf21d856bb5de. May 15 00:45:20.780219 containerd[2742]: time="2025-05-15T00:45:20.780187430Z" level=info msg="StartContainer for \"628639a7e0eb90f4e4eb62c9b97e39dcbe9dcb7f9782ad4762ebf21d856bb5de\" returns successfully" May 15 00:45:21.225339 containerd[2742]: time="2025-05-15T00:45:21.225279300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 15 00:45:21.225511 containerd[2742]: time="2025-05-15T00:45:21.225296381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:21.228180 containerd[2742]: time="2025-05-15T00:45:21.228150037Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:21.228895 containerd[2742]: time="2025-05-15T00:45:21.228865421Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 505.978333ms" May 15 00:45:21.228895 containerd[2742]: time="2025-05-15T00:45:21.228897982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 15 00:45:21.229713 containerd[2742]: time="2025-05-15T00:45:21.229390479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:45:21.230645 containerd[2742]: time="2025-05-15T00:45:21.230625120Z" level=info msg="CreateContainer within sandbox \"f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 15 00:45:21.236252 containerd[2742]: time="2025-05-15T00:45:21.236228989Z" level=info msg="CreateContainer within sandbox \"f0e09dd2add5710acd577fe25a7e903b0d3f79b0f922a90a94a54923a85bcf87\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a4ea0cb77af00c7d3257b773183645b77bf7ab433af9596aa1d02c0279c217de\"" May 15 00:45:21.236571 containerd[2742]: time="2025-05-15T00:45:21.236549040Z" level=info msg="StartContainer for \"a4ea0cb77af00c7d3257b773183645b77bf7ab433af9596aa1d02c0279c217de\"" May 15 00:45:21.270095 systemd[1]: Started cri-containerd-a4ea0cb77af00c7d3257b773183645b77bf7ab433af9596aa1d02c0279c217de.scope - libcontainer container a4ea0cb77af00c7d3257b773183645b77bf7ab433af9596aa1d02c0279c217de. 
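The kubelet lines tagged pod_startup_latency_tracker (for calico-kube-controllers and coredns above, and for the apiserver and CSI pods further on) report two figures per pod: podStartE2EDuration, the wall-clock time from podCreationTimestamp to the observed running time, and podStartSLOduration, which matches that same interval minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). The snippet below simply redoes that arithmetic with the calico-kube-controllers timestamps copied from the log and reproduces both reported values; it is a worked check, not kubelet code.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the "2025-05-15 00:45:10 +0000 UTC" form used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps from the calico-kube-controllers pod_startup_latency_tracker entry.
	created := parse("2025-05-15 00:45:10 +0000 UTC")
	firstPull := parse("2025-05-15 00:45:18.546217919 +0000 UTC")
	lastPull := parse("2025-05-15 00:45:19.285373651 +0000 UTC")
	running := parse("2025-05-15 00:45:19.374273262 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // same interval minus image-pull time

	fmt.Println("E2E:", e2e) // 9.374273262s, as reported
	fmt.Println("SLO:", slo) // 8.63511753s, as reported
}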
May 15 00:45:21.290116 containerd[2742]: time="2025-05-15T00:45:21.290088526Z" level=info msg="StartContainer for \"a4ea0cb77af00c7d3257b773183645b77bf7ab433af9596aa1d02c0279c217de\" returns successfully" May 15 00:45:21.328340 kubelet[4479]: I0515 00:45:21.328316 4479 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 15 00:45:21.328340 kubelet[4479]: I0515 00:45:21.328343 4479 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 15 00:45:21.396914 kubelet[4479]: I0515 00:45:21.396871 4479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8c899d4d9-smfp6" podStartSLOduration=9.304737898 podStartE2EDuration="11.396856287s" podCreationTimestamp="2025-05-15 00:45:10 +0000 UTC" firstStartedPulling="2025-05-15 00:45:18.630594413 +0000 UTC m=+29.417059735" lastFinishedPulling="2025-05-15 00:45:20.722712762 +0000 UTC m=+31.509178124" observedRunningTime="2025-05-15 00:45:21.39664064 +0000 UTC m=+32.183105962" watchObservedRunningTime="2025-05-15 00:45:21.396856287 +0000 UTC m=+32.183321649" May 15 00:45:21.397200 kubelet[4479]: I0515 00:45:21.397129 4479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lhgzs" podStartSLOduration=8.741527186 podStartE2EDuration="11.397122856s" podCreationTimestamp="2025-05-15 00:45:10 +0000 UTC" firstStartedPulling="2025-05-15 00:45:18.573889612 +0000 UTC m=+29.360354974" lastFinishedPulling="2025-05-15 00:45:21.229485282 +0000 UTC m=+32.015950644" observedRunningTime="2025-05-15 00:45:21.39040683 +0000 UTC m=+32.176872152" watchObservedRunningTime="2025-05-15 00:45:21.397122856 +0000 UTC m=+32.183588218" May 15 00:45:21.403463 kubelet[4479]: I0515 00:45:21.403426 4479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8c899d4d9-gzqjf" podStartSLOduration=9.351334569 podStartE2EDuration="11.403412389s" podCreationTimestamp="2025-05-15 00:45:10 +0000 UTC" firstStartedPulling="2025-05-15 00:45:18.608024529 +0000 UTC m=+29.394489891" lastFinishedPulling="2025-05-15 00:45:20.660102349 +0000 UTC m=+31.446567711" observedRunningTime="2025-05-15 00:45:21.403220182 +0000 UTC m=+32.189685544" watchObservedRunningTime="2025-05-15 00:45:21.403412389 +0000 UTC m=+32.189877751" May 15 00:45:22.388843 kubelet[4479]: I0515 00:45:22.388811 4479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:45:29.818501 kubelet[4479]: I0515 00:45:29.818455 4479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:45:30.023898 kubelet[4479]: I0515 00:45:30.023866 4479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:45:30.601008 kernel: bpftool[8656]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 15 00:45:30.758573 systemd-networkd[2647]: vxlan.calico: Link UP May 15 00:45:30.758578 systemd-networkd[2647]: vxlan.calico: Gained carrier May 15 00:45:32.298090 systemd-networkd[2647]: vxlan.calico: Gained IPv6LL May 15 00:45:41.682075 kubelet[4479]: I0515 00:45:41.681985 4479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:45:48.547106 kubelet[4479]: I0515 00:45:48.547008 4479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 
00:45:49.275168 containerd[2742]: time="2025-05-15T00:45:49.275137058Z" level=info msg="StopPodSandbox for \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\"" May 15 00:45:49.275465 containerd[2742]: time="2025-05-15T00:45:49.275239778Z" level=info msg="TearDown network for sandbox \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\" successfully" May 15 00:45:49.275465 containerd[2742]: time="2025-05-15T00:45:49.275250178Z" level=info msg="StopPodSandbox for \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\" returns successfully" May 15 00:45:49.275584 containerd[2742]: time="2025-05-15T00:45:49.275558300Z" level=info msg="RemovePodSandbox for \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\"" May 15 00:45:49.275606 containerd[2742]: time="2025-05-15T00:45:49.275592300Z" level=info msg="Forcibly stopping sandbox \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\"" May 15 00:45:49.275667 containerd[2742]: time="2025-05-15T00:45:49.275657461Z" level=info msg="TearDown network for sandbox \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\" successfully" May 15 00:45:49.277185 containerd[2742]: time="2025-05-15T00:45:49.277165069Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:45:49.277227 containerd[2742]: time="2025-05-15T00:45:49.277216549Z" level=info msg="RemovePodSandbox \"6a46bdf3b6328eb42ffd7a69e6a280c0d8c593d3ad715b2a12e5d54f363b4a45\" returns successfully" May 15 00:45:49.277478 containerd[2742]: time="2025-05-15T00:45:49.277460431Z" level=info msg="StopPodSandbox for \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\"" May 15 00:45:49.277550 containerd[2742]: time="2025-05-15T00:45:49.277539031Z" level=info msg="TearDown network for sandbox \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\" successfully" May 15 00:45:49.277572 containerd[2742]: time="2025-05-15T00:45:49.277550591Z" level=info msg="StopPodSandbox for \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\" returns successfully" May 15 00:45:49.277743 containerd[2742]: time="2025-05-15T00:45:49.277729232Z" level=info msg="RemovePodSandbox for \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\"" May 15 00:45:49.277770 containerd[2742]: time="2025-05-15T00:45:49.277750192Z" level=info msg="Forcibly stopping sandbox \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\"" May 15 00:45:49.277811 containerd[2742]: time="2025-05-15T00:45:49.277802152Z" level=info msg="TearDown network for sandbox \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\" successfully" May 15 00:45:49.279146 containerd[2742]: time="2025-05-15T00:45:49.279124240Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:45:49.279177 containerd[2742]: time="2025-05-15T00:45:49.279166800Z" level=info msg="RemovePodSandbox \"844799c848ddb75ffd035f8ecc112096a30f6cd2c55c728b7dfb087bf50f047d\" returns successfully" May 15 00:45:49.279365 containerd[2742]: time="2025-05-15T00:45:49.279352721Z" level=info msg="StopPodSandbox for \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\"" May 15 00:45:49.279424 containerd[2742]: time="2025-05-15T00:45:49.279414721Z" level=info msg="TearDown network for sandbox \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\" successfully" May 15 00:45:49.279449 containerd[2742]: time="2025-05-15T00:45:49.279424721Z" level=info msg="StopPodSandbox for \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\" returns successfully" May 15 00:45:49.279621 containerd[2742]: time="2025-05-15T00:45:49.279604602Z" level=info msg="RemovePodSandbox for \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\"" May 15 00:45:49.279643 containerd[2742]: time="2025-05-15T00:45:49.279627202Z" level=info msg="Forcibly stopping sandbox \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\"" May 15 00:45:49.279703 containerd[2742]: time="2025-05-15T00:45:49.279693163Z" level=info msg="TearDown network for sandbox \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\" successfully" May 15 00:45:49.280998 containerd[2742]: time="2025-05-15T00:45:49.280971890Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:45:49.281044 containerd[2742]: time="2025-05-15T00:45:49.281019490Z" level=info msg="RemovePodSandbox \"7f0b7c2a7f53afe71dcb6df3d57d30fa392c9d75def3ab64b2051feef4a136b8\" returns successfully" May 15 00:45:49.281251 containerd[2742]: time="2025-05-15T00:45:49.281236731Z" level=info msg="StopPodSandbox for \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\"" May 15 00:45:49.281321 containerd[2742]: time="2025-05-15T00:45:49.281310892Z" level=info msg="TearDown network for sandbox \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\" successfully" May 15 00:45:49.281343 containerd[2742]: time="2025-05-15T00:45:49.281322052Z" level=info msg="StopPodSandbox for \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\" returns successfully" May 15 00:45:49.281506 containerd[2742]: time="2025-05-15T00:45:49.281492973Z" level=info msg="RemovePodSandbox for \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\"" May 15 00:45:49.281527 containerd[2742]: time="2025-05-15T00:45:49.281512133Z" level=info msg="Forcibly stopping sandbox \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\"" May 15 00:45:49.281588 containerd[2742]: time="2025-05-15T00:45:49.281578173Z" level=info msg="TearDown network for sandbox \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\" successfully" May 15 00:45:49.282860 containerd[2742]: time="2025-05-15T00:45:49.282840180Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:45:49.282892 containerd[2742]: time="2025-05-15T00:45:49.282882781Z" level=info msg="RemovePodSandbox \"f4fafcc2e03678fcd091a01ce281b13b570673b2e03fbc914a29d7234888cd83\" returns successfully" May 15 00:45:49.283071 containerd[2742]: time="2025-05-15T00:45:49.283059061Z" level=info msg="StopPodSandbox for \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\"" May 15 00:45:49.283131 containerd[2742]: time="2025-05-15T00:45:49.283121782Z" level=info msg="TearDown network for sandbox \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\" successfully" May 15 00:45:49.283153 containerd[2742]: time="2025-05-15T00:45:49.283131702Z" level=info msg="StopPodSandbox for \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\" returns successfully" May 15 00:45:49.283330 containerd[2742]: time="2025-05-15T00:45:49.283315863Z" level=info msg="RemovePodSandbox for \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\"" May 15 00:45:49.283354 containerd[2742]: time="2025-05-15T00:45:49.283335503Z" level=info msg="Forcibly stopping sandbox \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\"" May 15 00:45:49.283406 containerd[2742]: time="2025-05-15T00:45:49.283395783Z" level=info msg="TearDown network for sandbox \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\" successfully" May 15 00:45:49.284683 containerd[2742]: time="2025-05-15T00:45:49.284663030Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:45:49.284717 containerd[2742]: time="2025-05-15T00:45:49.284706951Z" level=info msg="RemovePodSandbox \"eaedd48b44f386ae86ad7d510a4e82c76588335e346e5f046a7e3921144b79af\" returns successfully" May 15 00:45:49.284931 containerd[2742]: time="2025-05-15T00:45:49.284916232Z" level=info msg="StopPodSandbox for \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\"" May 15 00:45:49.285013 containerd[2742]: time="2025-05-15T00:45:49.285002032Z" level=info msg="TearDown network for sandbox \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\" successfully" May 15 00:45:49.285032 containerd[2742]: time="2025-05-15T00:45:49.285013512Z" level=info msg="StopPodSandbox for \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\" returns successfully" May 15 00:45:49.285199 containerd[2742]: time="2025-05-15T00:45:49.285184073Z" level=info msg="RemovePodSandbox for \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\"" May 15 00:45:49.285221 containerd[2742]: time="2025-05-15T00:45:49.285204273Z" level=info msg="Forcibly stopping sandbox \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\"" May 15 00:45:49.285271 containerd[2742]: time="2025-05-15T00:45:49.285262114Z" level=info msg="TearDown network for sandbox \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\" successfully" May 15 00:45:49.286517 containerd[2742]: time="2025-05-15T00:45:49.286497881Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:45:49.286549 containerd[2742]: time="2025-05-15T00:45:49.286539361Z" level=info msg="RemovePodSandbox \"d95a031375e5efff54170250299d40453c0eceb2d4a8fd3e6f6b70153cf7ef79\" returns successfully" May 15 00:45:49.286781 containerd[2742]: time="2025-05-15T00:45:49.286768482Z" level=info msg="StopPodSandbox for \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\"" May 15 00:45:49.286854 containerd[2742]: time="2025-05-15T00:45:49.286843322Z" level=info msg="TearDown network for sandbox \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\" successfully" May 15 00:45:49.286876 containerd[2742]: time="2025-05-15T00:45:49.286854403Z" level=info msg="StopPodSandbox for \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\" returns successfully" May 15 00:45:49.287099 containerd[2742]: time="2025-05-15T00:45:49.287077084Z" level=info msg="RemovePodSandbox for \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\"" May 15 00:45:49.287127 containerd[2742]: time="2025-05-15T00:45:49.287102204Z" level=info msg="Forcibly stopping sandbox \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\"" May 15 00:45:49.287181 containerd[2742]: time="2025-05-15T00:45:49.287168804Z" level=info msg="TearDown network for sandbox \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\" successfully" May 15 00:45:49.288517 containerd[2742]: time="2025-05-15T00:45:49.288492932Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:45:49.288587 containerd[2742]: time="2025-05-15T00:45:49.288538012Z" level=info msg="RemovePodSandbox \"50815db523ba052b0ba38b1f7f5794f326c9ca74d53c2aa7b69d11d63edea035\" returns successfully" May 15 00:45:49.288809 containerd[2742]: time="2025-05-15T00:45:49.288792613Z" level=info msg="StopPodSandbox for \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\"" May 15 00:45:49.288867 containerd[2742]: time="2025-05-15T00:45:49.288856054Z" level=info msg="TearDown network for sandbox \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\" successfully" May 15 00:45:49.288867 containerd[2742]: time="2025-05-15T00:45:49.288865014Z" level=info msg="StopPodSandbox for \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\" returns successfully" May 15 00:45:49.289063 containerd[2742]: time="2025-05-15T00:45:49.289046695Z" level=info msg="RemovePodSandbox for \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\"" May 15 00:45:49.289085 containerd[2742]: time="2025-05-15T00:45:49.289067335Z" level=info msg="Forcibly stopping sandbox \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\"" May 15 00:45:49.289142 containerd[2742]: time="2025-05-15T00:45:49.289129495Z" level=info msg="TearDown network for sandbox \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\" successfully" May 15 00:45:49.302571 containerd[2742]: time="2025-05-15T00:45:49.302543049Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:45:49.302619 containerd[2742]: time="2025-05-15T00:45:49.302589810Z" level=info msg="RemovePodSandbox \"3b1be38ffdcd631013158fe1aa9d73205f2c9093e873a2d284c4bf0e03344d05\" returns successfully" May 15 00:45:49.302819 containerd[2742]: time="2025-05-15T00:45:49.302799611Z" level=info msg="StopPodSandbox for \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\"" May 15 00:45:49.302891 containerd[2742]: time="2025-05-15T00:45:49.302878891Z" level=info msg="TearDown network for sandbox \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\" successfully" May 15 00:45:49.302891 containerd[2742]: time="2025-05-15T00:45:49.302889051Z" level=info msg="StopPodSandbox for \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\" returns successfully" May 15 00:45:49.303136 containerd[2742]: time="2025-05-15T00:45:49.303118413Z" level=info msg="RemovePodSandbox for \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\"" May 15 00:45:49.303165 containerd[2742]: time="2025-05-15T00:45:49.303139253Z" level=info msg="Forcibly stopping sandbox \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\"" May 15 00:45:49.303211 containerd[2742]: time="2025-05-15T00:45:49.303197973Z" level=info msg="TearDown network for sandbox \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\" successfully" May 15 00:45:49.304489 containerd[2742]: time="2025-05-15T00:45:49.304464140Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:45:49.304521 containerd[2742]: time="2025-05-15T00:45:49.304509180Z" level=info msg="RemovePodSandbox \"0fce0119c3f9429803f91957b67a10b2d0b2c830bc2b19a8820f9661d89e69a3\" returns successfully" May 15 00:45:49.304722 containerd[2742]: time="2025-05-15T00:45:49.304705861Z" level=info msg="StopPodSandbox for \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\"" May 15 00:45:49.304794 containerd[2742]: time="2025-05-15T00:45:49.304781462Z" level=info msg="TearDown network for sandbox \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\" successfully" May 15 00:45:49.304794 containerd[2742]: time="2025-05-15T00:45:49.304791822Z" level=info msg="StopPodSandbox for \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\" returns successfully" May 15 00:45:49.304982 containerd[2742]: time="2025-05-15T00:45:49.304965823Z" level=info msg="RemovePodSandbox for \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\"" May 15 00:45:49.305016 containerd[2742]: time="2025-05-15T00:45:49.304984423Z" level=info msg="Forcibly stopping sandbox \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\"" May 15 00:45:49.305067 containerd[2742]: time="2025-05-15T00:45:49.305054943Z" level=info msg="TearDown network for sandbox \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\" successfully" May 15 00:45:49.306312 containerd[2742]: time="2025-05-15T00:45:49.306288470Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:45:49.306341 containerd[2742]: time="2025-05-15T00:45:49.306329270Z" level=info msg="RemovePodSandbox \"72c303f90d5db1dd84e9b19b5c453e8009c8dd55b3bab78b8b65ac7fa1239059\" returns successfully" May 15 00:45:49.306526 containerd[2742]: time="2025-05-15T00:45:49.306509551Z" level=info msg="StopPodSandbox for \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\"" May 15 00:45:49.306585 containerd[2742]: time="2025-05-15T00:45:49.306572632Z" level=info msg="TearDown network for sandbox \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\" successfully" May 15 00:45:49.306585 containerd[2742]: time="2025-05-15T00:45:49.306582512Z" level=info msg="StopPodSandbox for \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\" returns successfully" May 15 00:45:49.306773 containerd[2742]: time="2025-05-15T00:45:49.306756273Z" level=info msg="RemovePodSandbox for \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\"" May 15 00:45:49.306797 containerd[2742]: time="2025-05-15T00:45:49.306775993Z" level=info msg="Forcibly stopping sandbox \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\"" May 15 00:45:49.306847 containerd[2742]: time="2025-05-15T00:45:49.306834793Z" level=info msg="TearDown network for sandbox \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\" successfully" May 15 00:45:49.308102 containerd[2742]: time="2025-05-15T00:45:49.308079080Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:45:49.308128 containerd[2742]: time="2025-05-15T00:45:49.308122000Z" level=info msg="RemovePodSandbox \"54f085cdebd6782586198a79077bd5e898b5109f8cb836c0a124c79f60c4879b\" returns successfully" May 15 00:45:49.308340 containerd[2742]: time="2025-05-15T00:45:49.308320241Z" level=info msg="StopPodSandbox for \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\"" May 15 00:45:49.308405 containerd[2742]: time="2025-05-15T00:45:49.308392842Z" level=info msg="TearDown network for sandbox \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\" successfully" May 15 00:45:49.308405 containerd[2742]: time="2025-05-15T00:45:49.308402882Z" level=info msg="StopPodSandbox for \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\" returns successfully" May 15 00:45:49.308588 containerd[2742]: time="2025-05-15T00:45:49.308572243Z" level=info msg="RemovePodSandbox for \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\"" May 15 00:45:49.308615 containerd[2742]: time="2025-05-15T00:45:49.308592843Z" level=info msg="Forcibly stopping sandbox \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\"" May 15 00:45:49.308656 containerd[2742]: time="2025-05-15T00:45:49.308644963Z" level=info msg="TearDown network for sandbox \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\" successfully" May 15 00:45:49.309961 containerd[2742]: time="2025-05-15T00:45:49.309939250Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:45:49.310001 containerd[2742]: time="2025-05-15T00:45:49.309985171Z" level=info msg="RemovePodSandbox \"5a631848f64e2907df3492d0c9944d682d7428751e164887d2853322f835243c\" returns successfully" May 15 00:45:49.310209 containerd[2742]: time="2025-05-15T00:45:49.310194812Z" level=info msg="StopPodSandbox for \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\"" May 15 00:45:49.310282 containerd[2742]: time="2025-05-15T00:45:49.310270972Z" level=info msg="TearDown network for sandbox \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\" successfully" May 15 00:45:49.310308 containerd[2742]: time="2025-05-15T00:45:49.310282132Z" level=info msg="StopPodSandbox for \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\" returns successfully" May 15 00:45:49.310481 containerd[2742]: time="2025-05-15T00:45:49.310466133Z" level=info msg="RemovePodSandbox for \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\"" May 15 00:45:49.310503 containerd[2742]: time="2025-05-15T00:45:49.310486293Z" level=info msg="Forcibly stopping sandbox \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\"" May 15 00:45:49.310557 containerd[2742]: time="2025-05-15T00:45:49.310547414Z" level=info msg="TearDown network for sandbox \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\" successfully" May 15 00:45:49.311814 containerd[2742]: time="2025-05-15T00:45:49.311791821Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:45:49.311859 containerd[2742]: time="2025-05-15T00:45:49.311835221Z" level=info msg="RemovePodSandbox \"8569a80021e912c60d5fb9151b75e24a1a43faa25724b7430a62874467169141\" returns successfully" May 15 00:45:49.312057 containerd[2742]: time="2025-05-15T00:45:49.312042982Z" level=info msg="StopPodSandbox for \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\"" May 15 00:45:49.312121 containerd[2742]: time="2025-05-15T00:45:49.312111742Z" level=info msg="TearDown network for sandbox \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\" successfully" May 15 00:45:49.312144 containerd[2742]: time="2025-05-15T00:45:49.312120982Z" level=info msg="StopPodSandbox for \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\" returns successfully" May 15 00:45:49.312373 containerd[2742]: time="2025-05-15T00:45:49.312352824Z" level=info msg="RemovePodSandbox for \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\"" May 15 00:45:49.312395 containerd[2742]: time="2025-05-15T00:45:49.312382264Z" level=info msg="Forcibly stopping sandbox \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\"" May 15 00:45:49.312465 containerd[2742]: time="2025-05-15T00:45:49.312454824Z" level=info msg="TearDown network for sandbox \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\" successfully" May 15 00:45:49.313808 containerd[2742]: time="2025-05-15T00:45:49.313781872Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:45:49.313843 containerd[2742]: time="2025-05-15T00:45:49.313829152Z" level=info msg="RemovePodSandbox \"a136f66c041125310c5791207e0630fa16b0ccbd969efde218028c9b394a893c\" returns successfully" May 15 00:45:49.314067 containerd[2742]: time="2025-05-15T00:45:49.314050553Z" level=info msg="StopPodSandbox for \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\"" May 15 00:45:49.314130 containerd[2742]: time="2025-05-15T00:45:49.314117233Z" level=info msg="TearDown network for sandbox \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\" successfully" May 15 00:45:49.314130 containerd[2742]: time="2025-05-15T00:45:49.314127594Z" level=info msg="StopPodSandbox for \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\" returns successfully" May 15 00:45:49.314348 containerd[2742]: time="2025-05-15T00:45:49.314331635Z" level=info msg="RemovePodSandbox for \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\"" May 15 00:45:49.314372 containerd[2742]: time="2025-05-15T00:45:49.314351875Z" level=info msg="Forcibly stopping sandbox \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\"" May 15 00:45:49.314425 containerd[2742]: time="2025-05-15T00:45:49.314413235Z" level=info msg="TearDown network for sandbox \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\" successfully" May 15 00:45:49.315676 containerd[2742]: time="2025-05-15T00:45:49.315651442Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:45:49.315712 containerd[2742]: time="2025-05-15T00:45:49.315699402Z" level=info msg="RemovePodSandbox \"e1dcdcf456e21dd356a6e4200c4375b00da9045e680c9c561a5a30974a0dc038\" returns successfully" May 15 00:45:49.315925 containerd[2742]: time="2025-05-15T00:45:49.315909843Z" level=info msg="StopPodSandbox for \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\"" May 15 00:45:49.316003 containerd[2742]: time="2025-05-15T00:45:49.315991724Z" level=info msg="TearDown network for sandbox \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\" successfully" May 15 00:45:49.316022 containerd[2742]: time="2025-05-15T00:45:49.316002644Z" level=info msg="StopPodSandbox for \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\" returns successfully" May 15 00:45:49.316179 containerd[2742]: time="2025-05-15T00:45:49.316167245Z" level=info msg="RemovePodSandbox for \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\"" May 15 00:45:49.316204 containerd[2742]: time="2025-05-15T00:45:49.316182805Z" level=info msg="Forcibly stopping sandbox \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\"" May 15 00:45:49.316246 containerd[2742]: time="2025-05-15T00:45:49.316237005Z" level=info msg="TearDown network for sandbox \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\" successfully" May 15 00:45:49.317506 containerd[2742]: time="2025-05-15T00:45:49.317484052Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:45:49.317546 containerd[2742]: time="2025-05-15T00:45:49.317525412Z" level=info msg="RemovePodSandbox \"d084fb895192a5045258fdda8c5273615e092dc79ef82c99cf0544b05a4ded28\" returns successfully" May 15 00:45:49.317723 containerd[2742]: time="2025-05-15T00:45:49.317703573Z" level=info msg="StopPodSandbox for \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\"" May 15 00:45:49.317802 containerd[2742]: time="2025-05-15T00:45:49.317789054Z" level=info msg="TearDown network for sandbox \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\" successfully" May 15 00:45:49.317802 containerd[2742]: time="2025-05-15T00:45:49.317800454Z" level=info msg="StopPodSandbox for \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\" returns successfully" May 15 00:45:49.317972 containerd[2742]: time="2025-05-15T00:45:49.317956855Z" level=info msg="RemovePodSandbox for \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\"" May 15 00:45:49.317995 containerd[2742]: time="2025-05-15T00:45:49.317977095Z" level=info msg="Forcibly stopping sandbox \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\"" May 15 00:45:49.318054 containerd[2742]: time="2025-05-15T00:45:49.318043615Z" level=info msg="TearDown network for sandbox \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\" successfully" May 15 00:45:49.319352 containerd[2742]: time="2025-05-15T00:45:49.319328422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:45:49.319382 containerd[2742]: time="2025-05-15T00:45:49.319373983Z" level=info msg="RemovePodSandbox \"0a83d9d484576d9844bcc82eafa65c2799ad439480b7d80198fb65d09db951da\" returns successfully" May 15 00:45:49.319593 containerd[2742]: time="2025-05-15T00:45:49.319577744Z" level=info msg="StopPodSandbox for \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\"" May 15 00:45:49.319658 containerd[2742]: time="2025-05-15T00:45:49.319645784Z" level=info msg="TearDown network for sandbox \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\" successfully" May 15 00:45:49.319658 containerd[2742]: time="2025-05-15T00:45:49.319656184Z" level=info msg="StopPodSandbox for \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\" returns successfully" May 15 00:45:49.319842 containerd[2742]: time="2025-05-15T00:45:49.319826225Z" level=info msg="RemovePodSandbox for \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\"" May 15 00:45:49.319868 containerd[2742]: time="2025-05-15T00:45:49.319846265Z" level=info msg="Forcibly stopping sandbox \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\"" May 15 00:45:49.319925 containerd[2742]: time="2025-05-15T00:45:49.319912746Z" level=info msg="TearDown network for sandbox \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\" successfully" May 15 00:45:49.321171 containerd[2742]: time="2025-05-15T00:45:49.321144272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:45:49.321220 containerd[2742]: time="2025-05-15T00:45:49.321187433Z" level=info msg="RemovePodSandbox \"60458f703b5250a89da38ecd71fc87d7040b8bdb5486447737a94a993b59a538\" returns successfully" May 15 00:45:49.321397 containerd[2742]: time="2025-05-15T00:45:49.321380594Z" level=info msg="StopPodSandbox for \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\"" May 15 00:45:49.321453 containerd[2742]: time="2025-05-15T00:45:49.321443874Z" level=info msg="TearDown network for sandbox \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\" successfully" May 15 00:45:49.321476 containerd[2742]: time="2025-05-15T00:45:49.321453714Z" level=info msg="StopPodSandbox for \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\" returns successfully" May 15 00:45:49.321668 containerd[2742]: time="2025-05-15T00:45:49.321649875Z" level=info msg="RemovePodSandbox for \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\"" May 15 00:45:49.321689 containerd[2742]: time="2025-05-15T00:45:49.321673675Z" level=info msg="Forcibly stopping sandbox \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\"" May 15 00:45:49.321756 containerd[2742]: time="2025-05-15T00:45:49.321746276Z" level=info msg="TearDown network for sandbox \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\" successfully" May 15 00:45:49.325027 containerd[2742]: time="2025-05-15T00:45:49.324997414Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:45:49.325078 containerd[2742]: time="2025-05-15T00:45:49.325055414Z" level=info msg="RemovePodSandbox \"1b9aaaf3bbfd969701e72e7d4061005251be34cc4ff6594d68fffb8c182d7bbc\" returns successfully" May 15 00:45:49.325322 containerd[2742]: time="2025-05-15T00:45:49.325299455Z" level=info msg="StopPodSandbox for \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\"" May 15 00:45:49.325404 containerd[2742]: time="2025-05-15T00:45:49.325392576Z" level=info msg="TearDown network for sandbox \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\" successfully" May 15 00:45:49.325429 containerd[2742]: time="2025-05-15T00:45:49.325404256Z" level=info msg="StopPodSandbox for \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\" returns successfully" May 15 00:45:49.325615 containerd[2742]: time="2025-05-15T00:45:49.325598257Z" level=info msg="RemovePodSandbox for \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\"" May 15 00:45:49.325635 containerd[2742]: time="2025-05-15T00:45:49.325621577Z" level=info msg="Forcibly stopping sandbox \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\"" May 15 00:45:49.325697 containerd[2742]: time="2025-05-15T00:45:49.325686938Z" level=info msg="TearDown network for sandbox \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\" successfully" May 15 00:45:49.327013 containerd[2742]: time="2025-05-15T00:45:49.326983625Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:45:49.327047 containerd[2742]: time="2025-05-15T00:45:49.327035665Z" level=info msg="RemovePodSandbox \"cc93de283b98479ea68227ba4a0660df5329324d1a421550198b5bf0de9161c1\" returns successfully" May 15 00:45:49.327271 containerd[2742]: time="2025-05-15T00:45:49.327249746Z" level=info msg="StopPodSandbox for \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\"" May 15 00:45:49.327347 containerd[2742]: time="2025-05-15T00:45:49.327336587Z" level=info msg="TearDown network for sandbox \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\" successfully" May 15 00:45:49.327366 containerd[2742]: time="2025-05-15T00:45:49.327348267Z" level=info msg="StopPodSandbox for \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\" returns successfully" May 15 00:45:49.327548 containerd[2742]: time="2025-05-15T00:45:49.327530148Z" level=info msg="RemovePodSandbox for \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\"" May 15 00:45:49.327567 containerd[2742]: time="2025-05-15T00:45:49.327554228Z" level=info msg="Forcibly stopping sandbox \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\"" May 15 00:45:49.327637 containerd[2742]: time="2025-05-15T00:45:49.327626308Z" level=info msg="TearDown network for sandbox \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\" successfully" May 15 00:45:49.328885 containerd[2742]: time="2025-05-15T00:45:49.328857715Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:45:49.328942 containerd[2742]: time="2025-05-15T00:45:49.328900875Z" level=info msg="RemovePodSandbox \"f1b3dd443a8fe16b44a1b8bbfe9267a0e937dc8adf28fb8076ddb7034d32caa3\" returns successfully" May 15 00:45:49.329135 containerd[2742]: time="2025-05-15T00:45:49.329112716Z" level=info msg="StopPodSandbox for \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\"" May 15 00:45:49.329204 containerd[2742]: time="2025-05-15T00:45:49.329187517Z" level=info msg="TearDown network for sandbox \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\" successfully" May 15 00:45:49.329204 containerd[2742]: time="2025-05-15T00:45:49.329198637Z" level=info msg="StopPodSandbox for \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\" returns successfully" May 15 00:45:49.329380 containerd[2742]: time="2025-05-15T00:45:49.329361598Z" level=info msg="RemovePodSandbox for \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\"" May 15 00:45:49.329406 containerd[2742]: time="2025-05-15T00:45:49.329381838Z" level=info msg="Forcibly stopping sandbox \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\"" May 15 00:45:49.329450 containerd[2742]: time="2025-05-15T00:45:49.329438158Z" level=info msg="TearDown network for sandbox \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\" successfully" May 15 00:45:49.330760 containerd[2742]: time="2025-05-15T00:45:49.330736045Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:45:49.330795 containerd[2742]: time="2025-05-15T00:45:49.330783726Z" level=info msg="RemovePodSandbox \"e350aa87adb53dcc992ece155cc0f2c80a06f3e56a3f2d6b4b72c306fd6154d5\" returns successfully" May 15 00:45:49.331007 containerd[2742]: time="2025-05-15T00:45:49.330984367Z" level=info msg="StopPodSandbox for \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\"" May 15 00:45:49.331084 containerd[2742]: time="2025-05-15T00:45:49.331073687Z" level=info msg="TearDown network for sandbox \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\" successfully" May 15 00:45:49.331107 containerd[2742]: time="2025-05-15T00:45:49.331085567Z" level=info msg="StopPodSandbox for \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\" returns successfully" May 15 00:45:49.331292 containerd[2742]: time="2025-05-15T00:45:49.331278928Z" level=info msg="RemovePodSandbox for \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\"" May 15 00:45:49.331313 containerd[2742]: time="2025-05-15T00:45:49.331296809Z" level=info msg="Forcibly stopping sandbox \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\"" May 15 00:45:49.331366 containerd[2742]: time="2025-05-15T00:45:49.331354729Z" level=info msg="TearDown network for sandbox \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\" successfully" May 15 00:45:49.332624 containerd[2742]: time="2025-05-15T00:45:49.332600576Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:45:49.332668 containerd[2742]: time="2025-05-15T00:45:49.332648376Z" level=info msg="RemovePodSandbox \"fccafe2fb0271a036ce005e2d52f81dcf977b220969c5a70a51b7fda07aa7288\" returns successfully" May 15 00:45:49.332855 containerd[2742]: time="2025-05-15T00:45:49.332840097Z" level=info msg="StopPodSandbox for \"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\"" May 15 00:45:49.332927 containerd[2742]: time="2025-05-15T00:45:49.332916058Z" level=info msg="TearDown network for sandbox \"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\" successfully" May 15 00:45:49.332946 containerd[2742]: time="2025-05-15T00:45:49.332927138Z" level=info msg="StopPodSandbox for \"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\" returns successfully" May 15 00:45:49.333115 containerd[2742]: time="2025-05-15T00:45:49.333100379Z" level=info msg="RemovePodSandbox for \"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\"" May 15 00:45:49.333138 containerd[2742]: time="2025-05-15T00:45:49.333120739Z" level=info msg="Forcibly stopping sandbox \"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\"" May 15 00:45:49.333195 containerd[2742]: time="2025-05-15T00:45:49.333185819Z" level=info msg="TearDown network for sandbox \"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\" successfully" May 15 00:45:49.334470 containerd[2742]: time="2025-05-15T00:45:49.334446706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:45:49.334504 containerd[2742]: time="2025-05-15T00:45:49.334492826Z" level=info msg="RemovePodSandbox \"1399a0d99f7adeee1a7c11a6ca1f65fecd35838d4ee18ff29180204e702a5d04\" returns successfully" May 15 00:47:34.930404 systemd[1]: Started sshd@7-147.28.145.22:22-24.229.22.106:36439.service - OpenSSH per-connection server daemon (24.229.22.106:36439). May 15 00:47:36.248368 sshd[9429]: Invalid user default from 24.229.22.106 port 36439 May 15 00:47:36.534934 sshd-session[9442]: pam_faillock(sshd:auth): User unknown May 15 00:47:36.538967 sshd[9429]: Postponed keyboard-interactive for invalid user default from 24.229.22.106 port 36439 ssh2 [preauth] May 15 00:47:36.843912 sshd-session[9442]: pam_unix(sshd:auth): check pass; user unknown May 15 00:47:36.843935 sshd-session[9442]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=24.229.22.106 May 15 00:47:36.844257 sshd-session[9442]: pam_faillock(sshd:auth): User unknown May 15 00:47:39.238454 sshd[9429]: PAM: Permission denied for illegal user default from 24.229.22.106 May 15 00:47:39.238865 sshd[9429]: Failed keyboard-interactive/pam for invalid user default from 24.229.22.106 port 36439 ssh2 May 15 00:47:39.500841 sshd[9429]: Connection closed by invalid user default 24.229.22.106 port 36439 [preauth] May 15 00:47:39.502904 systemd[1]: sshd@7-147.28.145.22:22-24.229.22.106:36439.service: Deactivated successfully. May 15 00:51:31.711430 update_engine[2736]: I20250515 00:51:31.711360 2736 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 15 00:51:31.711430 update_engine[2736]: I20250515 00:51:31.711412 2736 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 15 00:51:31.712152 update_engine[2736]: I20250515 00:51:31.711632 2736 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 15 00:51:31.712152 update_engine[2736]: I20250515 00:51:31.711939 2736 omaha_request_params.cc:62] Current group set to beta May 15 00:51:31.712152 update_engine[2736]: I20250515 00:51:31.712038 2736 update_attempter.cc:499] Already updated boot flags. Skipping. May 15 00:51:31.712152 update_engine[2736]: I20250515 00:51:31.712049 2736 update_attempter.cc:643] Scheduling an action processor start. 
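The containerd entries above repeat one cleanup cycle per stale pod sandbox: StopPodSandbox, a network TearDown, a forced stop, a warning when the status lookup comes back "not found", and finally a RemovePodSandbox that returns successfully. As a rough, non-authoritative illustration (Python, not containerd's own code), the sketch below pairs those messages from a saved copy of a journal excerpt like this one and reports which sandbox IDs were purged; the file name journal.txt and the regular expressions are assumptions derived only from the message formats visible here.

import re
from collections import defaultdict

# Message patterns taken from the containerd lines in this excerpt; the log escapes
# the sandbox ID as \"...\", so the patterns match a literal backslash-quote pair.
FORCED_STOP = re.compile(r'msg="Forcibly stopping sandbox \\"([0-9a-f]+)\\"')
STATUS_MISS = re.compile(r'sandboxID \\"([0-9a-f]+)\\": an error occurred when try to find sandbox: not found')
REMOVED_OK  = re.compile(r'msg="RemovePodSandbox \\"([0-9a-f]+)\\" returns successfully"')

def summarize(path="journal.txt"):  # hypothetical file holding a saved copy of this excerpt
    seen = defaultdict(set)
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            for label, pattern in (("forced-stop", FORCED_STOP),
                                   ("status-missing", STATUS_MISS),
                                   ("removed", REMOVED_OK)):
                for match in pattern.finditer(line):
                    seen[match.group(1)].add(label)
    for sandbox_id, labels in sorted(seen.items()):
        print(f"{sandbox_id[:12]}  {', '.join(sorted(labels))}")

if __name__ == "__main__":
    summarize()

Run against the entries above, each sandbox ID would be listed with all three stages, confirming that every forced removal completed despite the missing status.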
May 15 00:51:31.712152 update_engine[2736]: I20250515 00:51:31.712062 2736 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 00:51:31.712152 update_engine[2736]: I20250515 00:51:31.712091 2736 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 15 00:51:31.712152 update_engine[2736]: I20250515 00:51:31.712139 2736 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 00:51:31.712152 update_engine[2736]: I20250515 00:51:31.712146 2736 omaha_request_action.cc:272] Request: May 15 00:51:31.712152 update_engine[2736]: May 15 00:51:31.712152 update_engine[2736]: May 15 00:51:31.712152 update_engine[2736]: May 15 00:51:31.712152 update_engine[2736]: May 15 00:51:31.712152 update_engine[2736]: May 15 00:51:31.712152 update_engine[2736]: May 15 00:51:31.712152 update_engine[2736]: May 15 00:51:31.712152 update_engine[2736]: May 15 00:51:31.712152 update_engine[2736]: I20250515 00:51:31.712152 2736 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 00:51:31.712445 locksmithd[2767]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 15 00:51:31.713139 update_engine[2736]: I20250515 00:51:31.713122 2736 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 00:51:31.713436 update_engine[2736]: I20250515 00:51:31.713418 2736 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 00:51:31.713897 update_engine[2736]: E20250515 00:51:31.713881 2736 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 00:51:31.713939 update_engine[2736]: I20250515 00:51:31.713928 2736 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 15 00:51:41.621978 update_engine[2736]: I20250515 00:51:41.621887 2736 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 00:51:41.622498 update_engine[2736]: I20250515 00:51:41.622197 2736 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 00:51:41.622498 update_engine[2736]: I20250515 00:51:41.622454 2736 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 00:51:41.622817 update_engine[2736]: E20250515 00:51:41.622795 2736 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 00:51:41.622848 update_engine[2736]: I20250515 00:51:41.622838 2736 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 15 00:51:51.621939 update_engine[2736]: I20250515 00:51:51.621878 2736 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 00:51:51.622343 update_engine[2736]: I20250515 00:51:51.622160 2736 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 00:51:51.622400 update_engine[2736]: I20250515 00:51:51.622380 2736 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 15 00:51:51.622786 update_engine[2736]: E20250515 00:51:51.622769 2736 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 00:51:51.622836 update_engine[2736]: I20250515 00:51:51.622824 2736 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 15 00:52:01.621935 update_engine[2736]: I20250515 00:52:01.621865 2736 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 00:52:01.622392 update_engine[2736]: I20250515 00:52:01.622110 2736 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 00:52:01.622392 update_engine[2736]: I20250515 00:52:01.622319 2736 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 00:52:01.622840 update_engine[2736]: E20250515 00:52:01.622822 2736 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 00:52:01.622865 update_engine[2736]: I20250515 00:52:01.622856 2736 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 00:52:01.622886 update_engine[2736]: I20250515 00:52:01.622864 2736 omaha_request_action.cc:617] Omaha request response: May 15 00:52:01.622952 update_engine[2736]: E20250515 00:52:01.622932 2736 omaha_request_action.cc:636] Omaha request network transfer failed. May 15 00:52:01.622952 update_engine[2736]: I20250515 00:52:01.622946 2736 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 15 00:52:01.623003 update_engine[2736]: I20250515 00:52:01.622951 2736 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 00:52:01.623003 update_engine[2736]: I20250515 00:52:01.622956 2736 update_attempter.cc:306] Processing Done. May 15 00:52:01.623003 update_engine[2736]: E20250515 00:52:01.622969 2736 update_attempter.cc:619] Update failed. May 15 00:52:01.623003 update_engine[2736]: I20250515 00:52:01.622974 2736 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 15 00:52:01.623003 update_engine[2736]: I20250515 00:52:01.622978 2736 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 15 00:52:01.623003 update_engine[2736]: I20250515 00:52:01.622983 2736 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 15 00:52:01.623112 update_engine[2736]: I20250515 00:52:01.623045 2736 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 00:52:01.623112 update_engine[2736]: I20250515 00:52:01.623066 2736 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 00:52:01.623112 update_engine[2736]: I20250515 00:52:01.623071 2736 omaha_request_action.cc:272] Request: May 15 00:52:01.623112 update_engine[2736]: May 15 00:52:01.623112 update_engine[2736]: May 15 00:52:01.623112 update_engine[2736]: May 15 00:52:01.623112 update_engine[2736]: May 15 00:52:01.623112 update_engine[2736]: May 15 00:52:01.623112 update_engine[2736]: May 15 00:52:01.623112 update_engine[2736]: I20250515 00:52:01.623075 2736 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 00:52:01.623279 update_engine[2736]: I20250515 00:52:01.623188 2736 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 00:52:01.623300 locksmithd[2767]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 15 00:52:01.623483 update_engine[2736]: I20250515 00:52:01.623343 2736 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 00:52:01.623597 update_engine[2736]: E20250515 00:52:01.623581 2736 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 00:52:01.623635 update_engine[2736]: I20250515 00:52:01.623623 2736 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 00:52:01.623656 update_engine[2736]: I20250515 00:52:01.623632 2736 omaha_request_action.cc:617] Omaha request response: May 15 00:52:01.623656 update_engine[2736]: I20250515 00:52:01.623637 2736 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 00:52:01.623656 update_engine[2736]: I20250515 00:52:01.623642 2736 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 00:52:01.623656 update_engine[2736]: I20250515 00:52:01.623646 2736 update_attempter.cc:306] Processing Done. May 15 00:52:01.623656 update_engine[2736]: I20250515 00:52:01.623651 2736 update_attempter.cc:310] Error event sent. May 15 00:52:01.623746 update_engine[2736]: I20250515 00:52:01.623657 2736 update_check_scheduler.cc:74] Next update check in 46m19s May 15 00:52:01.623792 locksmithd[2767]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 15 00:53:10.933030 systemd[1]: Started sshd@8-147.28.145.22:22-139.178.68.195:43048.service - OpenSSH per-connection server daemon (139.178.68.195:43048). May 15 00:53:11.336542 sshd[10223]: Accepted publickey for core from 139.178.68.195 port 43048 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:53:11.337581 sshd-session[10223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:53:11.340993 systemd-logind[2723]: New session 10 of user core. May 15 00:53:11.353151 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 00:53:11.687721 sshd[10225]: Connection closed by 139.178.68.195 port 43048 May 15 00:53:11.688022 sshd-session[10223]: pam_unix(sshd:session): session closed for user core May 15 00:53:11.690891 systemd[1]: sshd@8-147.28.145.22:22-139.178.68.195:43048.service: Deactivated successfully. May 15 00:53:11.703484 systemd[1]: session-10.scope: Deactivated successfully. 
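Reading the update_engine block above: the client posts its Omaha check to the placeholder endpoint "disabled", each libcurl transfer fails with "Could not resolve host: disabled", three retries follow at roughly ten-second intervals, and the action processor then records kActionCodeOmahaErrorInHTTPResponse and schedules the next check 46m19s later. update_engine itself is C++ (per the .cc file names in the messages); the Python below is only a hedged sketch of that retry-then-reschedule pattern, with the URL, timeout, and delay values assumed from the timestamps in the log rather than taken from any real configuration.

import time
import urllib.request

MAX_RETRIES = 3      # the log shows "retry 1" through "retry 3" before the attempt is abandoned
RETRY_DELAY = 10     # successive transfer attempts above are roughly 10 seconds apart

def check_for_update(url="http://disabled/"):  # hypothetical stand-in for the disabled Omaha endpoint
    """Attempt an update check, retrying a few times before reporting failure."""
    for attempt in range(MAX_RETRIES + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                return response.read()           # a real client would parse the Omaha response here
        except OSError as exc:                   # covers DNS failures, timeouts, URLError
            if attempt < MAX_RETRIES:
                print(f"No HTTP response, retry {attempt + 1}: {exc}")
                time.sleep(RETRY_DELAY)
            else:
                print(f"Transfer failed after {MAX_RETRIES} retries: {exc}")
    return None                                  # caller would schedule the next periodic check

if __name__ == "__main__":
    if check_for_update() is None:
        print("Update check failed; scheduling the next check")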
May 15 00:53:11.704045 systemd-logind[2723]: Session 10 logged out. Waiting for processes to exit. May 15 00:53:11.704655 systemd-logind[2723]: Removed session 10. May 15 00:53:16.765927 systemd[1]: Started sshd@9-147.28.145.22:22-139.178.68.195:52988.service - OpenSSH per-connection server daemon (139.178.68.195:52988). May 15 00:53:17.177560 sshd[10283]: Accepted publickey for core from 139.178.68.195 port 52988 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:53:17.178498 sshd-session[10283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:53:17.181594 systemd-logind[2723]: New session 11 of user core. May 15 00:53:17.193084 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 00:53:17.528820 sshd[10286]: Connection closed by 139.178.68.195 port 52988 May 15 00:53:17.529250 sshd-session[10283]: pam_unix(sshd:session): session closed for user core May 15 00:53:17.532196 systemd[1]: sshd@9-147.28.145.22:22-139.178.68.195:52988.service: Deactivated successfully. May 15 00:53:17.534568 systemd[1]: session-11.scope: Deactivated successfully. May 15 00:53:17.535120 systemd-logind[2723]: Session 11 logged out. Waiting for processes to exit. May 15 00:53:17.535644 systemd-logind[2723]: Removed session 11. May 15 00:53:17.603008 systemd[1]: Started sshd@10-147.28.145.22:22-139.178.68.195:52992.service - OpenSSH per-connection server daemon (139.178.68.195:52992). May 15 00:53:18.009940 sshd[10325]: Accepted publickey for core from 139.178.68.195 port 52992 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:53:18.011013 sshd-session[10325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:53:18.014038 systemd-logind[2723]: New session 12 of user core. May 15 00:53:18.031158 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 00:53:18.384868 sshd[10327]: Connection closed by 139.178.68.195 port 52992 May 15 00:53:18.385158 sshd-session[10325]: pam_unix(sshd:session): session closed for user core May 15 00:53:18.388004 systemd[1]: sshd@10-147.28.145.22:22-139.178.68.195:52992.service: Deactivated successfully. May 15 00:53:18.389787 systemd[1]: session-12.scope: Deactivated successfully. May 15 00:53:18.390351 systemd-logind[2723]: Session 12 logged out. Waiting for processes to exit. May 15 00:53:18.390887 systemd-logind[2723]: Removed session 12. May 15 00:53:18.465020 systemd[1]: Started sshd@11-147.28.145.22:22-139.178.68.195:52998.service - OpenSSH per-connection server daemon (139.178.68.195:52998). May 15 00:53:18.872216 sshd[10361]: Accepted publickey for core from 139.178.68.195 port 52998 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:53:18.873727 sshd-session[10361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:53:18.876900 systemd-logind[2723]: New session 13 of user core. May 15 00:53:18.893141 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 00:53:19.223437 sshd[10363]: Connection closed by 139.178.68.195 port 52998 May 15 00:53:19.223842 sshd-session[10361]: pam_unix(sshd:session): session closed for user core May 15 00:53:19.226684 systemd[1]: sshd@11-147.28.145.22:22-139.178.68.195:52998.service: Deactivated successfully. May 15 00:53:19.228404 systemd[1]: session-13.scope: Deactivated successfully. May 15 00:53:19.228967 systemd-logind[2723]: Session 13 logged out. Waiting for processes to exit. 
May 15 00:53:19.229534 systemd-logind[2723]: Removed session 13. May 15 00:53:24.299032 systemd[1]: Started sshd@12-147.28.145.22:22-139.178.68.195:42492.service - OpenSSH per-connection server daemon (139.178.68.195:42492). May 15 00:53:24.715791 sshd[10424]: Accepted publickey for core from 139.178.68.195 port 42492 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:53:24.716816 sshd-session[10424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:53:24.720026 systemd-logind[2723]: New session 14 of user core. May 15 00:53:24.729096 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 00:53:25.073776 sshd[10426]: Connection closed by 139.178.68.195 port 42492 May 15 00:53:25.074138 sshd-session[10424]: pam_unix(sshd:session): session closed for user core May 15 00:53:25.076912 systemd[1]: sshd@12-147.28.145.22:22-139.178.68.195:42492.service: Deactivated successfully. May 15 00:53:25.078788 systemd[1]: session-14.scope: Deactivated successfully. May 15 00:53:25.079364 systemd-logind[2723]: Session 14 logged out. Waiting for processes to exit. May 15 00:53:25.079912 systemd-logind[2723]: Removed session 14. May 15 00:53:25.148952 systemd[1]: Started sshd@13-147.28.145.22:22-139.178.68.195:42508.service - OpenSSH per-connection server daemon (139.178.68.195:42508). May 15 00:53:25.570185 sshd[10465]: Accepted publickey for core from 139.178.68.195 port 42508 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:53:25.571296 sshd-session[10465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:53:25.574197 systemd-logind[2723]: New session 15 of user core. May 15 00:53:25.586100 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 00:53:25.963458 sshd[10467]: Connection closed by 139.178.68.195 port 42508 May 15 00:53:25.963803 sshd-session[10465]: pam_unix(sshd:session): session closed for user core May 15 00:53:25.966741 systemd[1]: sshd@13-147.28.145.22:22-139.178.68.195:42508.service: Deactivated successfully. May 15 00:53:25.968526 systemd[1]: session-15.scope: Deactivated successfully. May 15 00:53:25.969075 systemd-logind[2723]: Session 15 logged out. Waiting for processes to exit. May 15 00:53:25.969616 systemd-logind[2723]: Removed session 15. May 15 00:53:26.040193 systemd[1]: Started sshd@14-147.28.145.22:22-139.178.68.195:42510.service - OpenSSH per-connection server daemon (139.178.68.195:42510). May 15 00:53:26.446921 sshd[10501]: Accepted publickey for core from 139.178.68.195 port 42510 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:53:26.447975 sshd-session[10501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:53:26.451251 systemd-logind[2723]: New session 16 of user core. May 15 00:53:26.463145 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 00:53:27.802925 sshd[10503]: Connection closed by 139.178.68.195 port 42510 May 15 00:53:27.803333 sshd-session[10501]: pam_unix(sshd:session): session closed for user core May 15 00:53:27.806190 systemd[1]: sshd@14-147.28.145.22:22-139.178.68.195:42510.service: Deactivated successfully. May 15 00:53:27.808483 systemd[1]: session-16.scope: Deactivated successfully. May 15 00:53:27.808719 systemd[1]: session-16.scope: Consumed 3.592s CPU time, 114.2M memory peak. May 15 00:53:27.809087 systemd-logind[2723]: Session 16 logged out. Waiting for processes to exit. 
May 15 00:53:27.809710 systemd-logind[2723]: Removed session 16. May 15 00:53:27.872886 systemd[1]: Started sshd@15-147.28.145.22:22-139.178.68.195:42526.service - OpenSSH per-connection server daemon (139.178.68.195:42526). May 15 00:53:28.273071 sshd[10600]: Accepted publickey for core from 139.178.68.195 port 42526 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:53:28.274117 sshd-session[10600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:53:28.277302 systemd-logind[2723]: New session 17 of user core. May 15 00:53:28.290160 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 00:53:28.702975 sshd[10602]: Connection closed by 139.178.68.195 port 42526 May 15 00:53:28.703334 sshd-session[10600]: pam_unix(sshd:session): session closed for user core May 15 00:53:28.706288 systemd[1]: sshd@15-147.28.145.22:22-139.178.68.195:42526.service: Deactivated successfully. May 15 00:53:28.708891 systemd[1]: session-17.scope: Deactivated successfully. May 15 00:53:28.709490 systemd-logind[2723]: Session 17 logged out. Waiting for processes to exit. May 15 00:53:28.710046 systemd-logind[2723]: Removed session 17. May 15 00:53:28.779781 systemd[1]: Started sshd@16-147.28.145.22:22-139.178.68.195:42542.service - OpenSSH per-connection server daemon (139.178.68.195:42542). May 15 00:53:29.201127 sshd[10651]: Accepted publickey for core from 139.178.68.195 port 42542 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:53:29.202106 sshd-session[10651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:53:29.205222 systemd-logind[2723]: New session 18 of user core. May 15 00:53:29.214144 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 00:53:29.557711 sshd[10658]: Connection closed by 139.178.68.195 port 42542 May 15 00:53:29.558128 sshd-session[10651]: pam_unix(sshd:session): session closed for user core May 15 00:53:29.561056 systemd[1]: sshd@16-147.28.145.22:22-139.178.68.195:42542.service: Deactivated successfully. May 15 00:53:29.563253 systemd[1]: session-18.scope: Deactivated successfully. May 15 00:53:29.563861 systemd-logind[2723]: Session 18 logged out. Waiting for processes to exit. May 15 00:53:29.564537 systemd-logind[2723]: Removed session 18. May 15 00:53:32.261946 systemd[1]: Started sshd@17-147.28.145.22:22-117.50.163.21:39176.service - OpenSSH per-connection server daemon (117.50.163.21:39176). May 15 00:53:33.171497 sshd[10742]: Invalid user from 117.50.163.21 port 39176 May 15 00:53:34.629001 systemd[1]: Started sshd@18-147.28.145.22:22-139.178.68.195:56480.service - OpenSSH per-connection server daemon (139.178.68.195:56480). May 15 00:53:35.033259 sshd[10745]: Accepted publickey for core from 139.178.68.195 port 56480 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:53:35.034288 sshd-session[10745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:53:35.037328 systemd-logind[2723]: New session 19 of user core. May 15 00:53:35.047087 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 00:53:35.380038 sshd[10747]: Connection closed by 139.178.68.195 port 56480 May 15 00:53:35.380331 sshd-session[10745]: pam_unix(sshd:session): session closed for user core May 15 00:53:35.383250 systemd[1]: sshd@18-147.28.145.22:22-139.178.68.195:56480.service: Deactivated successfully. May 15 00:53:35.385015 systemd[1]: session-19.scope: Deactivated successfully. 
May 15 00:53:35.385568 systemd-logind[2723]: Session 19 logged out. Waiting for processes to exit. May 15 00:53:35.386139 systemd-logind[2723]: Removed session 19. May 15 00:53:40.252564 sshd[10742]: Connection closed by invalid user 117.50.163.21 port 39176 [preauth] May 15 00:53:40.254583 systemd[1]: sshd@17-147.28.145.22:22-117.50.163.21:39176.service: Deactivated successfully. May 15 00:53:40.451978 systemd[1]: Started sshd@19-147.28.145.22:22-139.178.68.195:56494.service - OpenSSH per-connection server daemon (139.178.68.195:56494). May 15 00:53:40.855198 sshd[10785]: Accepted publickey for core from 139.178.68.195 port 56494 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:53:40.856255 sshd-session[10785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:53:40.859412 systemd-logind[2723]: New session 20 of user core. May 15 00:53:40.871093 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 00:53:41.200771 sshd[10787]: Connection closed by 139.178.68.195 port 56494 May 15 00:53:41.201105 sshd-session[10785]: pam_unix(sshd:session): session closed for user core May 15 00:53:41.203930 systemd[1]: sshd@19-147.28.145.22:22-139.178.68.195:56494.service: Deactivated successfully. May 15 00:53:41.206237 systemd[1]: session-20.scope: Deactivated successfully. May 15 00:53:41.206802 systemd-logind[2723]: Session 20 logged out. Waiting for processes to exit. May 15 00:53:41.207405 systemd-logind[2723]: Removed session 20. May 15 00:53:46.272207 systemd[1]: Started sshd@20-147.28.145.22:22-139.178.68.195:47596.service - OpenSSH per-connection server daemon (139.178.68.195:47596). May 15 00:53:46.672163 sshd[10842]: Accepted publickey for core from 139.178.68.195 port 47596 ssh2: RSA SHA256:8oJIdO872RFbPzczlCaeCEVqSyLrFmYJzdz3/Hf/guQ May 15 00:53:46.673131 sshd-session[10842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:53:46.676113 systemd-logind[2723]: New session 21 of user core. May 15 00:53:46.692086 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 00:53:47.015174 sshd[10844]: Connection closed by 139.178.68.195 port 47596 May 15 00:53:47.015556 sshd-session[10842]: pam_unix(sshd:session): session closed for user core May 15 00:53:47.018351 systemd[1]: sshd@20-147.28.145.22:22-139.178.68.195:47596.service: Deactivated successfully. May 15 00:53:47.020763 systemd[1]: session-21.scope: Deactivated successfully. May 15 00:53:47.021337 systemd-logind[2723]: Session 21 logged out. Waiting for processes to exit. May 15 00:53:47.021865 systemd-logind[2723]: Removed session 21.
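The remainder of the section is routine OpenSSH traffic: short-lived sessions for user core from 139.178.68.195, each accepted by public key and closed within seconds, interleaved with probes from 24.229.22.106 and 117.50.163.21 that are rejected as invalid users. For triaging a capture like this, a small assumed Python sketch follows; the message patterns are copied from the sshd lines above, and everything else (the file name in particular) is hypothetical.

import re
from collections import Counter

# Patterns copied from the sshd messages above; the "Invalid user" pattern only
# matches probes that supplied a user name.
ACCEPTED = re.compile(r"Accepted publickey for (\S+) from (\S+) port \d+")
INVALID  = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

def triage(path="journal.txt"):  # hypothetical path to a saved copy of this excerpt
    accepted, probes = Counter(), Counter()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            for match in ACCEPTED.finditer(line):
                accepted[f"{match.group(1)}@{match.group(2)}"] += 1
            for match in INVALID.finditer(line):
                probes[match.group(2)] += 1
    print("accepted public-key sessions:")
    for who, count in accepted.most_common():
        print(f"  {who}: {count}")
    print("invalid-user probes:")
    for host, count in probes.most_common():
        print(f"  {host}: {count}")

if __name__ == "__main__":
    triage()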