Jul 6 23:44:42.178287 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1]
Jul 6 23:44:42.178309 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Sun Jul 6 21:51:54 -00 2025
Jul 6 23:44:42.178318 kernel: KASLR enabled
Jul 6 23:44:42.178324 kernel: efi: EFI v2.7 by American Megatrends
Jul 6 23:44:42.178330 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea468818 RNG=0xebf10018 MEMRESERVE=0xe4627e18
Jul 6 23:44:42.178335 kernel: random: crng init done
Jul 6 23:44:42.178342 kernel: secureboot: Secure boot disabled
Jul 6 23:44:42.178348 kernel: esrt: Reserving ESRT space from 0x00000000ea468818 to 0x00000000ea468878.
Jul 6 23:44:42.178355 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:44:42.178361 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere)
Jul 6 23:44:42.178367 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013)
Jul 6 23:44:42.178373 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509)
Jul 6 23:44:42.178378 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717)
Jul 6 23:44:42.178384 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509)
Jul 6 23:44:42.178393 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509)
Jul 6 23:44:42.178399 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509)
Jul 6 23:44:42.178405 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013)
Jul 6 23:44:42.178411 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F)
Jul 6 23:44:42.178418 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013)
Jul 6 23:44:42.178424 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013)
Jul 6 23:44:42.178430 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013)
Jul 6 23:44:42.178436 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013)
Jul 6 23:44:42.178442 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013)
Jul 6 23:44:42.178448 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013)
Jul 6 23:44:42.178455 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013)
Jul 6 23:44:42.178462 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013)
Jul 6 23:44:42.178468 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013)
Jul 6 23:44:42.178474 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013)
Jul 6 23:44:42.178480 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200
Jul 6 23:44:42.178486 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff]
Jul 6 23:44:42.178493 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff]
Jul 6 23:44:42.178499 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff]
Jul 6 23:44:42.178505 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff]
Jul 6 23:44:42.178511 kernel: NUMA: NODE_DATA [mem 0x83fdffcb800-0x83fdffd0fff]
Jul 6 23:44:42.178517 kernel: Zone ranges:
Jul 6 23:44:42.178525 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff]
Jul 6 23:44:42.178531 kernel: DMA32 empty
Jul 6 23:44:42.178537 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff]
Jul 6 23:44:42.178543 kernel: Movable zone start for each node
Jul 6 23:44:42.178549 kernel: Early memory node ranges
Jul 6 23:44:42.178558 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff]
Jul 6 23:44:42.178564 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff]
Jul 6 23:44:42.178572 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff]
Jul 6 23:44:42.178579 kernel: node 0: [mem 0x0000000094000000-0x00000000eba2dfff]
Jul 6 23:44:42.178585 kernel: node 0: [mem 0x00000000eba2e000-0x00000000ebeaffff]
Jul 6 23:44:42.178592 kernel: node 0: [mem 0x00000000ebeb0000-0x00000000ebeb9fff]
Jul 6 23:44:42.178598 kernel: node 0: [mem 0x00000000ebeba000-0x00000000ebeccfff]
Jul 6 23:44:42.178605 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff]
Jul 6 23:44:42.178611 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff]
Jul 6 23:44:42.178617 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff]
Jul 6 23:44:42.178624 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff]
Jul 6 23:44:42.178630 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee53ffff]
Jul 6 23:44:42.178638 kernel: node 0: [mem 0x00000000ee540000-0x00000000f765ffff]
Jul 6 23:44:42.178644 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff]
Jul 6 23:44:42.178651 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff]
Jul 6 23:44:42.178657 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff]
Jul 6 23:44:42.178664 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff]
Jul 6 23:44:42.178670 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff]
Jul 6 23:44:42.178677 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff]
Jul 6 23:44:42.178683 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff]
Jul 6 23:44:42.178690 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff]
Jul 6 23:44:42.178696 kernel: On node 0, zone DMA: 768 pages in unavailable ranges
Jul 6 23:44:42.178703 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges
Jul 6 23:44:42.178710 kernel: psci: probing for conduit method from ACPI.
Jul 6 23:44:42.178717 kernel: psci: PSCIv1.1 detected in firmware.
Jul 6 23:44:42.178723 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 6 23:44:42.178730 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 6 23:44:42.178736 kernel: psci: SMC Calling Convention v1.2 Jul 6 23:44:42.178742 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 6 23:44:42.178749 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 Jul 6 23:44:42.178755 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 Jul 6 23:44:42.178762 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 Jul 6 23:44:42.178768 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 Jul 6 23:44:42.178775 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 Jul 6 23:44:42.178781 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 Jul 6 23:44:42.178789 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 Jul 6 23:44:42.178795 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 Jul 6 23:44:42.178802 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 Jul 6 23:44:42.178808 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 Jul 6 23:44:42.178815 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 Jul 6 23:44:42.178821 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 Jul 6 23:44:42.178828 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 Jul 6 23:44:42.178834 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 Jul 6 23:44:42.178840 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 Jul 6 23:44:42.178847 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 Jul 6 23:44:42.178853 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 Jul 6 23:44:42.178859 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 Jul 6 23:44:42.178867 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 Jul 6 23:44:42.178874 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 Jul 6 23:44:42.178880 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0 Jul 6 23:44:42.178887 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 Jul 6 23:44:42.178893 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0 Jul 6 23:44:42.178899 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 Jul 6 23:44:42.178906 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 Jul 6 23:44:42.178912 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 Jul 6 23:44:42.178919 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 Jul 6 23:44:42.178925 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 Jul 6 23:44:42.178932 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 Jul 6 23:44:42.178939 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 Jul 6 23:44:42.178945 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 Jul 6 23:44:42.178952 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 Jul 6 23:44:42.178959 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 Jul 6 23:44:42.178965 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 Jul 6 23:44:42.178971 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 Jul 6 23:44:42.178978 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 Jul 6 23:44:42.178984 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 Jul 6 23:44:42.178991 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0 Jul 6 23:44:42.178997 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0 Jul 6 23:44:42.179003 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 Jul 6 23:44:42.179010 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 Jul 6 23:44:42.179017 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 
Jul 6 23:44:42.179024 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 Jul 6 23:44:42.179030 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 Jul 6 23:44:42.179037 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 Jul 6 23:44:42.179043 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 Jul 6 23:44:42.179049 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0 Jul 6 23:44:42.179056 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 Jul 6 23:44:42.179062 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 Jul 6 23:44:42.179075 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 Jul 6 23:44:42.179081 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 Jul 6 23:44:42.179089 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 Jul 6 23:44:42.179096 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 Jul 6 23:44:42.179103 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 Jul 6 23:44:42.179110 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 Jul 6 23:44:42.179117 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 Jul 6 23:44:42.179123 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 Jul 6 23:44:42.179131 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 Jul 6 23:44:42.179138 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 Jul 6 23:44:42.179145 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 Jul 6 23:44:42.179152 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 Jul 6 23:44:42.179162 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 Jul 6 23:44:42.179170 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 Jul 6 23:44:42.179177 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 Jul 6 23:44:42.179183 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 Jul 6 23:44:42.179190 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 Jul 6 23:44:42.179197 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 Jul 6 23:44:42.179204 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 Jul 6 23:44:42.179211 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 Jul 6 23:44:42.179219 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 Jul 6 23:44:42.179226 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0 Jul 6 23:44:42.179233 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 Jul 6 23:44:42.179240 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 Jul 6 23:44:42.179247 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 Jul 6 23:44:42.179254 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 Jul 6 23:44:42.179260 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 Jul 6 23:44:42.179267 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 Jul 6 23:44:42.179274 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 Jul 6 23:44:42.179281 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 Jul 6 23:44:42.179288 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 6 23:44:42.179296 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 6 23:44:42.179303 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 Jul 6 23:44:42.179310 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 Jul 6 23:44:42.179317 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23 Jul 6 23:44:42.179324 kernel: pcpu-alloc: [0] 24 [0] 
25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31
Jul 6 23:44:42.179331 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39
Jul 6 23:44:42.179338 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47
Jul 6 23:44:42.179345 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55
Jul 6 23:44:42.179352 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63
Jul 6 23:44:42.179358 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71
Jul 6 23:44:42.179365 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79
Jul 6 23:44:42.179373 kernel: Detected PIPT I-cache on CPU0
Jul 6 23:44:42.179380 kernel: CPU features: detected: GIC system register CPU interface
Jul 6 23:44:42.179387 kernel: CPU features: detected: Virtualization Host Extensions
Jul 6 23:44:42.179394 kernel: CPU features: detected: Hardware dirty bit management
Jul 6 23:44:42.179401 kernel: CPU features: detected: Spectre-v4
Jul 6 23:44:42.179408 kernel: CPU features: detected: Spectre-BHB
Jul 6 23:44:42.179414 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 6 23:44:42.179421 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 6 23:44:42.179428 kernel: CPU features: detected: ARM erratum 1418040
Jul 6 23:44:42.179435 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 6 23:44:42.179442 kernel: alternatives: applying boot alternatives
Jul 6 23:44:42.179450 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=ca8feb1f79a67c117068f051b5f829d3e40170c022cd5834bd6789cba9641479
Jul 6 23:44:42.179459 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:44:42.179466 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jul 6 23:44:42.179472 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes
Jul 6 23:44:42.179479 kernel: printk: log_buf_len min size: 262144 bytes
Jul 6 23:44:42.179486 kernel: printk: log_buf_len: 1048576 bytes
Jul 6 23:44:42.179493 kernel: printk: early log buf free: 249864(95%)
Jul 6 23:44:42.179500 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear)
Jul 6 23:44:42.179507 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
Jul 6 23:44:42.179514 kernel: Fallback order for Node 0: 0
Jul 6 23:44:42.179521 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028
Jul 6 23:44:42.179529 kernel: Policy zone: Normal
Jul 6 23:44:42.179536 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:44:42.179543 kernel: software IO TLB: area num 128.
Jul 6 23:44:42.179550 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB)
Jul 6 23:44:42.179557 kernel: Memory: 262923416K/268174336K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 5250920K reserved, 0K cma-reserved)
Jul 6 23:44:42.179564 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1
Jul 6 23:44:42.179571 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:44:42.179578 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:44:42.179585 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80.
Jul 6 23:44:42.179592 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:44:42.179599 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:44:42.179606 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:44:42.179615 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80
Jul 6 23:44:42.179622 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 6 23:44:42.179629 kernel: GICv3: GIC: Using split EOI/Deactivate mode
Jul 6 23:44:42.179635 kernel: GICv3: 672 SPIs implemented
Jul 6 23:44:42.179642 kernel: GICv3: 0 Extended SPIs implemented
Jul 6 23:44:42.179649 kernel: Root IRQ handler: gic_handle_irq
Jul 6 23:44:42.179656 kernel: GICv3: GICv3 features: 16 PPIs
Jul 6 23:44:42.179663 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000
Jul 6 23:44:42.179670 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0
Jul 6 23:44:42.179677 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0
Jul 6 23:44:42.179683 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0
Jul 6 23:44:42.179690 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0
Jul 6 23:44:42.179698 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0
Jul 6 23:44:42.179705 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0
Jul 6 23:44:42.179712 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0
Jul 6 23:44:42.179719 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0
Jul 6 23:44:42.179726 kernel: ITS [mem 0x100100040000-0x10010005ffff]
Jul 6 23:44:42.179733 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:44:42.179740 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1)
Jul 6 23:44:42.179747 kernel: ITS [mem 0x100100060000-0x10010007ffff]
Jul 6 23:44:42.179754 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:44:42.179761 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1)
Jul 6 23:44:42.179768 kernel: ITS [mem 0x100100080000-0x10010009ffff]
Jul 6 23:44:42.179777 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:44:42.179784 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1)
Jul 6 23:44:42.179791 kernel: ITS [mem 0x1001000a0000-0x1001000bffff]
Jul 6 23:44:42.179798 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:44:42.179805 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1)
Jul 6 23:44:42.179812 kernel: ITS [mem 0x1001000c0000-0x1001000dffff]
Jul 6 23:44:42.179819 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:44:42.179826 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1)
Jul 6 23:44:42.179833 kernel: ITS [mem 0x1001000e0000-0x1001000fffff]
Jul 6 23:44:42.179840 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:44:42.179847 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1)
Jul 6 23:44:42.179855 kernel: ITS [mem 0x100100100000-0x10010011ffff]
Jul 6 23:44:42.179862 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:44:42.179869 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1)
Jul 6 23:44:42.179876 kernel: ITS [mem 0x100100120000-0x10010013ffff]
Jul 6 23:44:42.179883 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:44:42.179890 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1)
Jul 6 23:44:42.179897 kernel: GICv3: using LPI property table @0x00000800003e0000
Jul 6 23:44:42.179904 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000
Jul 6 23:44:42.179911 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:44:42.179918 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:44:42.179925 kernel: ACPI GTDT: found 1 memory-mapped timer block(s).
Jul 6 23:44:42.179933 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys).
Jul 6 23:44:42.179940 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 6 23:44:42.179948 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 6 23:44:42.179955 kernel: Console: colour dummy device 80x25
Jul 6 23:44:42.179962 kernel: printk: console [tty0] enabled
Jul 6 23:44:42.179969 kernel: ACPI: Core revision 20230628
Jul 6 23:44:42.179976 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 6 23:44:42.179983 kernel: pid_max: default: 81920 minimum: 640
Jul 6 23:44:42.179990 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:44:42.179997 kernel: landlock: Up and running.
Jul 6 23:44:42.180005 kernel: SELinux: Initializing.
Jul 6 23:44:42.180012 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:44:42.180020 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:44:42.180027 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
Jul 6 23:44:42.180034 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
Jul 6 23:44:42.180041 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:44:42.180048 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:44:42.180055 kernel: Platform MSI: ITS@0x100100040000 domain created Jul 6 23:44:42.180063 kernel: Platform MSI: ITS@0x100100060000 domain created Jul 6 23:44:42.180071 kernel: Platform MSI: ITS@0x100100080000 domain created Jul 6 23:44:42.180078 kernel: Platform MSI: ITS@0x1001000a0000 domain created Jul 6 23:44:42.180085 kernel: Platform MSI: ITS@0x1001000c0000 domain created Jul 6 23:44:42.180092 kernel: Platform MSI: ITS@0x1001000e0000 domain created Jul 6 23:44:42.180099 kernel: Platform MSI: ITS@0x100100100000 domain created Jul 6 23:44:42.180106 kernel: Platform MSI: ITS@0x100100120000 domain created Jul 6 23:44:42.180113 kernel: PCI/MSI: ITS@0x100100040000 domain created Jul 6 23:44:42.180120 kernel: PCI/MSI: ITS@0x100100060000 domain created Jul 6 23:44:42.180127 kernel: PCI/MSI: ITS@0x100100080000 domain created Jul 6 23:44:42.180135 kernel: PCI/MSI: ITS@0x1001000a0000 domain created Jul 6 23:44:42.180142 kernel: PCI/MSI: ITS@0x1001000c0000 domain created Jul 6 23:44:42.180149 kernel: PCI/MSI: ITS@0x1001000e0000 domain created Jul 6 23:44:42.180155 kernel: PCI/MSI: ITS@0x100100100000 domain created Jul 6 23:44:42.180212 kernel: PCI/MSI: ITS@0x100100120000 domain created Jul 6 23:44:42.180220 kernel: Remapping and enabling EFI services. Jul 6 23:44:42.180227 kernel: smp: Bringing up secondary CPUs ... Jul 6 23:44:42.180234 kernel: Detected PIPT I-cache on CPU1 Jul 6 23:44:42.180241 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000 Jul 6 23:44:42.180248 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000 Jul 6 23:44:42.180257 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180264 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1] Jul 6 23:44:42.180271 kernel: Detected PIPT I-cache on CPU2 Jul 6 23:44:42.180278 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000 Jul 6 23:44:42.180285 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000 Jul 6 23:44:42.180292 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180299 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1] Jul 6 23:44:42.180306 kernel: Detected PIPT I-cache on CPU3 Jul 6 23:44:42.180314 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000 Jul 6 23:44:42.180322 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000 Jul 6 23:44:42.180329 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180336 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1] Jul 6 23:44:42.180343 kernel: Detected PIPT I-cache on CPU4 Jul 6 23:44:42.180350 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000 Jul 6 23:44:42.180357 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000 Jul 6 23:44:42.180364 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180371 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1] Jul 6 23:44:42.180378 kernel: Detected PIPT I-cache on CPU5 Jul 6 23:44:42.180385 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000 Jul 6 23:44:42.180393 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000 Jul 6 23:44:42.180400 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180407 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1] Jul 6 23:44:42.180414 kernel: 
Detected PIPT I-cache on CPU6 Jul 6 23:44:42.180421 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000 Jul 6 23:44:42.180428 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000 Jul 6 23:44:42.180435 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180442 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1] Jul 6 23:44:42.180449 kernel: Detected PIPT I-cache on CPU7 Jul 6 23:44:42.180458 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000 Jul 6 23:44:42.180465 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000 Jul 6 23:44:42.180472 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180479 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1] Jul 6 23:44:42.180486 kernel: Detected PIPT I-cache on CPU8 Jul 6 23:44:42.180493 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000 Jul 6 23:44:42.180500 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000 Jul 6 23:44:42.180507 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180514 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1] Jul 6 23:44:42.180521 kernel: Detected PIPT I-cache on CPU9 Jul 6 23:44:42.180529 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000 Jul 6 23:44:42.180536 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000 Jul 6 23:44:42.180543 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180550 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1] Jul 6 23:44:42.180557 kernel: Detected PIPT I-cache on CPU10 Jul 6 23:44:42.180564 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000 Jul 6 23:44:42.180571 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000 Jul 6 23:44:42.180578 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180585 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1] Jul 6 23:44:42.180593 kernel: Detected PIPT I-cache on CPU11 Jul 6 23:44:42.180601 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000 Jul 6 23:44:42.180608 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000 Jul 6 23:44:42.180615 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180622 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1] Jul 6 23:44:42.180629 kernel: Detected PIPT I-cache on CPU12 Jul 6 23:44:42.180636 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000 Jul 6 23:44:42.180643 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000 Jul 6 23:44:42.180650 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180657 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1] Jul 6 23:44:42.180665 kernel: Detected PIPT I-cache on CPU13 Jul 6 23:44:42.180672 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000 Jul 6 23:44:42.180679 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000 Jul 6 23:44:42.180686 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180693 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1] Jul 6 23:44:42.180700 kernel: Detected PIPT I-cache on CPU14 Jul 6 23:44:42.180707 
kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000 Jul 6 23:44:42.180715 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000 Jul 6 23:44:42.180722 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180730 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1] Jul 6 23:44:42.180738 kernel: Detected PIPT I-cache on CPU15 Jul 6 23:44:42.180745 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000 Jul 6 23:44:42.180752 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000 Jul 6 23:44:42.180759 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180766 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1] Jul 6 23:44:42.180773 kernel: Detected PIPT I-cache on CPU16 Jul 6 23:44:42.180780 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000 Jul 6 23:44:42.180787 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000 Jul 6 23:44:42.180803 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180811 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1] Jul 6 23:44:42.180819 kernel: Detected PIPT I-cache on CPU17 Jul 6 23:44:42.180826 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000 Jul 6 23:44:42.180833 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000 Jul 6 23:44:42.180840 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180848 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1] Jul 6 23:44:42.180855 kernel: Detected PIPT I-cache on CPU18 Jul 6 23:44:42.180863 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000 Jul 6 23:44:42.180870 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000 Jul 6 23:44:42.180879 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180886 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1] Jul 6 23:44:42.180894 kernel: Detected PIPT I-cache on CPU19 Jul 6 23:44:42.180901 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 Jul 6 23:44:42.180908 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000 Jul 6 23:44:42.180916 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180926 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] Jul 6 23:44:42.180933 kernel: Detected PIPT I-cache on CPU20 Jul 6 23:44:42.180940 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 Jul 6 23:44:42.180948 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000 Jul 6 23:44:42.180955 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.180963 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] Jul 6 23:44:42.180970 kernel: Detected PIPT I-cache on CPU21 Jul 6 23:44:42.180977 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 Jul 6 23:44:42.180985 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000 Jul 6 23:44:42.180994 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181001 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] Jul 6 23:44:42.181008 kernel: Detected PIPT I-cache on CPU22 Jul 6 23:44:42.181016 kernel: GICv3: CPU22: found redistributor 
150000 region 0:0x0000100100680000 Jul 6 23:44:42.181023 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000 Jul 6 23:44:42.181030 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181038 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] Jul 6 23:44:42.181045 kernel: Detected PIPT I-cache on CPU23 Jul 6 23:44:42.181052 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 Jul 6 23:44:42.181060 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000 Jul 6 23:44:42.181069 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181076 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] Jul 6 23:44:42.181083 kernel: Detected PIPT I-cache on CPU24 Jul 6 23:44:42.181091 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 Jul 6 23:44:42.181098 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000 Jul 6 23:44:42.181106 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181113 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] Jul 6 23:44:42.181120 kernel: Detected PIPT I-cache on CPU25 Jul 6 23:44:42.181128 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000 Jul 6 23:44:42.181136 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000 Jul 6 23:44:42.181145 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181154 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] Jul 6 23:44:42.181163 kernel: Detected PIPT I-cache on CPU26 Jul 6 23:44:42.181171 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 Jul 6 23:44:42.181179 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000 Jul 6 23:44:42.181186 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181193 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] Jul 6 23:44:42.181201 kernel: Detected PIPT I-cache on CPU27 Jul 6 23:44:42.181210 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 Jul 6 23:44:42.181217 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000 Jul 6 23:44:42.181225 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181232 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] Jul 6 23:44:42.181239 kernel: Detected PIPT I-cache on CPU28 Jul 6 23:44:42.181247 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 Jul 6 23:44:42.181254 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 Jul 6 23:44:42.181261 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181269 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] Jul 6 23:44:42.181276 kernel: Detected PIPT I-cache on CPU29 Jul 6 23:44:42.181285 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 Jul 6 23:44:42.181292 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 Jul 6 23:44:42.181300 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181307 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] Jul 6 23:44:42.181314 kernel: Detected PIPT I-cache on CPU30 Jul 6 23:44:42.181322 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 
Jul 6 23:44:42.181329 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 Jul 6 23:44:42.181337 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181344 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] Jul 6 23:44:42.181353 kernel: Detected PIPT I-cache on CPU31 Jul 6 23:44:42.181360 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 Jul 6 23:44:42.181368 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 Jul 6 23:44:42.181375 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181383 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] Jul 6 23:44:42.181390 kernel: Detected PIPT I-cache on CPU32 Jul 6 23:44:42.181397 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 Jul 6 23:44:42.181405 kernel: GICv3: CPU32: using allocated LPI pending table @0x00000800009f0000 Jul 6 23:44:42.181412 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181419 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] Jul 6 23:44:42.181428 kernel: Detected PIPT I-cache on CPU33 Jul 6 23:44:42.181435 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 Jul 6 23:44:42.181443 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 Jul 6 23:44:42.181450 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181457 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] Jul 6 23:44:42.181465 kernel: Detected PIPT I-cache on CPU34 Jul 6 23:44:42.181472 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 Jul 6 23:44:42.181480 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 Jul 6 23:44:42.181487 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181496 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] Jul 6 23:44:42.181503 kernel: Detected PIPT I-cache on CPU35 Jul 6 23:44:42.181510 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 Jul 6 23:44:42.181518 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 Jul 6 23:44:42.181525 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181533 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] Jul 6 23:44:42.181540 kernel: Detected PIPT I-cache on CPU36 Jul 6 23:44:42.181547 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 Jul 6 23:44:42.181555 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 Jul 6 23:44:42.181562 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181571 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] Jul 6 23:44:42.181578 kernel: Detected PIPT I-cache on CPU37 Jul 6 23:44:42.181586 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 Jul 6 23:44:42.181593 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 Jul 6 23:44:42.181600 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181607 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] Jul 6 23:44:42.181615 kernel: Detected PIPT I-cache on CPU38 Jul 6 23:44:42.181622 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 Jul 6 23:44:42.181630 kernel: GICv3: 
CPU38: using allocated LPI pending table @0x0000080000a50000 Jul 6 23:44:42.181639 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181646 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] Jul 6 23:44:42.181653 kernel: Detected PIPT I-cache on CPU39 Jul 6 23:44:42.181662 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 Jul 6 23:44:42.181670 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 Jul 6 23:44:42.181677 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181684 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] Jul 6 23:44:42.181692 kernel: Detected PIPT I-cache on CPU40 Jul 6 23:44:42.181701 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 Jul 6 23:44:42.181708 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 Jul 6 23:44:42.181716 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181723 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] Jul 6 23:44:42.181730 kernel: Detected PIPT I-cache on CPU41 Jul 6 23:44:42.181738 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 Jul 6 23:44:42.181745 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000a80000 Jul 6 23:44:42.181753 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181760 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] Jul 6 23:44:42.181767 kernel: Detected PIPT I-cache on CPU42 Jul 6 23:44:42.181776 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 Jul 6 23:44:42.181783 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 Jul 6 23:44:42.181791 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181798 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] Jul 6 23:44:42.181805 kernel: Detected PIPT I-cache on CPU43 Jul 6 23:44:42.181813 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 Jul 6 23:44:42.181820 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 Jul 6 23:44:42.181828 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181835 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] Jul 6 23:44:42.181844 kernel: Detected PIPT I-cache on CPU44 Jul 6 23:44:42.181851 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 Jul 6 23:44:42.181859 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 Jul 6 23:44:42.181866 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181874 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] Jul 6 23:44:42.181881 kernel: Detected PIPT I-cache on CPU45 Jul 6 23:44:42.181889 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 Jul 6 23:44:42.181896 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 Jul 6 23:44:42.181904 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181911 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] Jul 6 23:44:42.181920 kernel: Detected PIPT I-cache on CPU46 Jul 6 23:44:42.181927 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 Jul 6 23:44:42.181935 kernel: GICv3: CPU46: using allocated LPI pending 
table @0x0000080000ad0000 Jul 6 23:44:42.181942 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181949 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] Jul 6 23:44:42.181957 kernel: Detected PIPT I-cache on CPU47 Jul 6 23:44:42.181964 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 Jul 6 23:44:42.181972 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 Jul 6 23:44:42.181979 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.181987 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] Jul 6 23:44:42.181995 kernel: Detected PIPT I-cache on CPU48 Jul 6 23:44:42.182002 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 Jul 6 23:44:42.182010 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 Jul 6 23:44:42.182017 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182024 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] Jul 6 23:44:42.182032 kernel: Detected PIPT I-cache on CPU49 Jul 6 23:44:42.182039 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 Jul 6 23:44:42.182047 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 Jul 6 23:44:42.182055 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182063 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] Jul 6 23:44:42.182070 kernel: Detected PIPT I-cache on CPU50 Jul 6 23:44:42.182077 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 Jul 6 23:44:42.182085 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000b10000 Jul 6 23:44:42.182092 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182099 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] Jul 6 23:44:42.182108 kernel: Detected PIPT I-cache on CPU51 Jul 6 23:44:42.182115 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 Jul 6 23:44:42.182123 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 Jul 6 23:44:42.182131 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182139 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] Jul 6 23:44:42.182146 kernel: Detected PIPT I-cache on CPU52 Jul 6 23:44:42.182154 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 Jul 6 23:44:42.182163 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 Jul 6 23:44:42.182171 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182178 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] Jul 6 23:44:42.182186 kernel: Detected PIPT I-cache on CPU53 Jul 6 23:44:42.182193 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 Jul 6 23:44:42.182202 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 Jul 6 23:44:42.182210 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182217 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] Jul 6 23:44:42.182225 kernel: Detected PIPT I-cache on CPU54 Jul 6 23:44:42.182232 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 Jul 6 23:44:42.182240 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 Jul 6 
23:44:42.182247 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182254 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] Jul 6 23:44:42.182261 kernel: Detected PIPT I-cache on CPU55 Jul 6 23:44:42.182269 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 Jul 6 23:44:42.182278 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 Jul 6 23:44:42.182285 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182293 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] Jul 6 23:44:42.182300 kernel: Detected PIPT I-cache on CPU56 Jul 6 23:44:42.182307 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 Jul 6 23:44:42.182315 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 Jul 6 23:44:42.182322 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182330 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] Jul 6 23:44:42.182337 kernel: Detected PIPT I-cache on CPU57 Jul 6 23:44:42.182346 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 Jul 6 23:44:42.182353 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 Jul 6 23:44:42.182361 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182368 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] Jul 6 23:44:42.182376 kernel: Detected PIPT I-cache on CPU58 Jul 6 23:44:42.182383 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 Jul 6 23:44:42.182390 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 Jul 6 23:44:42.182398 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182405 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] Jul 6 23:44:42.182413 kernel: Detected PIPT I-cache on CPU59 Jul 6 23:44:42.182421 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 Jul 6 23:44:42.182429 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000ba0000 Jul 6 23:44:42.182436 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182444 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] Jul 6 23:44:42.182451 kernel: Detected PIPT I-cache on CPU60 Jul 6 23:44:42.182458 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 Jul 6 23:44:42.182466 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 Jul 6 23:44:42.182473 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182480 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] Jul 6 23:44:42.182489 kernel: Detected PIPT I-cache on CPU61 Jul 6 23:44:42.182496 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 Jul 6 23:44:42.182504 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 Jul 6 23:44:42.182511 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182519 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] Jul 6 23:44:42.182526 kernel: Detected PIPT I-cache on CPU62 Jul 6 23:44:42.182533 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 Jul 6 23:44:42.182541 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 Jul 6 23:44:42.182548 kernel: arch_timer: 
Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182556 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] Jul 6 23:44:42.182565 kernel: Detected PIPT I-cache on CPU63 Jul 6 23:44:42.182572 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 Jul 6 23:44:42.182580 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 Jul 6 23:44:42.182587 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182594 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] Jul 6 23:44:42.182602 kernel: Detected PIPT I-cache on CPU64 Jul 6 23:44:42.182609 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 Jul 6 23:44:42.182617 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 Jul 6 23:44:42.182624 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182633 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] Jul 6 23:44:42.182640 kernel: Detected PIPT I-cache on CPU65 Jul 6 23:44:42.182647 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 Jul 6 23:44:42.182655 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 Jul 6 23:44:42.182662 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182670 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] Jul 6 23:44:42.182677 kernel: Detected PIPT I-cache on CPU66 Jul 6 23:44:42.182684 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 Jul 6 23:44:42.182692 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 Jul 6 23:44:42.182701 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182708 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] Jul 6 23:44:42.182715 kernel: Detected PIPT I-cache on CPU67 Jul 6 23:44:42.182723 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 Jul 6 23:44:42.182730 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 Jul 6 23:44:42.182738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182745 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] Jul 6 23:44:42.182752 kernel: Detected PIPT I-cache on CPU68 Jul 6 23:44:42.182760 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 Jul 6 23:44:42.182767 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000c30000 Jul 6 23:44:42.182776 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182783 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] Jul 6 23:44:42.182791 kernel: Detected PIPT I-cache on CPU69 Jul 6 23:44:42.182798 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 Jul 6 23:44:42.182806 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 Jul 6 23:44:42.182813 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182820 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] Jul 6 23:44:42.182828 kernel: Detected PIPT I-cache on CPU70 Jul 6 23:44:42.182835 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 Jul 6 23:44:42.182844 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 Jul 6 23:44:42.182851 kernel: arch_timer: Enabling local workaround for ARM 
erratum 1418040 Jul 6 23:44:42.182858 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] Jul 6 23:44:42.182866 kernel: Detected PIPT I-cache on CPU71 Jul 6 23:44:42.182873 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 Jul 6 23:44:42.182880 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 Jul 6 23:44:42.182888 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182895 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] Jul 6 23:44:42.182902 kernel: Detected PIPT I-cache on CPU72 Jul 6 23:44:42.182910 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 Jul 6 23:44:42.182919 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 Jul 6 23:44:42.182926 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182933 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1] Jul 6 23:44:42.182941 kernel: Detected PIPT I-cache on CPU73 Jul 6 23:44:42.182949 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 Jul 6 23:44:42.182956 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 Jul 6 23:44:42.182964 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.182971 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] Jul 6 23:44:42.182978 kernel: Detected PIPT I-cache on CPU74 Jul 6 23:44:42.182987 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 Jul 6 23:44:42.182994 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 Jul 6 23:44:42.183002 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.183009 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] Jul 6 23:44:42.183016 kernel: Detected PIPT I-cache on CPU75 Jul 6 23:44:42.183024 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 Jul 6 23:44:42.183031 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 Jul 6 23:44:42.183038 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.183046 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] Jul 6 23:44:42.183053 kernel: Detected PIPT I-cache on CPU76 Jul 6 23:44:42.183062 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 Jul 6 23:44:42.183069 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 Jul 6 23:44:42.183077 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.183084 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] Jul 6 23:44:42.183091 kernel: Detected PIPT I-cache on CPU77 Jul 6 23:44:42.183098 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 Jul 6 23:44:42.183106 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 Jul 6 23:44:42.183113 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.183120 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] Jul 6 23:44:42.183129 kernel: Detected PIPT I-cache on CPU78 Jul 6 23:44:42.183136 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 Jul 6 23:44:42.183144 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 Jul 6 23:44:42.183152 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.183161 
kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] Jul 6 23:44:42.183169 kernel: Detected PIPT I-cache on CPU79 Jul 6 23:44:42.183176 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 Jul 6 23:44:42.183184 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 Jul 6 23:44:42.183191 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:44:42.183198 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] Jul 6 23:44:42.183207 kernel: smp: Brought up 1 node, 80 CPUs Jul 6 23:44:42.183214 kernel: SMP: Total of 80 processors activated. Jul 6 23:44:42.183222 kernel: CPU features: detected: 32-bit EL0 Support Jul 6 23:44:42.183229 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 6 23:44:42.183237 kernel: CPU features: detected: Common not Private translations Jul 6 23:44:42.183244 kernel: CPU features: detected: CRC32 instructions Jul 6 23:44:42.183252 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 6 23:44:42.183259 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 6 23:44:42.183266 kernel: CPU features: detected: LSE atomic instructions Jul 6 23:44:42.183275 kernel: CPU features: detected: Privileged Access Never Jul 6 23:44:42.183282 kernel: CPU features: detected: RAS Extension Support Jul 6 23:44:42.183290 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 6 23:44:42.183297 kernel: CPU: All CPU(s) started at EL2 Jul 6 23:44:42.183304 kernel: alternatives: applying system-wide alternatives Jul 6 23:44:42.183312 kernel: devtmpfs: initialized Jul 6 23:44:42.183319 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 6 23:44:42.183327 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 6 23:44:42.183334 kernel: pinctrl core: initialized pinctrl subsystem Jul 6 23:44:42.183343 kernel: SMBIOS 3.4.0 present. Jul 6 23:44:42.183350 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 Jul 6 23:44:42.183358 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 6 23:44:42.183365 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations Jul 6 23:44:42.183373 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 6 23:44:42.183380 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 6 23:44:42.183388 kernel: audit: initializing netlink subsys (disabled) Jul 6 23:44:42.183395 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 Jul 6 23:44:42.183404 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 6 23:44:42.183411 kernel: cpuidle: using governor menu Jul 6 23:44:42.183418 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 6 23:44:42.183426 kernel: ASID allocator initialised with 32768 entries Jul 6 23:44:42.183434 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 6 23:44:42.183441 kernel: Serial: AMBA PL011 UART driver Jul 6 23:44:42.183448 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 6 23:44:42.183456 kernel: Modules: 0 pages in range for non-PLT usage Jul 6 23:44:42.183463 kernel: Modules: 509264 pages in range for PLT usage Jul 6 23:44:42.183471 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 6 23:44:42.183479 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 6 23:44:42.183487 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 6 23:44:42.183494 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 6 23:44:42.183502 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 6 23:44:42.183509 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 6 23:44:42.183516 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 6 23:44:42.183524 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 6 23:44:42.183531 kernel: ACPI: Added _OSI(Module Device) Jul 6 23:44:42.183539 kernel: ACPI: Added _OSI(Processor Device) Jul 6 23:44:42.183547 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 6 23:44:42.183555 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded Jul 6 23:44:42.183562 kernel: ACPI: Interpreter enabled Jul 6 23:44:42.183569 kernel: ACPI: Using GIC for interrupt routing Jul 6 23:44:42.183577 kernel: ACPI: MCFG table detected, 8 entries Jul 6 23:44:42.183584 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 Jul 6 23:44:42.183592 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 Jul 6 23:44:42.183599 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 Jul 6 23:44:42.183607 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 Jul 6 23:44:42.183616 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 Jul 6 23:44:42.183623 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 Jul 6 23:44:42.183630 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 Jul 6 23:44:42.183638 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 Jul 6 23:44:42.183645 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA Jul 6 23:44:42.183653 kernel: printk: console [ttyAMA0] enabled Jul 6 23:44:42.183660 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA Jul 6 23:44:42.183668 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) Jul 6 23:44:42.183802 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:44:42.183874 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 6 23:44:42.183937 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] Jul 6 23:44:42.183999 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 6 23:44:42.184062 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 Jul 6 23:44:42.184124 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] Jul 6 23:44:42.184133 kernel: PCI host bridge to bus 000d:00 Jul 6 23:44:42.184219 kernel: pci_bus 
000d:00: root bus resource [mem 0x50000000-0x5fffffff window] Jul 6 23:44:42.184278 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] Jul 6 23:44:42.184336 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] Jul 6 23:44:42.184413 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 Jul 6 23:44:42.184488 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 Jul 6 23:44:42.184554 kernel: pci 000d:00:01.0: enabling Extended Tags Jul 6 23:44:42.184622 kernel: pci 000d:00:01.0: supports D1 D2 Jul 6 23:44:42.184687 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.184758 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 Jul 6 23:44:42.184824 kernel: pci 000d:00:02.0: supports D1 D2 Jul 6 23:44:42.184887 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.184959 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 Jul 6 23:44:42.185023 kernel: pci 000d:00:03.0: supports D1 D2 Jul 6 23:44:42.185090 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.185166 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 Jul 6 23:44:42.185231 kernel: pci 000d:00:04.0: supports D1 D2 Jul 6 23:44:42.185298 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.185308 kernel: acpiphp: Slot [1] registered Jul 6 23:44:42.185315 kernel: acpiphp: Slot [2] registered Jul 6 23:44:42.185322 kernel: acpiphp: Slot [3] registered Jul 6 23:44:42.185332 kernel: acpiphp: Slot [4] registered Jul 6 23:44:42.185389 kernel: pci_bus 000d:00: on NUMA node 0 Jul 6 23:44:42.185453 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 6 23:44:42.185517 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 6 23:44:42.185581 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 6 23:44:42.185646 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 6 23:44:42.185710 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 6 23:44:42.185775 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 6 23:44:42.185843 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 6 23:44:42.185911 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 6 23:44:42.185979 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 6 23:44:42.186044 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 6 23:44:42.186107 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 6 23:44:42.186175 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 6 23:44:42.186245 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] Jul 6 23:44:42.186309 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] Jul 6 23:44:42.186375 kernel: pci 000d:00:02.0: BAR 14: assigned [mem 0x50200000-0x503fffff] Jul 6 23:44:42.186439 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 
64bit pref] Jul 6 23:44:42.186504 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] Jul 6 23:44:42.186567 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] Jul 6 23:44:42.186631 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] Jul 6 23:44:42.186696 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] Jul 6 23:44:42.186762 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.186827 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.186890 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.186954 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.187018 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.187083 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.187147 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.187217 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.187282 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.187345 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.187410 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.187473 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.187537 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.187600 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.187664 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.187727 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.187794 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Jul 6 23:44:42.187858 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] Jul 6 23:44:42.187922 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] Jul 6 23:44:42.187987 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Jul 6 23:44:42.188050 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] Jul 6 23:44:42.188116 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] Jul 6 23:44:42.188185 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Jul 6 23:44:42.188250 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] Jul 6 23:44:42.188315 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] Jul 6 23:44:42.188377 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Jul 6 23:44:42.188441 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] Jul 6 23:44:42.188506 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] Jul 6 23:44:42.188569 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] Jul 6 23:44:42.188626 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] Jul 6 23:44:42.188696 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] Jul 6 23:44:42.188756 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] Jul 6 23:44:42.188825 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff] Jul 6 23:44:42.188889 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] Jul 6 23:44:42.188963 kernel: pci_bus 000d:03: resource 1 [mem 
0x50400000-0x505fffff] Jul 6 23:44:42.189027 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] Jul 6 23:44:42.189093 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] Jul 6 23:44:42.189153 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] Jul 6 23:44:42.189166 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) Jul 6 23:44:42.189237 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:44:42.189300 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 6 23:44:42.189366 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] Jul 6 23:44:42.189427 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 6 23:44:42.189489 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 Jul 6 23:44:42.189549 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] Jul 6 23:44:42.189559 kernel: PCI host bridge to bus 0000:00 Jul 6 23:44:42.189625 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] Jul 6 23:44:42.189682 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] Jul 6 23:44:42.189742 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 6 23:44:42.189815 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 Jul 6 23:44:42.189885 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 Jul 6 23:44:42.189950 kernel: pci 0000:00:01.0: enabling Extended Tags Jul 6 23:44:42.190013 kernel: pci 0000:00:01.0: supports D1 D2 Jul 6 23:44:42.190077 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.190148 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 Jul 6 23:44:42.190221 kernel: pci 0000:00:02.0: supports D1 D2 Jul 6 23:44:42.190284 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.190355 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 Jul 6 23:44:42.190420 kernel: pci 0000:00:03.0: supports D1 D2 Jul 6 23:44:42.190484 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.190556 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 Jul 6 23:44:42.190622 kernel: pci 0000:00:04.0: supports D1 D2 Jul 6 23:44:42.190687 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.190696 kernel: acpiphp: Slot [1-1] registered Jul 6 23:44:42.190703 kernel: acpiphp: Slot [2-1] registered Jul 6 23:44:42.190711 kernel: acpiphp: Slot [3-1] registered Jul 6 23:44:42.190718 kernel: acpiphp: Slot [4-1] registered Jul 6 23:44:42.190775 kernel: pci_bus 0000:00: on NUMA node 0 Jul 6 23:44:42.190840 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 6 23:44:42.190905 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 6 23:44:42.190971 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 6 23:44:42.191035 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 6 23:44:42.191098 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 6 23:44:42.191165 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 
add_align 100000 Jul 6 23:44:42.191230 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 6 23:44:42.191294 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 6 23:44:42.191358 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 6 23:44:42.191427 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 6 23:44:42.191491 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 6 23:44:42.191554 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 6 23:44:42.191618 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] Jul 6 23:44:42.191683 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Jul 6 23:44:42.191747 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] Jul 6 23:44:42.191809 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Jul 6 23:44:42.191876 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] Jul 6 23:44:42.191941 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Jul 6 23:44:42.192003 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] Jul 6 23:44:42.192067 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Jul 6 23:44:42.192130 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.192197 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.192260 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.192327 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.192390 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.192454 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.192520 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.192583 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.192647 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.192709 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.192772 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.192835 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.192901 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.192965 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.193029 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.193093 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.193156 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 6 23:44:42.193223 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] Jul 6 23:44:42.193287 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Jul 6 23:44:42.193351 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Jul 6 23:44:42.193417 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] Jul 6 23:44:42.193483 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit 
pref] Jul 6 23:44:42.193548 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Jul 6 23:44:42.193613 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] Jul 6 23:44:42.193677 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Jul 6 23:44:42.193741 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Jul 6 23:44:42.193805 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] Jul 6 23:44:42.193869 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Jul 6 23:44:42.193928 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] Jul 6 23:44:42.193987 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] Jul 6 23:44:42.194056 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] Jul 6 23:44:42.194116 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Jul 6 23:44:42.194186 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] Jul 6 23:44:42.194247 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Jul 6 23:44:42.194320 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] Jul 6 23:44:42.194380 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Jul 6 23:44:42.194449 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] Jul 6 23:44:42.194509 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Jul 6 23:44:42.194519 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) Jul 6 23:44:42.194588 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:44:42.194651 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 6 23:44:42.194714 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] Jul 6 23:44:42.194777 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 6 23:44:42.194839 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 Jul 6 23:44:42.194900 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] Jul 6 23:44:42.194910 kernel: PCI host bridge to bus 0005:00 Jul 6 23:44:42.194974 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] Jul 6 23:44:42.195032 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] Jul 6 23:44:42.195088 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] Jul 6 23:44:42.195169 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 Jul 6 23:44:42.195242 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 Jul 6 23:44:42.195307 kernel: pci 0005:00:01.0: supports D1 D2 Jul 6 23:44:42.195370 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.195441 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 Jul 6 23:44:42.195504 kernel: pci 0005:00:03.0: supports D1 D2 Jul 6 23:44:42.195572 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.195641 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 Jul 6 23:44:42.195705 kernel: pci 0005:00:05.0: supports D1 D2 Jul 6 23:44:42.195769 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.195840 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 Jul 6 23:44:42.195904 kernel: pci 0005:00:07.0: supports D1 D2 Jul 6 23:44:42.195968 kernel: pci 
0005:00:07.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.195979 kernel: acpiphp: Slot [1-2] registered Jul 6 23:44:42.195987 kernel: acpiphp: Slot [2-2] registered Jul 6 23:44:42.196060 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 Jul 6 23:44:42.196129 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] Jul 6 23:44:42.196198 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] Jul 6 23:44:42.196272 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 Jul 6 23:44:42.196338 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] Jul 6 23:44:42.196404 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] Jul 6 23:44:42.196467 kernel: pci_bus 0005:00: on NUMA node 0 Jul 6 23:44:42.196532 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 6 23:44:42.196598 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 6 23:44:42.196662 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 6 23:44:42.196729 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 6 23:44:42.196793 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 6 23:44:42.196861 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 6 23:44:42.196927 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 6 23:44:42.196991 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 6 23:44:42.197055 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jul 6 23:44:42.197123 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 6 23:44:42.197192 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 6 23:44:42.197255 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 Jul 6 23:44:42.197322 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] Jul 6 23:44:42.197385 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Jul 6 23:44:42.197449 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] Jul 6 23:44:42.197514 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Jul 6 23:44:42.197577 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] Jul 6 23:44:42.197653 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Jul 6 23:44:42.197718 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] Jul 6 23:44:42.197784 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Jul 6 23:44:42.197849 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.197913 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.197982 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.198053 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.198117 kernel: pci 0005:00:05.0: BAR 13: no space for 
[io size 0x1000] Jul 6 23:44:42.198184 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.198250 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.198312 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.198379 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.198444 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.198508 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.198572 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.198634 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.198698 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.198760 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.198824 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.198886 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Jul 6 23:44:42.198953 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] Jul 6 23:44:42.199018 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Jul 6 23:44:42.199081 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Jul 6 23:44:42.199145 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] Jul 6 23:44:42.199215 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Jul 6 23:44:42.199285 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] Jul 6 23:44:42.199353 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] Jul 6 23:44:42.199419 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Jul 6 23:44:42.199482 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] Jul 6 23:44:42.199548 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Jul 6 23:44:42.199616 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] Jul 6 23:44:42.199682 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] Jul 6 23:44:42.199748 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Jul 6 23:44:42.199814 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] Jul 6 23:44:42.199879 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Jul 6 23:44:42.199938 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] Jul 6 23:44:42.199996 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] Jul 6 23:44:42.200066 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] Jul 6 23:44:42.200124 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Jul 6 23:44:42.200207 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] Jul 6 23:44:42.200270 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Jul 6 23:44:42.200338 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] Jul 6 23:44:42.200397 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Jul 6 23:44:42.200464 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] Jul 6 23:44:42.200523 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Jul 6 23:44:42.200535 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) Jul 6 23:44:42.200606 kernel: acpi PNP0A08:03: _OSC: 
OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:44:42.200670 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 6 23:44:42.200731 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] Jul 6 23:44:42.200793 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 6 23:44:42.200854 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 Jul 6 23:44:42.200915 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] Jul 6 23:44:42.200927 kernel: PCI host bridge to bus 0003:00 Jul 6 23:44:42.200992 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] Jul 6 23:44:42.201050 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] Jul 6 23:44:42.201107 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] Jul 6 23:44:42.201182 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 Jul 6 23:44:42.201256 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 Jul 6 23:44:42.201324 kernel: pci 0003:00:01.0: supports D1 D2 Jul 6 23:44:42.201389 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.201459 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 Jul 6 23:44:42.201524 kernel: pci 0003:00:03.0: supports D1 D2 Jul 6 23:44:42.201589 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.201659 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 Jul 6 23:44:42.201724 kernel: pci 0003:00:05.0: supports D1 D2 Jul 6 23:44:42.201788 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.201798 kernel: acpiphp: Slot [1-3] registered Jul 6 23:44:42.201805 kernel: acpiphp: Slot [2-3] registered Jul 6 23:44:42.201879 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 Jul 6 23:44:42.201945 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] Jul 6 23:44:42.202013 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] Jul 6 23:44:42.202079 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] Jul 6 23:44:42.202146 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold Jul 6 23:44:42.202224 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] Jul 6 23:44:42.202293 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) Jul 6 23:44:42.202360 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] Jul 6 23:44:42.202426 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) Jul 6 23:44:42.202492 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) Jul 6 23:44:42.202567 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 Jul 6 23:44:42.202632 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] Jul 6 23:44:42.202702 kernel: pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] Jul 6 23:44:42.202772 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] Jul 6 23:44:42.202842 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold Jul 6 23:44:42.202909 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] Jul 6 23:44:42.202977 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 
0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) Jul 6 23:44:42.203044 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] Jul 6 23:44:42.203111 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) Jul 6 23:44:42.203178 kernel: pci_bus 0003:00: on NUMA node 0 Jul 6 23:44:42.203245 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 6 23:44:42.203310 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 6 23:44:42.203378 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 6 23:44:42.203442 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 6 23:44:42.203507 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 6 23:44:42.203570 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 6 23:44:42.203638 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 Jul 6 23:44:42.203702 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 Jul 6 23:44:42.203767 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Jul 6 23:44:42.203831 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] Jul 6 23:44:42.203907 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] Jul 6 23:44:42.203975 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] Jul 6 23:44:42.204038 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] Jul 6 23:44:42.204103 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] Jul 6 23:44:42.204173 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.204238 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.204303 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.204367 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.204431 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.204494 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.204559 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.204622 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.204690 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.204755 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.204818 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.204882 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.204945 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Jul 6 23:44:42.205009 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] Jul 6 23:44:42.205075 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] Jul 6 23:44:42.205140 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Jul 6 23:44:42.205209 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] Jul 6 
23:44:42.205277 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] Jul 6 23:44:42.205346 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] Jul 6 23:44:42.205413 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] Jul 6 23:44:42.205480 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] Jul 6 23:44:42.205546 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] Jul 6 23:44:42.205617 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] Jul 6 23:44:42.205684 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] Jul 6 23:44:42.205749 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] Jul 6 23:44:42.205816 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] Jul 6 23:44:42.205880 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] Jul 6 23:44:42.205947 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] Jul 6 23:44:42.206013 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] Jul 6 23:44:42.206081 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] Jul 6 23:44:42.206147 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] Jul 6 23:44:42.206215 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] Jul 6 23:44:42.206283 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] Jul 6 23:44:42.206349 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] Jul 6 23:44:42.206413 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] Jul 6 23:44:42.206479 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] Jul 6 23:44:42.206542 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] Jul 6 23:44:42.206605 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 6 23:44:42.206662 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] Jul 6 23:44:42.206720 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] Jul 6 23:44:42.206797 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] Jul 6 23:44:42.206858 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] Jul 6 23:44:42.206927 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] Jul 6 23:44:42.206989 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] Jul 6 23:44:42.207056 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] Jul 6 23:44:42.207115 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] Jul 6 23:44:42.207125 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) Jul 6 23:44:42.207201 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:44:42.207264 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 6 23:44:42.207330 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] Jul 6 23:44:42.207393 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 6 23:44:42.207455 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 Jul 6 23:44:42.207517 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] Jul 6 23:44:42.207527 
kernel: PCI host bridge to bus 000c:00 Jul 6 23:44:42.207591 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] Jul 6 23:44:42.207650 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] Jul 6 23:44:42.207708 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] Jul 6 23:44:42.207780 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 Jul 6 23:44:42.207852 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 Jul 6 23:44:42.207917 kernel: pci 000c:00:01.0: enabling Extended Tags Jul 6 23:44:42.207982 kernel: pci 000c:00:01.0: supports D1 D2 Jul 6 23:44:42.208046 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.208117 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 Jul 6 23:44:42.208188 kernel: pci 000c:00:02.0: supports D1 D2 Jul 6 23:44:42.208253 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.208324 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 Jul 6 23:44:42.208389 kernel: pci 000c:00:03.0: supports D1 D2 Jul 6 23:44:42.208454 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.208524 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 Jul 6 23:44:42.208589 kernel: pci 000c:00:04.0: supports D1 D2 Jul 6 23:44:42.208655 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.208665 kernel: acpiphp: Slot [1-4] registered Jul 6 23:44:42.208673 kernel: acpiphp: Slot [2-4] registered Jul 6 23:44:42.208681 kernel: acpiphp: Slot [3-2] registered Jul 6 23:44:42.208688 kernel: acpiphp: Slot [4-2] registered Jul 6 23:44:42.208747 kernel: pci_bus 000c:00: on NUMA node 0 Jul 6 23:44:42.208811 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 6 23:44:42.208877 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 6 23:44:42.208943 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 6 23:44:42.209008 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 6 23:44:42.209072 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 6 23:44:42.209137 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 6 23:44:42.209208 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 6 23:44:42.209275 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 6 23:44:42.209341 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 6 23:44:42.209410 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 6 23:44:42.209475 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 6 23:44:42.209539 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 6 23:44:42.209604 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] Jul 6 23:44:42.209669 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] Jul 6 23:44:42.209735 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] Jul 6 
23:44:42.209799 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] Jul 6 23:44:42.209865 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] Jul 6 23:44:42.209929 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] Jul 6 23:44:42.209992 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] Jul 6 23:44:42.210056 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] Jul 6 23:44:42.210120 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.210226 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.210291 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.210353 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.210420 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.210482 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.210544 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.210606 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.210669 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.210730 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.210793 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.210855 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.210917 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.210982 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.211045 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.211108 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.211174 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Jul 6 23:44:42.211237 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] Jul 6 23:44:42.211301 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] Jul 6 23:44:42.211363 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Jul 6 23:44:42.211429 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] Jul 6 23:44:42.211492 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] Jul 6 23:44:42.211555 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Jul 6 23:44:42.211619 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] Jul 6 23:44:42.211682 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] Jul 6 23:44:42.211747 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Jul 6 23:44:42.211810 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] Jul 6 23:44:42.211877 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] Jul 6 23:44:42.211936 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] Jul 6 23:44:42.211996 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] Jul 6 23:44:42.212067 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] Jul 6 23:44:42.212127 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] Jul 6 23:44:42.212207 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] Jul 6 23:44:42.212270 kernel: pci_bus 000c:02: resource 2 [mem 
0x300000200000-0x3000003fffff 64bit pref] Jul 6 23:44:42.212338 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] Jul 6 23:44:42.212397 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] Jul 6 23:44:42.212465 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] Jul 6 23:44:42.212525 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] Jul 6 23:44:42.212535 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) Jul 6 23:44:42.212608 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:44:42.212675 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 6 23:44:42.212738 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] Jul 6 23:44:42.212800 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 6 23:44:42.212861 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 Jul 6 23:44:42.212924 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] Jul 6 23:44:42.212934 kernel: PCI host bridge to bus 0002:00 Jul 6 23:44:42.213001 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] Jul 6 23:44:42.213060 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] Jul 6 23:44:42.213118 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] Jul 6 23:44:42.213193 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 Jul 6 23:44:42.213267 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 Jul 6 23:44:42.213331 kernel: pci 0002:00:01.0: supports D1 D2 Jul 6 23:44:42.213397 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.213469 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 Jul 6 23:44:42.213536 kernel: pci 0002:00:03.0: supports D1 D2 Jul 6 23:44:42.213601 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.213672 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 Jul 6 23:44:42.213736 kernel: pci 0002:00:05.0: supports D1 D2 Jul 6 23:44:42.213800 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.213872 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 Jul 6 23:44:42.213939 kernel: pci 0002:00:07.0: supports D1 D2 Jul 6 23:44:42.214003 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.214014 kernel: acpiphp: Slot [1-5] registered Jul 6 23:44:42.214022 kernel: acpiphp: Slot [2-5] registered Jul 6 23:44:42.214030 kernel: acpiphp: Slot [3-3] registered Jul 6 23:44:42.214037 kernel: acpiphp: Slot [4-3] registered Jul 6 23:44:42.214094 kernel: pci_bus 0002:00: on NUMA node 0 Jul 6 23:44:42.214161 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 6 23:44:42.214230 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 6 23:44:42.214295 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 6 23:44:42.214363 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 6 23:44:42.214426 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 6 23:44:42.214493 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to 
[bus 02] add_size 200000 add_align 100000 Jul 6 23:44:42.214558 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 6 23:44:42.214623 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 6 23:44:42.214687 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 6 23:44:42.214751 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 6 23:44:42.214815 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 6 23:44:42.214879 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 6 23:44:42.214946 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] Jul 6 23:44:42.215010 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] Jul 6 23:44:42.215074 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] Jul 6 23:44:42.215137 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] Jul 6 23:44:42.215205 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] Jul 6 23:44:42.215270 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] Jul 6 23:44:42.215334 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] Jul 6 23:44:42.215400 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] Jul 6 23:44:42.215464 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.215528 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.215592 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.215658 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.215722 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.215785 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.215849 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.215914 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.215978 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.216041 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.216105 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.216171 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.216238 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.216303 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.216366 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.216431 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.216494 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Jul 6 23:44:42.216562 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] Jul 6 23:44:42.216626 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] Jul 6 23:44:42.216691 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Jul 6 23:44:42.216755 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] Jul 6 23:44:42.216819 kernel: pci 0002:00:03.0: bridge window [mem 
0x200000200000-0x2000003fffff 64bit pref] Jul 6 23:44:42.216884 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Jul 6 23:44:42.216950 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] Jul 6 23:44:42.217014 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] Jul 6 23:44:42.217078 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Jul 6 23:44:42.217146 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] Jul 6 23:44:42.217215 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] Jul 6 23:44:42.217276 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] Jul 6 23:44:42.217339 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] Jul 6 23:44:42.217410 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] Jul 6 23:44:42.217472 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] Jul 6 23:44:42.217539 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] Jul 6 23:44:42.217602 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] Jul 6 23:44:42.217676 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] Jul 6 23:44:42.217739 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] Jul 6 23:44:42.217805 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] Jul 6 23:44:42.217865 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] Jul 6 23:44:42.217875 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) Jul 6 23:44:42.217945 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:44:42.218007 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 6 23:44:42.218069 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] Jul 6 23:44:42.218133 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 6 23:44:42.218200 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 Jul 6 23:44:42.218263 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] Jul 6 23:44:42.218273 kernel: PCI host bridge to bus 0001:00 Jul 6 23:44:42.218339 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] Jul 6 23:44:42.218397 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] Jul 6 23:44:42.218455 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] Jul 6 23:44:42.218528 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 Jul 6 23:44:42.218601 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 Jul 6 23:44:42.218667 kernel: pci 0001:00:01.0: enabling Extended Tags Jul 6 23:44:42.218731 kernel: pci 0001:00:01.0: supports D1 D2 Jul 6 23:44:42.218817 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.218889 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 Jul 6 23:44:42.218957 kernel: pci 0001:00:02.0: supports D1 D2 Jul 6 23:44:42.219022 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.219093 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 Jul 6 23:44:42.219414 kernel: pci 0001:00:03.0: supports D1 D2 Jul 6 23:44:42.219502 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.219576 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 
0x060400 Jul 6 23:44:42.219645 kernel: pci 0001:00:04.0: supports D1 D2 Jul 6 23:44:42.219708 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.219718 kernel: acpiphp: Slot [1-6] registered Jul 6 23:44:42.219790 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 Jul 6 23:44:42.219855 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] Jul 6 23:44:42.219921 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] Jul 6 23:44:42.219985 kernel: pci 0001:01:00.0: PME# supported from D3cold Jul 6 23:44:42.220050 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 6 23:44:42.220124 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 Jul 6 23:44:42.220196 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] Jul 6 23:44:42.220261 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] Jul 6 23:44:42.220327 kernel: pci 0001:01:00.1: PME# supported from D3cold Jul 6 23:44:42.220337 kernel: acpiphp: Slot [2-6] registered Jul 6 23:44:42.220345 kernel: acpiphp: Slot [3-4] registered Jul 6 23:44:42.220353 kernel: acpiphp: Slot [4-4] registered Jul 6 23:44:42.220410 kernel: pci_bus 0001:00: on NUMA node 0 Jul 6 23:44:42.220479 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 6 23:44:42.220543 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 6 23:44:42.220607 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 6 23:44:42.220673 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 6 23:44:42.220737 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 6 23:44:42.220803 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 6 23:44:42.220865 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 6 23:44:42.220933 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 6 23:44:42.220996 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 6 23:44:42.221059 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 6 23:44:42.221124 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] Jul 6 23:44:42.221191 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] Jul 6 23:44:42.221255 kernel: pci 0001:00:02.0: BAR 14: assigned [mem 0x60200000-0x603fffff] Jul 6 23:44:42.221318 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] Jul 6 23:44:42.221385 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] Jul 6 23:44:42.221448 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] Jul 6 23:44:42.221513 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] Jul 6 23:44:42.221576 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] Jul 6 23:44:42.221639 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.221703 
kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.221765 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.221832 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.221898 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.221961 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.222024 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.222087 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.222150 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.222217 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.222280 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.222343 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.222409 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.222471 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.222536 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.222598 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.222665 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] Jul 6 23:44:42.222732 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] Jul 6 23:44:42.222796 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] Jul 6 23:44:42.222862 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] Jul 6 23:44:42.222928 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Jul 6 23:44:42.222991 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Jul 6 23:44:42.223053 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] Jul 6 23:44:42.223117 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Jul 6 23:44:42.223185 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] Jul 6 23:44:42.223248 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] Jul 6 23:44:42.223315 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Jul 6 23:44:42.223378 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] Jul 6 23:44:42.223442 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] Jul 6 23:44:42.223505 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Jul 6 23:44:42.223569 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] Jul 6 23:44:42.223633 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] Jul 6 23:44:42.223694 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window] Jul 6 23:44:42.223751 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] Jul 6 23:44:42.223826 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] Jul 6 23:44:42.223887 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] Jul 6 23:44:42.223953 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] Jul 6 23:44:42.224013 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] Jul 6 23:44:42.224080 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] Jul 6 23:44:42.224143 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 
64bit pref] Jul 6 23:44:42.224392 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] Jul 6 23:44:42.224456 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] Jul 6 23:44:42.224466 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) Jul 6 23:44:42.224537 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:44:42.224599 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 6 23:44:42.224664 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] Jul 6 23:44:42.224725 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 6 23:44:42.224786 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 Jul 6 23:44:42.224846 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] Jul 6 23:44:42.224856 kernel: PCI host bridge to bus 0004:00 Jul 6 23:44:42.224919 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] Jul 6 23:44:42.224976 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] Jul 6 23:44:42.225034 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] Jul 6 23:44:42.225105 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 Jul 6 23:44:42.225182 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 Jul 6 23:44:42.225247 kernel: pci 0004:00:01.0: supports D1 D2 Jul 6 23:44:42.225312 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.225381 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 Jul 6 23:44:42.225446 kernel: pci 0004:00:03.0: supports D1 D2 Jul 6 23:44:42.225512 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.225583 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 Jul 6 23:44:42.225648 kernel: pci 0004:00:05.0: supports D1 D2 Jul 6 23:44:42.225710 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot Jul 6 23:44:42.225782 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 Jul 6 23:44:42.225847 kernel: pci 0004:01:00.0: enabling Extended Tags Jul 6 23:44:42.225915 kernel: pci 0004:01:00.0: supports D1 D2 Jul 6 23:44:42.225983 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 6 23:44:42.226058 kernel: pci_bus 0004:02: extended config space not accessible Jul 6 23:44:42.226135 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 Jul 6 23:44:42.226206 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] Jul 6 23:44:42.226274 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] Jul 6 23:44:42.226343 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] Jul 6 23:44:42.226409 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb Jul 6 23:44:42.226479 kernel: pci 0004:02:00.0: supports D1 D2 Jul 6 23:44:42.226546 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 6 23:44:42.226618 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 Jul 6 23:44:42.226684 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] Jul 6 23:44:42.226749 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold Jul 6 23:44:42.226807 kernel: pci_bus 0004:00: on NUMA node 0 Jul 6 23:44:42.226870 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 Jul 6 23:44:42.226936 kernel: pci 0004:00:03.0: bridge window [io 
0x1000-0x0fff] to [bus 03] add_size 1000 Jul 6 23:44:42.226999 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 6 23:44:42.227063 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jul 6 23:44:42.227128 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 6 23:44:42.227195 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 6 23:44:42.227259 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 6 23:44:42.227322 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] Jul 6 23:44:42.227388 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] Jul 6 23:44:42.227450 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] Jul 6 23:44:42.227513 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] Jul 6 23:44:42.227577 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] Jul 6 23:44:42.227640 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] Jul 6 23:44:42.227702 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.227765 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.227831 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.227893 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.227956 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.228019 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.228083 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.228146 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.228212 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.228275 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.228338 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.228404 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.228469 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] Jul 6 23:44:42.228534 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] Jul 6 23:44:42.228600 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] Jul 6 23:44:42.228667 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] Jul 6 23:44:42.228735 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] Jul 6 23:44:42.228802 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] Jul 6 23:44:42.228869 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] Jul 6 23:44:42.228937 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Jul 6 23:44:42.229001 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] Jul 6 23:44:42.229064 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Jul 6 23:44:42.229127 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] Jul 6 23:44:42.229194 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] Jul 6 23:44:42.229260 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] Jul 6 
23:44:42.229323 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Jul 6 23:44:42.229386 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] Jul 6 23:44:42.229451 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] Jul 6 23:44:42.229515 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Jul 6 23:44:42.229578 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] Jul 6 23:44:42.229642 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] Jul 6 23:44:42.229699 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 6 23:44:42.229756 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] Jul 6 23:44:42.229815 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] Jul 6 23:44:42.229884 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] Jul 6 23:44:42.229944 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] Jul 6 23:44:42.230006 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] Jul 6 23:44:42.230072 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] Jul 6 23:44:42.230130 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] Jul 6 23:44:42.230200 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] Jul 6 23:44:42.230262 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] Jul 6 23:44:42.230272 kernel: iommu: Default domain type: Translated Jul 6 23:44:42.230280 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 6 23:44:42.230288 kernel: efivars: Registered efivars operations Jul 6 23:44:42.230354 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device Jul 6 23:44:42.230422 kernel: pci 0004:02:00.0: vgaarb: bridge control possible Jul 6 23:44:42.230489 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none Jul 6 23:44:42.230501 kernel: vgaarb: loaded Jul 6 23:44:42.230509 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 6 23:44:42.230517 kernel: VFS: Disk quotas dquot_6.6.0 Jul 6 23:44:42.230525 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 6 23:44:42.230533 kernel: pnp: PnP ACPI init Jul 6 23:44:42.230602 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved Jul 6 23:44:42.230663 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved Jul 6 23:44:42.230724 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved Jul 6 23:44:42.230782 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved Jul 6 23:44:42.230840 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved Jul 6 23:44:42.230899 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved Jul 6 23:44:42.230957 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved Jul 6 23:44:42.231015 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved Jul 6 23:44:42.231025 kernel: pnp: PnP ACPI: found 1 devices Jul 6 23:44:42.231033 kernel: NET: Registered PF_INET protocol family Jul 6 23:44:42.231044 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 6 23:44:42.231052 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) Jul 
6 23:44:42.231060 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 6 23:44:42.231068 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 6 23:44:42.231077 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 6 23:44:42.231084 kernel: TCP: Hash tables configured (established 524288 bind 65536) Jul 6 23:44:42.231092 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 6 23:44:42.231100 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 6 23:44:42.231110 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 6 23:44:42.231181 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes Jul 6 23:44:42.231192 kernel: kvm [1]: IPA Size Limit: 48 bits Jul 6 23:44:42.231200 kernel: kvm [1]: GICv3: no GICV resource entry Jul 6 23:44:42.231208 kernel: kvm [1]: disabling GICv2 emulation Jul 6 23:44:42.231216 kernel: kvm [1]: GIC system register CPU interface enabled Jul 6 23:44:42.231224 kernel: kvm [1]: vgic interrupt IRQ9 Jul 6 23:44:42.231232 kernel: kvm [1]: VHE mode initialized successfully Jul 6 23:44:42.231239 kernel: Initialise system trusted keyrings Jul 6 23:44:42.231249 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 Jul 6 23:44:42.231257 kernel: Key type asymmetric registered Jul 6 23:44:42.231264 kernel: Asymmetric key parser 'x509' registered Jul 6 23:44:42.231272 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 6 23:44:42.231280 kernel: io scheduler mq-deadline registered Jul 6 23:44:42.231288 kernel: io scheduler kyber registered Jul 6 23:44:42.231296 kernel: io scheduler bfq registered Jul 6 23:44:42.231304 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 6 23:44:42.231312 kernel: ACPI: button: Power Button [PWRB] Jul 6 23:44:42.231320 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). 
Jul 6 23:44:42.231329 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:44:42.231400 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 Jul 6 23:44:42.231463 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) Jul 6 23:44:42.231522 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Jul 6 23:44:42.231582 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq Jul 6 23:44:42.231641 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq Jul 6 23:44:42.231703 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq Jul 6 23:44:42.231771 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 Jul 6 23:44:42.231832 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) Jul 6 23:44:42.231891 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Jul 6 23:44:42.231949 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq Jul 6 23:44:42.232008 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq Jul 6 23:44:42.232066 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq Jul 6 23:44:42.232135 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 Jul 6 23:44:42.232198 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) Jul 6 23:44:42.232258 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Jul 6 23:44:42.232317 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq Jul 6 23:44:42.232375 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq Jul 6 23:44:42.232434 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq Jul 6 23:44:42.232499 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 Jul 6 23:44:42.232562 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) Jul 6 23:44:42.232621 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Jul 6 23:44:42.232680 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq Jul 6 23:44:42.232738 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq Jul 6 23:44:42.232797 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq Jul 6 23:44:42.232872 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 Jul 6 23:44:42.232934 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) Jul 6 23:44:42.232993 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Jul 6 23:44:42.233052 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq Jul 6 23:44:42.233111 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq Jul 6 23:44:42.233172 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq Jul 6 23:44:42.233240 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 Jul 6 23:44:42.233302 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) Jul 6 23:44:42.233361 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Jul 6 23:44:42.233420 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq Jul 6 23:44:42.233478 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 131072 entries for evtq Jul 6 23:44:42.233536 kernel: arm-smmu-v3 
arm-smmu-v3.5.auto: allocated 262144 entries for priq Jul 6 23:44:42.233602 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 Jul 6 23:44:42.233662 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) Jul 6 23:44:42.233723 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Jul 6 23:44:42.233782 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq Jul 6 23:44:42.233844 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq Jul 6 23:44:42.233902 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq Jul 6 23:44:42.233969 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 Jul 6 23:44:42.234028 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) Jul 6 23:44:42.234090 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Jul 6 23:44:42.234150 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq Jul 6 23:44:42.234375 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq Jul 6 23:44:42.234437 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq Jul 6 23:44:42.234447 kernel: thunder_xcv, ver 1.0 Jul 6 23:44:42.234455 kernel: thunder_bgx, ver 1.0 Jul 6 23:44:42.234463 kernel: nicpf, ver 1.0 Jul 6 23:44:42.234471 kernel: nicvf, ver 1.0 Jul 6 23:44:42.234541 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 6 23:44:42.234601 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:44:40 UTC (1751845480) Jul 6 23:44:42.234611 kernel: efifb: probing for efifb Jul 6 23:44:42.234619 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k Jul 6 23:44:42.234627 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jul 6 23:44:42.234635 kernel: efifb: scrolling: redraw Jul 6 23:44:42.234643 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 6 23:44:42.234651 kernel: Console: switching to colour frame buffer device 100x37 Jul 6 23:44:42.234661 kernel: fb0: EFI VGA frame buffer device Jul 6 23:44:42.234669 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 Jul 6 23:44:42.234677 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 6 23:44:42.234685 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 6 23:44:42.234693 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 6 23:44:42.234701 kernel: watchdog: Hard watchdog permanently disabled Jul 6 23:44:42.234710 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:44:42.234718 kernel: Segment Routing with IPv6 Jul 6 23:44:42.234726 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:44:42.234735 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:44:42.234743 kernel: Key type dns_resolver registered Jul 6 23:44:42.234750 kernel: registered taskstats version 1 Jul 6 23:44:42.234758 kernel: Loading compiled-in X.509 certificates Jul 6 23:44:42.234766 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: b86e6d3bec2e587f2e5c37def91c4582416a83e3' Jul 6 23:44:42.234774 kernel: Key type .fscrypt registered Jul 6 23:44:42.234781 kernel: Key type fscrypt-provisioning registered Jul 6 23:44:42.234789 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 6 23:44:42.234797 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:44:42.234806 kernel: ima: No architecture policies found Jul 6 23:44:42.234814 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 6 23:44:42.234883 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 Jul 6 23:44:42.234947 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 Jul 6 23:44:42.235014 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 Jul 6 23:44:42.235077 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 Jul 6 23:44:42.235143 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 Jul 6 23:44:42.235211 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 Jul 6 23:44:42.235277 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 Jul 6 23:44:42.235343 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 Jul 6 23:44:42.235409 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 Jul 6 23:44:42.235473 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 Jul 6 23:44:42.235541 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 Jul 6 23:44:42.235605 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 Jul 6 23:44:42.235670 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 Jul 6 23:44:42.235734 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 Jul 6 23:44:42.235799 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 Jul 6 23:44:42.235866 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 Jul 6 23:44:42.235931 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 Jul 6 23:44:42.235995 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 Jul 6 23:44:42.236060 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 Jul 6 23:44:42.236123 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 Jul 6 23:44:42.236192 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 Jul 6 23:44:42.236256 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 Jul 6 23:44:42.236321 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 Jul 6 23:44:42.236387 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 Jul 6 23:44:42.236453 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 Jul 6 23:44:42.236516 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 Jul 6 23:44:42.236582 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 Jul 6 23:44:42.236646 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 Jul 6 23:44:42.236711 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 Jul 6 23:44:42.236775 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 Jul 6 23:44:42.236841 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 Jul 6 23:44:42.236907 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 Jul 6 23:44:42.236974 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 Jul 6 23:44:42.237038 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 Jul 6 23:44:42.237103 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 Jul 6 23:44:42.237172 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 Jul 6 23:44:42.237239 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 Jul 6 23:44:42.237303 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 Jul 6 23:44:42.237369 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 Jul 6 23:44:42.237433 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 Jul 6 23:44:42.237502 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 Jul 6 23:44:42.237566 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 Jul 6 23:44:42.237632 kernel: pcieport 0002:00:05.0: Adding to iommu group 
21 Jul 6 23:44:42.237696 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 Jul 6 23:44:42.237761 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 Jul 6 23:44:42.237825 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 Jul 6 23:44:42.237888 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 Jul 6 23:44:42.237954 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 Jul 6 23:44:42.238020 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 Jul 6 23:44:42.238084 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 Jul 6 23:44:42.238150 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 Jul 6 23:44:42.238320 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 Jul 6 23:44:42.238388 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 Jul 6 23:44:42.238451 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 Jul 6 23:44:42.238517 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 Jul 6 23:44:42.238580 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 Jul 6 23:44:42.238649 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 Jul 6 23:44:42.238712 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 Jul 6 23:44:42.238777 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 Jul 6 23:44:42.238840 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 Jul 6 23:44:42.238907 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 Jul 6 23:44:42.238917 kernel: clk: Disabling unused clocks Jul 6 23:44:42.238925 kernel: Freeing unused kernel memory: 38336K Jul 6 23:44:42.238933 kernel: Run /init as init process Jul 6 23:44:42.238943 kernel: with arguments: Jul 6 23:44:42.238951 kernel: /init Jul 6 23:44:42.238959 kernel: with environment: Jul 6 23:44:42.238966 kernel: HOME=/ Jul 6 23:44:42.238974 kernel: TERM=linux Jul 6 23:44:42.238981 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:44:42.238990 systemd[1]: Successfully made /usr/ read-only. Jul 6 23:44:42.239001 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:44:42.239012 systemd[1]: Detected architecture arm64. Jul 6 23:44:42.239020 systemd[1]: Running in initrd. Jul 6 23:44:42.239028 systemd[1]: No hostname configured, using default hostname. Jul 6 23:44:42.239036 systemd[1]: Hostname set to . Jul 6 23:44:42.239044 systemd[1]: Initializing machine ID from random generator. Jul 6 23:44:42.239052 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:44:42.239060 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:44:42.239069 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:44:42.239080 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 6 23:44:42.239088 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:44:42.239097 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:44:42.239105 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jul 6 23:44:42.239115 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:44:42.239123 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:44:42.239131 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:44:42.239141 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:44:42.239149 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:44:42.239157 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:44:42.239169 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:44:42.239177 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:44:42.239186 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:44:42.239194 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:44:42.239202 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:44:42.239212 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 6 23:44:42.239220 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:44:42.239228 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:44:42.239237 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:44:42.239245 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:44:42.239253 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:44:42.239261 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:44:42.239269 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:44:42.239277 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:44:42.239287 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:44:42.239296 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:44:42.239327 systemd-journald[902]: Collecting audit messages is disabled. Jul 6 23:44:42.239347 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:44:42.239358 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:44:42.239366 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:44:42.239374 kernel: Bridge firewalling registered Jul 6 23:44:42.239383 systemd-journald[902]: Journal started Jul 6 23:44:42.239401 systemd-journald[902]: Runtime Journal (/run/log/journal/0fe15f145b4e4d11b78590d11040322f) is 8M, max 4G, 3.9G free. Jul 6 23:44:42.198292 systemd-modules-load[906]: Inserted module 'overlay' Jul 6 23:44:42.274732 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:44:42.222025 systemd-modules-load[906]: Inserted module 'br_netfilter' Jul 6 23:44:42.280442 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:44:42.291332 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:44:42.302291 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:44:42.313155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:44:42.338273 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jul 6 23:44:42.344514 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:44:42.361767 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:44:42.384965 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:44:42.403527 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:44:42.421412 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:44:42.427327 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:44:42.438797 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:44:42.466262 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 6 23:44:42.475907 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:44:42.488581 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:44:42.514533 dracut-cmdline[949]: dracut-dracut-053 Jul 6 23:44:42.514533 dracut-cmdline[949]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=ca8feb1f79a67c117068f051b5f829d3e40170c022cd5834bd6789cba9641479 Jul 6 23:44:42.502263 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:44:42.516241 systemd-resolved[951]: Positive Trust Anchors: Jul 6 23:44:42.516250 systemd-resolved[951]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:44:42.516280 systemd-resolved[951]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:44:42.531202 systemd-resolved[951]: Defaulting to hostname 'linux'. Jul 6 23:44:42.532685 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:44:42.567825 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:44:42.678375 kernel: SCSI subsystem initialized Jul 6 23:44:42.690167 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:44:42.708171 kernel: iscsi: registered transport (tcp) Jul 6 23:44:42.736089 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:44:42.736116 kernel: QLogic iSCSI HBA Driver Jul 6 23:44:42.779769 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:44:42.803286 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:44:42.848832 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 6 23:44:42.848866 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:44:42.858705 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 6 23:44:42.924171 kernel: raid6: neonx8 gen() 15843 MB/s Jul 6 23:44:42.949165 kernel: raid6: neonx4 gen() 15887 MB/s Jul 6 23:44:42.975169 kernel: raid6: neonx2 gen() 13274 MB/s Jul 6 23:44:43.000170 kernel: raid6: neonx1 gen() 10582 MB/s Jul 6 23:44:43.025173 kernel: raid6: int64x8 gen() 6814 MB/s Jul 6 23:44:43.050170 kernel: raid6: int64x4 gen() 7387 MB/s Jul 6 23:44:43.076169 kernel: raid6: int64x2 gen() 6133 MB/s Jul 6 23:44:43.104138 kernel: raid6: int64x1 gen() 5077 MB/s Jul 6 23:44:43.104172 kernel: raid6: using algorithm neonx4 gen() 15887 MB/s Jul 6 23:44:43.138580 kernel: raid6: .... xor() 12495 MB/s, rmw enabled Jul 6 23:44:43.138605 kernel: raid6: using neon recovery algorithm Jul 6 23:44:43.161671 kernel: xor: measuring software checksum speed Jul 6 23:44:43.161692 kernel: 8regs : 21624 MB/sec Jul 6 23:44:43.166169 kernel: 32regs : 21282 MB/sec Jul 6 23:44:43.177337 kernel: arm64_neon : 28070 MB/sec Jul 6 23:44:43.184965 kernel: xor: using function: arm64_neon (28070 MB/sec) Jul 6 23:44:43.245165 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:44:43.256215 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:44:43.277302 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:44:43.291427 systemd-udevd[1154]: Using default interface naming scheme 'v255'. Jul 6 23:44:43.294985 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:44:43.313310 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:44:43.327312 dracut-pre-trigger[1167]: rd.md=0: removing MD RAID activation Jul 6 23:44:43.353482 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:44:43.374308 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:44:43.479672 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:44:43.510092 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 6 23:44:43.510131 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 6 23:44:43.510291 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:44:43.658778 kernel: ACPI: bus type USB registered Jul 6 23:44:43.658802 kernel: usbcore: registered new interface driver usbfs Jul 6 23:44:43.658821 kernel: usbcore: registered new interface driver hub Jul 6 23:44:43.658840 kernel: usbcore: registered new device driver usb Jul 6 23:44:43.658858 kernel: PTP clock support registered Jul 6 23:44:43.658878 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 31 Jul 6 23:44:43.659090 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Jul 6 23:44:43.659186 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 Jul 6 23:44:43.659271 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault Jul 6 23:44:43.659351 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jul 6 23:44:43.659361 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 32 Jul 6 23:44:43.659449 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jul 6 23:44:43.659459 kernel: igb 0003:03:00.0: Adding to iommu group 33 Jul 6 23:44:43.570653 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jul 6 23:44:43.703190 kernel: nvme 0005:03:00.0: Adding to iommu group 34 Jul 6 23:44:43.703310 kernel: nvme 0005:04:00.0: Adding to iommu group 35 Jul 6 23:44:43.700631 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:44:43.709094 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:44:43.725578 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:44:43.744383 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:44:43.757280 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:44:43.757346 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:44:43.775421 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:44:43.786524 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:44:43.872285 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010 Jul 6 23:44:43.872481 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Jul 6 23:44:43.872566 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 Jul 6 23:44:43.872645 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed Jul 6 23:44:43.872722 kernel: hub 1-0:1.0: USB hub found Jul 6 23:44:43.872822 kernel: hub 1-0:1.0: 4 ports detected Jul 6 23:44:43.786571 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:44:43.943609 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 Jul 6 23:44:43.943725 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 6 23:44:43.943805 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 6 23:44:43.803931 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:44:43.948301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:44:43.955127 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:44:43.973802 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:44:44.007646 kernel: hub 2-0:1.0: USB hub found Jul 6 23:44:44.007869 kernel: hub 2-0:1.0: 4 ports detected Jul 6 23:44:44.007978 kernel: nvme nvme0: pci function 0005:03:00.0 Jul 6 23:44:44.008067 kernel: nvme nvme1: pci function 0005:04:00.0 Jul 6 23:44:44.039170 kernel: nvme nvme0: Shutdown timeout set to 8 seconds Jul 6 23:44:44.039291 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:44:44.085516 kernel: nvme nvme1: Shutdown timeout set to 8 seconds Jul 6 23:44:44.085693 kernel: nvme nvme0: 32/0/0 default/read/poll queues Jul 6 23:44:44.085785 kernel: igb 0003:03:00.0: added PHC on eth0 Jul 6 23:44:44.085882 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 6 23:44:44.097858 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:44:44.358303 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0f:f6:a8 Jul 6 23:44:44.358503 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 Jul 6 23:44:44.358599 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 
8 rx queue(s), 8 tx queue(s) Jul 6 23:44:44.358674 kernel: igb 0003:03:00.1: Adding to iommu group 36 Jul 6 23:44:44.358754 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 6 23:44:44.358764 kernel: GPT:9289727 != 1875385007 Jul 6 23:44:44.358773 kernel: nvme nvme1: 32/0/0 default/read/poll queues Jul 6 23:44:44.358855 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 6 23:44:44.358865 kernel: GPT:9289727 != 1875385007 Jul 6 23:44:44.358875 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:44:44.358884 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 6 23:44:44.358893 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by (udev-worker) (1210) Jul 6 23:44:44.358903 kernel: igb 0003:03:00.1: added PHC on eth1 Jul 6 23:44:44.358981 kernel: BTRFS: device fsid 990dd864-0c88-4d4d-9797-49057844458a devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (1258) Jul 6 23:44:44.358991 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection Jul 6 23:44:44.359067 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0f:f6:a9 Jul 6 23:44:44.359143 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 Jul 6 23:44:44.359229 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Jul 6 23:44:44.359305 kernel: igb 0003:03:00.0 eno1: renamed from eth0 Jul 6 23:44:44.359381 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged Jul 6 23:44:44.359466 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd Jul 6 23:44:44.359486 kernel: igb 0003:03:00.1 eno2: renamed from eth1 Jul 6 23:44:44.386598 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. Jul 6 23:44:44.404467 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. Jul 6 23:44:44.416865 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Jul 6 23:44:44.430437 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Jul 6 23:44:44.439767 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Jul 6 23:44:44.466302 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:44:44.496612 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 6 23:44:44.496628 kernel: hub 1-3:1.0: USB hub found Jul 6 23:44:44.496833 disk-uuid[1335]: Primary Header is updated. Jul 6 23:44:44.496833 disk-uuid[1335]: Secondary Entries is updated. Jul 6 23:44:44.496833 disk-uuid[1335]: Secondary Header is updated. 
Jul 6 23:44:44.528228 kernel: hub 1-3:1.0: 4 ports detected Jul 6 23:44:44.602174 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd Jul 6 23:44:44.637088 kernel: hub 2-3:1.0: USB hub found Jul 6 23:44:44.637323 kernel: hub 2-3:1.0: 4 ports detected Jul 6 23:44:44.663171 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 6 23:44:44.676165 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 Jul 6 23:44:44.698720 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014 Jul 6 23:44:44.698805 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 6 23:44:45.043896 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged Jul 6 23:44:45.353172 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 6 23:44:45.368166 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 Jul 6 23:44:45.396172 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 Jul 6 23:44:45.493179 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 6 23:44:45.493231 disk-uuid[1336]: The operation has completed successfully. Jul 6 23:44:45.520105 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:44:45.520195 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:44:45.567266 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:44:45.577616 sh[1491]: Success Jul 6 23:44:45.596168 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 6 23:44:45.630163 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:44:45.649319 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:44:45.659469 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 6 23:44:45.666164 kernel: BTRFS info (device dm-0): first mount of filesystem 990dd864-0c88-4d4d-9797-49057844458a Jul 6 23:44:45.666181 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:44:45.666192 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 6 23:44:45.666202 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 6 23:44:45.666211 kernel: BTRFS info (device dm-0): using free space tree Jul 6 23:44:45.670164 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 6 23:44:45.754861 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:44:45.762659 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:44:45.773361 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:44:45.782526 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 6 23:44:45.897762 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b Jul 6 23:44:45.897782 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:44:45.897793 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 6 23:44:45.897803 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 6 23:44:45.897813 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard Jul 6 23:44:45.897822 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b Jul 6 23:44:45.888376 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:44:45.903565 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:44:45.933316 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:44:45.945750 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:44:45.975413 systemd-networkd[1681]: lo: Link UP Jul 6 23:44:45.975419 systemd-networkd[1681]: lo: Gained carrier Jul 6 23:44:45.979423 systemd-networkd[1681]: Enumeration completed Jul 6 23:44:45.979772 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:44:45.980711 systemd-networkd[1681]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:44:45.986630 systemd[1]: Reached target network.target - Network. Jul 6 23:44:46.028547 ignition[1677]: Ignition 2.20.0 Jul 6 23:44:46.028554 ignition[1677]: Stage: fetch-offline Jul 6 23:44:46.032597 systemd-networkd[1681]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:44:46.028594 ignition[1677]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:44:46.038765 unknown[1677]: fetched base config from "system" Jul 6 23:44:46.028603 ignition[1677]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 6 23:44:46.038771 unknown[1677]: fetched user config from "system" Jul 6 23:44:46.028760 ignition[1677]: parsed url from cmdline: "" Jul 6 23:44:46.041239 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:44:46.028763 ignition[1677]: no config URL provided Jul 6 23:44:46.055073 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 6 23:44:46.028767 ignition[1677]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:44:46.072366 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:44:46.028819 ignition[1677]: parsing config with SHA512: 73d8283e9676edaed38955fbead2123765079fd37ae72abbd11f7cb92838578ad827dc196fc21487b700057dba4a55717677d5f48e6d8ae05ba1ae1942a436f0 Jul 6 23:44:46.084210 systemd-networkd[1681]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 6 23:44:46.039270 ignition[1677]: fetch-offline: fetch-offline passed Jul 6 23:44:46.039275 ignition[1677]: POST message to Packet Timeline Jul 6 23:44:46.039280 ignition[1677]: POST Status error: resource requires networking Jul 6 23:44:46.039345 ignition[1677]: Ignition finished successfully Jul 6 23:44:46.087440 ignition[1707]: Ignition 2.20.0 Jul 6 23:44:46.087446 ignition[1707]: Stage: kargs Jul 6 23:44:46.087587 ignition[1707]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:44:46.087595 ignition[1707]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 6 23:44:46.089101 ignition[1707]: kargs: kargs passed Jul 6 23:44:46.089120 ignition[1707]: POST message to Packet Timeline Jul 6 23:44:46.089369 ignition[1707]: GET https://metadata.packet.net/metadata: attempt #1 Jul 6 23:44:46.092492 ignition[1707]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34248->[::1]:53: read: connection refused Jul 6 23:44:46.293575 ignition[1707]: GET https://metadata.packet.net/metadata: attempt #2 Jul 6 23:44:46.294459 ignition[1707]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58381->[::1]:53: read: connection refused Jul 6 23:44:46.652169 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Jul 6 23:44:46.655355 systemd-networkd[1681]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:44:46.694861 ignition[1707]: GET https://metadata.packet.net/metadata: attempt #3 Jul 6 23:44:46.695379 ignition[1707]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44859->[::1]:53: read: connection refused Jul 6 23:44:47.271175 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Jul 6 23:44:47.274039 systemd-networkd[1681]: eno1: Link UP Jul 6 23:44:47.274176 systemd-networkd[1681]: eno2: Link UP Jul 6 23:44:47.274303 systemd-networkd[1681]: enP1p1s0f0np0: Link UP Jul 6 23:44:47.274440 systemd-networkd[1681]: enP1p1s0f0np0: Gained carrier Jul 6 23:44:47.285302 systemd-networkd[1681]: enP1p1s0f1np1: Link UP Jul 6 23:44:47.318188 systemd-networkd[1681]: enP1p1s0f0np0: DHCPv4 address 147.28.150.251/31, gateway 147.28.150.250 acquired from 147.28.144.140 Jul 6 23:44:47.495679 ignition[1707]: GET https://metadata.packet.net/metadata: attempt #4 Jul 6 23:44:47.496081 ignition[1707]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33281->[::1]:53: read: connection refused Jul 6 23:44:47.657360 systemd-networkd[1681]: enP1p1s0f1np1: Gained carrier Jul 6 23:44:48.657419 systemd-networkd[1681]: enP1p1s0f0np0: Gained IPv6LL Jul 6 23:44:49.097038 ignition[1707]: GET https://metadata.packet.net/metadata: attempt #5 Jul 6 23:44:49.097480 ignition[1707]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50217->[::1]:53: read: connection refused Jul 6 23:44:49.233274 systemd-networkd[1681]: enP1p1s0f1np1: Gained IPv6LL Jul 6 23:44:52.300078 ignition[1707]: GET https://metadata.packet.net/metadata: attempt #6 Jul 6 23:44:52.828154 ignition[1707]: GET result: OK Jul 6 23:44:53.099741 ignition[1707]: Ignition finished successfully Jul 6 23:44:53.102200 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:44:53.117299 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 6 23:44:53.128856 ignition[1724]: Ignition 2.20.0 Jul 6 23:44:53.128863 ignition[1724]: Stage: disks Jul 6 23:44:53.129106 ignition[1724]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:44:53.129115 ignition[1724]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 6 23:44:53.130673 ignition[1724]: disks: disks passed Jul 6 23:44:53.130677 ignition[1724]: POST message to Packet Timeline Jul 6 23:44:53.130694 ignition[1724]: GET https://metadata.packet.net/metadata: attempt #1 Jul 6 23:44:53.663319 ignition[1724]: GET result: OK Jul 6 23:44:53.964494 ignition[1724]: Ignition finished successfully Jul 6 23:44:53.966461 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:44:53.973011 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:44:53.980863 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:44:53.988864 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:44:53.997373 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:44:54.006399 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:44:54.025296 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:44:54.041103 systemd-fsck[1742]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 6 23:44:54.044314 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:44:54.061242 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:44:54.127174 kernel: EXT4-fs (nvme0n1p9): mounted filesystem efd38a90-a3d5-48a9-85e4-1ea6162daba0 r/w with ordered data mode. Quota mode: none. Jul 6 23:44:54.127941 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:44:54.133743 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:44:54.162234 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:44:54.255405 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1754) Jul 6 23:44:54.255438 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b Jul 6 23:44:54.255458 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:44:54.255477 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 6 23:44:54.255502 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 6 23:44:54.255522 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard Jul 6 23:44:54.168570 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:44:54.261868 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 6 23:44:54.272685 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jul 6 23:44:54.288726 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:44:54.288767 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:44:54.321571 coreos-metadata[1775]: Jul 06 23:44:54.319 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 6 23:44:54.338492 coreos-metadata[1773]: Jul 06 23:44:54.319 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 6 23:44:54.302439 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
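The BTRFS lines for /dev/nvme0n1p6 above describe how the OEM partition ends up mounted: crc32c checksums, the free-space tree (the space_cache=v2 format), SSD heuristics, and asynchronous discard. Purely as an illustration of those options, a hand-typed mount with the same effect would look roughly like:

    mount -t btrfs -o ssd,discard=async,space_cache=v2 /dev/nvme0n1p6 /sysroot/oem

In the actual boot the options are selected automatically by the kernel and the generated sysroot-oem.mount unit rather than passed by hand.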
Jul 6 23:44:54.316109 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:44:54.336411 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:44:54.372152 initrd-setup-root[1798]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:44:54.378425 initrd-setup-root[1806]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:44:54.384659 initrd-setup-root[1813]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:44:54.391136 initrd-setup-root[1820]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:44:54.460259 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:44:54.487245 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:44:54.518409 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b Jul 6 23:44:54.493773 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:44:54.525399 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:44:54.541500 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:44:54.546816 ignition[1897]: INFO : Ignition 2.20.0 Jul 6 23:44:54.546816 ignition[1897]: INFO : Stage: mount Jul 6 23:44:54.546816 ignition[1897]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:44:54.546816 ignition[1897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 6 23:44:54.577503 ignition[1897]: INFO : mount: mount passed Jul 6 23:44:54.577503 ignition[1897]: INFO : POST message to Packet Timeline Jul 6 23:44:54.577503 ignition[1897]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 6 23:44:54.837269 coreos-metadata[1773]: Jul 06 23:44:54.837 INFO Fetch successful Jul 6 23:44:54.883736 coreos-metadata[1773]: Jul 06 23:44:54.883 INFO wrote hostname ci-4230.2.1-a-784d2181dd to /sysroot/etc/hostname Jul 6 23:44:54.893166 coreos-metadata[1775]: Jul 06 23:44:54.891 INFO Fetch successful Jul 6 23:44:54.888249 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:44:54.940464 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jul 6 23:44:54.942202 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jul 6 23:44:55.119397 ignition[1897]: INFO : GET result: OK Jul 6 23:44:55.416868 ignition[1897]: INFO : Ignition finished successfully Jul 6 23:44:55.418961 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:44:55.440237 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:44:55.450657 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:44:55.486389 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1924) Jul 6 23:44:55.486427 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b Jul 6 23:44:55.500783 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:44:55.513821 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 6 23:44:55.536828 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 6 23:44:55.536849 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard Jul 6 23:44:55.544914 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 6 23:44:55.573179 ignition[1941]: INFO : Ignition 2.20.0 Jul 6 23:44:55.573179 ignition[1941]: INFO : Stage: files Jul 6 23:44:55.582944 ignition[1941]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:44:55.582944 ignition[1941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 6 23:44:55.582944 ignition[1941]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:44:55.582944 ignition[1941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:44:55.582944 ignition[1941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:44:55.582944 ignition[1941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:44:55.582944 ignition[1941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:44:55.582944 ignition[1941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:44:55.582944 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 6 23:44:55.582944 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 6 23:44:55.578719 unknown[1941]: wrote ssh authorized keys file for user: core Jul 6 23:44:55.677069 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:44:55.739200 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 6 23:44:55.749794 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:44:55.749794 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:44:55.749794 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:44:55.749794 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:44:55.749794 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:44:55.749794 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:44:55.749794 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:44:55.749794 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:44:55.749794 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:44:55.749794 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:44:55.749794 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 6 23:44:55.749794 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 6 23:44:55.749794 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 6 23:44:55.749794 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 6 23:44:56.002574 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 6 23:44:56.350742 ignition[1941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 6 23:44:56.350742 ignition[1941]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 6 23:44:56.375607 ignition[1941]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:44:56.375607 ignition[1941]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:44:56.375607 ignition[1941]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 6 23:44:56.375607 ignition[1941]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:44:56.375607 ignition[1941]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:44:56.375607 ignition[1941]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:44:56.375607 ignition[1941]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:44:56.375607 ignition[1941]: INFO : files: files passed Jul 6 23:44:56.375607 ignition[1941]: INFO : POST message to Packet Timeline Jul 6 23:44:56.375607 ignition[1941]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 6 23:44:56.949948 ignition[1941]: INFO : GET result: OK Jul 6 23:44:57.236464 ignition[1941]: INFO : Ignition finished successfully Jul 6 23:44:57.239892 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:44:57.259336 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:44:57.271875 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:44:57.290605 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:44:57.290747 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:44:57.309498 initrd-setup-root-after-ignition[1987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:44:57.309498 initrd-setup-root-after-ignition[1987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:44:57.303970 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:44:57.355787 initrd-setup-root-after-ignition[1991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:44:57.316999 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:44:57.345352 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:44:57.387722 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Jul 6 23:44:57.387880 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:44:57.397496 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:44:57.407858 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:44:57.424981 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:44:57.437280 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:44:57.458980 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:44:57.475311 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:44:57.498491 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:44:57.510294 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:44:57.516196 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:44:57.527795 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:44:57.527891 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:44:57.539535 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:44:57.550835 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:44:57.562376 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:44:57.573809 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:44:57.585172 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:44:57.596495 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:44:57.607815 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:44:57.619205 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:44:57.630600 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:44:57.647454 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:44:57.658903 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:44:57.659016 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:44:57.670552 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:44:57.681815 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:44:57.693004 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:44:57.696230 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:44:57.704436 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:44:57.704540 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:44:57.715837 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:44:57.715930 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:44:57.727297 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:44:57.738616 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:44:57.738759 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:44:57.755900 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:44:57.767488 systemd[1]: Stopped target sockets.target - Socket Units. 
Jul 6 23:44:57.779055 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:44:57.779164 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:44:57.879954 ignition[2015]: INFO : Ignition 2.20.0 Jul 6 23:44:57.879954 ignition[2015]: INFO : Stage: umount Jul 6 23:44:57.879954 ignition[2015]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:44:57.879954 ignition[2015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 6 23:44:57.879954 ignition[2015]: INFO : umount: umount passed Jul 6 23:44:57.879954 ignition[2015]: INFO : POST message to Packet Timeline Jul 6 23:44:57.879954 ignition[2015]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 6 23:44:57.790719 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:44:57.790778 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:44:57.802499 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:44:57.802607 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:44:57.814224 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:44:57.814322 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:44:57.825936 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 6 23:44:57.826029 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:44:57.850268 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:44:57.861638 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:44:57.861738 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:44:57.886278 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:44:57.898079 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:44:57.898181 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:44:57.909582 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:44:57.909665 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:44:57.929121 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:44:57.931840 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:44:57.931920 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:44:57.949905 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:44:57.950132 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:44:58.547003 ignition[2015]: INFO : GET result: OK Jul 6 23:44:59.316115 ignition[2015]: INFO : Ignition finished successfully Jul 6 23:44:59.319136 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:44:59.319410 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:44:59.326991 systemd[1]: Stopped target network.target - Network. Jul 6 23:44:59.336331 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:44:59.336391 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:44:59.346242 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:44:59.346305 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:44:59.355956 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:44:59.356029 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jul 6 23:44:59.365703 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:44:59.365734 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:44:59.375528 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:44:59.375573 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:44:59.385580 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:44:59.395352 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:44:59.405275 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:44:59.405380 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:44:59.419540 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:44:59.420405 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:44:59.420636 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:44:59.433024 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:44:59.433275 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:44:59.435190 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:44:59.441738 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:44:59.442593 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:44:59.442783 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:44:59.464233 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:44:59.470896 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:44:59.470945 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:44:59.481230 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:44:59.481268 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:44:59.496473 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:44:59.496527 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:44:59.506911 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:44:59.523915 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:44:59.524318 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:44:59.524445 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:44:59.552250 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:44:59.552450 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:44:59.566637 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:44:59.566714 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:44:59.577481 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:44:59.577521 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:44:59.588892 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:44:59.588950 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:44:59.605299 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jul 6 23:44:59.605335 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:44:59.634288 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:44:59.645196 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:44:59.645246 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:44:59.656789 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:44:59.656851 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:44:59.675448 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 6 23:44:59.675507 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:44:59.675872 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:44:59.675947 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:45:00.198904 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:45:00.199082 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:45:00.210622 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:45:00.230315 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:45:00.240093 systemd[1]: Switching root. Jul 6 23:45:00.299872 systemd-journald[902]: Journal stopped Jul 6 23:45:02.375354 systemd-journald[902]: Received SIGTERM from PID 1 (systemd). Jul 6 23:45:02.375383 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:45:02.375393 kernel: SELinux: policy capability open_perms=1 Jul 6 23:45:02.375401 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:45:02.375409 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:45:02.375416 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:45:02.375425 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:45:02.375434 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:45:02.375442 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:45:02.375449 kernel: audit: type=1403 audit(1751845500.475:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:45:02.375458 systemd[1]: Successfully loaded SELinux policy in 116.292ms. Jul 6 23:45:02.375467 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.965ms. Jul 6 23:45:02.375477 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:45:02.375488 systemd[1]: Detected architecture arm64. Jul 6 23:45:02.375499 systemd[1]: Detected first boot. Jul 6 23:45:02.375507 systemd[1]: Hostname set to . Jul 6 23:45:02.375516 systemd[1]: Initializing machine ID from random generator. Jul 6 23:45:02.375525 zram_generator::config[2091]: No configuration found. Jul 6 23:45:02.375536 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:45:02.375546 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 6 23:45:02.375554 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Jul 6 23:45:02.375563 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:45:02.375571 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:45:02.375580 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:45:02.375589 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:45:02.375600 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:45:02.375609 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:45:02.375618 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:45:02.375627 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:45:02.375635 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:45:02.375644 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:45:02.375653 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:45:02.375662 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:45:02.375672 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:45:02.375681 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:45:02.375690 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:45:02.375699 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:45:02.375708 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 6 23:45:02.375717 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:45:02.375726 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:45:02.375737 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:45:02.375746 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:45:02.375756 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:45:02.375765 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:45:02.375775 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:45:02.375783 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:45:02.375793 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:45:02.375801 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:45:02.375811 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:45:02.375821 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 6 23:45:02.375830 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:45:02.375840 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:45:02.375849 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:45:02.375858 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:45:02.375869 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:45:02.375879 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jul 6 23:45:02.375888 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:45:02.375897 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:45:02.375907 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:45:02.375916 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:45:02.375925 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:45:02.375935 systemd[1]: Reached target machines.target - Containers. Jul 6 23:45:02.375945 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:45:02.375955 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:45:02.375964 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:45:02.375973 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:45:02.375982 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:45:02.375991 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:45:02.376001 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:45:02.376009 kernel: ACPI: bus type drm_connector registered Jul 6 23:45:02.376018 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:45:02.376028 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:45:02.376037 kernel: fuse: init (API version 7.39) Jul 6 23:45:02.376045 kernel: loop: module loaded Jul 6 23:45:02.376054 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:45:02.376063 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:45:02.376072 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:45:02.376082 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:45:02.376091 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:45:02.376102 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:45:02.376111 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:45:02.376121 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:45:02.376149 systemd-journald[2207]: Collecting audit messages is disabled. Jul 6 23:45:02.376175 systemd-journald[2207]: Journal started Jul 6 23:45:02.376194 systemd-journald[2207]: Runtime Journal (/run/log/journal/9256f3c30ee440cd8cdb6f8477cbe5eb) is 8M, max 4G, 3.9G free. Jul 6 23:45:01.028617 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:45:01.043569 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 6 23:45:01.043897 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:45:01.044223 systemd[1]: systemd-journald.service: Consumed 3.504s CPU time. Jul 6 23:45:02.400222 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
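The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop jobs above are instances of a single systemd template unit, modprobe@.service, which simply runs modprobe on the instance name. Sketched from memory (the unit actually shipped with systemd may differ in detail):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %I

The leading "-" on ExecStart makes a missing module non-fatal, which is why optional modules can be requested unconditionally.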
Jul 6 23:45:02.428173 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:45:02.455174 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 6 23:45:02.477174 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:45:02.500148 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:45:02.500201 systemd[1]: Stopped verity-setup.service. Jul 6 23:45:02.526182 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:45:02.531533 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:45:02.537111 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:45:02.542652 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:45:02.548136 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:45:02.553659 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:45:02.559081 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:45:02.564662 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:45:02.570312 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:45:02.575967 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:45:02.576158 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:45:02.581763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:45:02.581942 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:45:02.587394 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:45:02.587573 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:45:02.593042 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:45:02.593237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:45:02.598601 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:45:02.598778 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:45:02.604323 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:45:02.604505 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:45:02.609822 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:45:02.614947 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:45:02.620380 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:45:02.625609 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 6 23:45:02.630777 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:45:02.647654 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:45:02.668410 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:45:02.674550 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:45:02.679400 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:45:02.679428 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jul 6 23:45:02.684936 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 6 23:45:02.690739 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:45:02.696620 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:45:02.701456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:45:02.702647 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:45:02.708412 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:45:02.713190 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:45:02.714966 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:45:02.719354 systemd-journald[2207]: Time spent on flushing to /var/log/journal/9256f3c30ee440cd8cdb6f8477cbe5eb is 23.449ms for 2354 entries. Jul 6 23:45:02.719354 systemd-journald[2207]: System Journal (/var/log/journal/9256f3c30ee440cd8cdb6f8477cbe5eb) is 8M, max 195.6M, 187.6M free. Jul 6 23:45:02.747915 systemd-journald[2207]: Received client request to flush runtime journal. Jul 6 23:45:02.733131 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:45:02.746556 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:45:02.752420 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:45:02.758309 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:45:02.764141 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:45:02.766173 kernel: loop0: detected capacity change from 0 to 113512 Jul 6 23:45:02.781216 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:45:02.790178 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:45:02.794661 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:45:02.799337 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:45:02.806088 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:45:02.811624 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:45:02.817076 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:45:02.821976 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:45:02.833543 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:45:02.850172 kernel: loop1: detected capacity change from 0 to 123192 Jul 6 23:45:02.860529 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 6 23:45:02.866688 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:45:02.872331 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:45:02.873213 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 6 23:45:02.879842 udevadm[2259]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 6 23:45:02.888421 systemd-tmpfiles[2279]: ACLs are not supported, ignoring. Jul 6 23:45:02.888432 systemd-tmpfiles[2279]: ACLs are not supported, ignoring. Jul 6 23:45:02.892450 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:45:02.894212 kernel: loop2: detected capacity change from 0 to 203944 Jul 6 23:45:02.956174 kernel: loop3: detected capacity change from 0 to 8 Jul 6 23:45:02.979806 ldconfig[2244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:45:02.983203 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:45:03.009175 kernel: loop4: detected capacity change from 0 to 113512 Jul 6 23:45:03.025173 kernel: loop5: detected capacity change from 0 to 123192 Jul 6 23:45:03.041173 kernel: loop6: detected capacity change from 0 to 203944 Jul 6 23:45:03.058173 kernel: loop7: detected capacity change from 0 to 8 Jul 6 23:45:03.058672 (sd-merge)[2293]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jul 6 23:45:03.059122 (sd-merge)[2293]: Merged extensions into '/usr'. Jul 6 23:45:03.062112 systemd[1]: Reload requested from client PID 2252 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:45:03.062125 systemd[1]: Reloading... Jul 6 23:45:03.110167 zram_generator::config[2327]: No configuration found. Jul 6 23:45:03.202948 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:45:03.263625 systemd[1]: Reloading finished in 201 ms. Jul 6 23:45:03.279571 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:45:03.284334 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:45:03.301529 systemd[1]: Starting ensure-sysext.service... Jul 6 23:45:03.307337 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:45:03.314063 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:45:03.325067 systemd[1]: Reload requested from client PID 2377 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:45:03.325078 systemd[1]: Reloading... Jul 6 23:45:03.327726 systemd-tmpfiles[2378]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:45:03.327925 systemd-tmpfiles[2378]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:45:03.328550 systemd-tmpfiles[2378]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:45:03.328751 systemd-tmpfiles[2378]: ACLs are not supported, ignoring. Jul 6 23:45:03.328797 systemd-tmpfiles[2378]: ACLs are not supported, ignoring. Jul 6 23:45:03.331503 systemd-tmpfiles[2378]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:45:03.331510 systemd-tmpfiles[2378]: Skipping /boot Jul 6 23:45:03.340086 systemd-tmpfiles[2378]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:45:03.340094 systemd-tmpfiles[2378]: Skipping /boot Jul 6 23:45:03.342154 systemd-udevd[2379]: Using default interface naming scheme 'v255'. 
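The "(sd-merge)" lines above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-packet extension images onto /usr (the preceding loop-device messages correspond to those images being attached). For an image to be merged it must carry an extension-release file whose fields match the host; an illustrative sketch of what that file inside kubernetes.raw could look like (field values are examples, not read from this system):

    # usr/lib/extension-release.d/extension-release.kubernetes
    ID=flatcar
    SYSEXT_LEVEL=1.0

On the running host, systemd-sysext status lists which images are currently merged.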
Jul 6 23:45:03.370166 zram_generator::config[2412]: No configuration found. Jul 6 23:45:03.403186 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (2420) Jul 6 23:45:03.434173 kernel: IPMI message handler: version 39.2 Jul 6 23:45:03.444168 kernel: ipmi device interface Jul 6 23:45:03.457177 kernel: ipmi_si: IPMI System Interface driver Jul 6 23:45:03.457296 kernel: ipmi_si: Unable to find any System Interface(s) Jul 6 23:45:03.468167 kernel: ipmi_ssif: IPMI SSIF Interface driver Jul 6 23:45:03.479968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:45:03.560633 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Jul 6 23:45:03.565186 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 6 23:45:03.565493 systemd[1]: Reloading finished in 240 ms. Jul 6 23:45:03.576651 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:45:03.596058 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:45:03.618911 systemd[1]: Finished ensure-sysext.service. Jul 6 23:45:03.623680 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:45:03.662361 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:45:03.668402 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:45:03.673529 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:45:03.674630 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:45:03.680543 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:45:03.686492 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:45:03.692206 lvm[2654]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:45:03.692229 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:45:03.698699 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:45:03.703643 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:45:03.704546 augenrules[2675]: No rules Jul 6 23:45:03.704584 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:45:03.709366 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:45:03.710530 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:45:03.716951 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:45:03.723545 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:45:03.729766 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 6 23:45:03.735496 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jul 6 23:45:03.741175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:45:03.746722 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:45:03.746933 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:45:03.752643 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:45:03.758960 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:45:03.763919 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:45:03.764087 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:45:03.769026 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:45:03.769181 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:45:03.774253 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:45:03.774430 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:45:03.779488 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:45:03.780254 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:45:03.785813 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:45:03.790724 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:45:03.795686 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:45:03.808801 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:45:03.824365 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:45:03.828258 lvm[2707]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:45:03.828980 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:45:03.829047 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:45:03.830271 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:45:03.836885 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:45:03.841671 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:45:03.842125 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:45:03.847082 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:45:03.863587 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:45:03.872137 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:45:03.925457 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:45:03.930612 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:45:03.933981 systemd-resolved[2684]: Positive Trust Anchors: Jul 6 23:45:03.933994 systemd-resolved[2684]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:45:03.934025 systemd-resolved[2684]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:45:03.937718 systemd-resolved[2684]: Using system hostname 'ci-4230.2.1-a-784d2181dd'. Jul 6 23:45:03.939601 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:45:03.940356 systemd-networkd[2683]: lo: Link UP Jul 6 23:45:03.940362 systemd-networkd[2683]: lo: Gained carrier Jul 6 23:45:03.944066 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:45:03.944192 systemd-networkd[2683]: bond0: netdev ready Jul 6 23:45:03.948429 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:45:03.952733 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:45:03.953332 systemd-networkd[2683]: Enumeration completed Jul 6 23:45:03.957012 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:45:03.961493 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:45:03.964249 systemd-networkd[2683]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:5a:08:48.network. Jul 6 23:45:03.965852 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:45:03.970221 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:45:03.974689 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:45:03.974707 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:45:03.979083 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:45:03.984185 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:45:03.989973 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:45:03.996196 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:45:04.003080 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:45:04.008049 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 6 23:45:04.013057 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:45:04.017741 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:45:04.022303 systemd[1]: Reached target network.target - Network. Jul 6 23:45:04.026753 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:45:04.031101 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:45:04.035390 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:45:04.035410 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
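In the real root the initrd's catch-all networking is replaced by units generated from the machine's metadata: bond0 is declared as a netdev, and each physical port is matched by its MAC address and enslaved to the bond. That is why enP1p1s0f0np0 is now configured from /etc/systemd/network/10-0c:42:a1:5a:08:48.network rather than zz-default.network (the 10- prefix sorts first, so it wins). An illustrative sketch of such a per-port unit, noting that the generated file very likely carries additional settings:

    [Match]
    MACAddress=0c:42:a1:5a:08:48

    [Network]
    Bond=bond0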
Jul 6 23:45:04.044226 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:45:04.049829 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:45:04.055536 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:45:04.061155 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:45:04.066812 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:45:04.071189 jq[2742]: false Jul 6 23:45:04.071363 coreos-metadata[2738]: Jul 06 23:45:04.071 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 6 23:45:04.071313 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:45:04.072416 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:45:04.073276 coreos-metadata[2738]: Jul 06 23:45:04.073 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 6 23:45:04.076664 dbus-daemon[2739]: [system] SELinux support is enabled Jul 6 23:45:04.078004 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:45:04.083730 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:45:04.086880 extend-filesystems[2743]: Found loop4 Jul 6 23:45:04.093105 extend-filesystems[2743]: Found loop5 Jul 6 23:45:04.093105 extend-filesystems[2743]: Found loop6 Jul 6 23:45:04.093105 extend-filesystems[2743]: Found loop7 Jul 6 23:45:04.093105 extend-filesystems[2743]: Found nvme0n1 Jul 6 23:45:04.093105 extend-filesystems[2743]: Found nvme0n1p1 Jul 6 23:45:04.093105 extend-filesystems[2743]: Found nvme0n1p2 Jul 6 23:45:04.093105 extend-filesystems[2743]: Found nvme0n1p3 Jul 6 23:45:04.093105 extend-filesystems[2743]: Found usr Jul 6 23:45:04.093105 extend-filesystems[2743]: Found nvme0n1p4 Jul 6 23:45:04.093105 extend-filesystems[2743]: Found nvme0n1p6 Jul 6 23:45:04.093105 extend-filesystems[2743]: Found nvme0n1p7 Jul 6 23:45:04.093105 extend-filesystems[2743]: Found nvme0n1p9 Jul 6 23:45:04.093105 extend-filesystems[2743]: Checking size of /dev/nvme0n1p9 Jul 6 23:45:04.224981 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks Jul 6 23:45:04.225011 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (2587) Jul 6 23:45:04.089562 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:45:04.225094 extend-filesystems[2743]: Resized partition /dev/nvme0n1p9 Jul 6 23:45:04.101713 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:45:04.234238 extend-filesystems[2762]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:45:04.108017 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 6 23:45:04.157152 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:45:04.166668 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:45:04.167233 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
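The extend-filesystems/resize2fs step above grows the root ext4 filesystem on nvme0n1p9 to fill its partition: assuming the ext4 default 4 KiB block size, the resize reported by the kernel goes from 553,472 blocks (about 2.1 GiB) to 233,815,889 blocks (about 892 GiB). The online grow is equivalent in effect to running:

    resize2fs /dev/nvme0n1p9

resize2fs without an explicit size argument expands the filesystem to the size of the underlying device.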
Jul 6 23:45:04.239693 update_engine[2771]: I20250706 23:45:04.221951 2771 main.cc:92] Flatcar Update Engine starting Jul 6 23:45:04.239693 update_engine[2771]: I20250706 23:45:04.225240 2771 update_check_scheduler.cc:74] Next update check in 9m26s Jul 6 23:45:04.167919 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:45:04.239953 jq[2772]: true Jul 6 23:45:04.176078 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:45:04.184460 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:45:04.197033 systemd-logind[2761]: Watching system buttons on /dev/input/event0 (Power Button) Jul 6 23:45:04.197868 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:45:04.197905 systemd-logind[2761]: New seat seat0. Jul 6 23:45:04.198073 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:45:04.198347 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:45:04.198533 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:45:04.212533 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:45:04.221338 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:45:04.221538 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:45:04.240021 (ntainerd)[2777]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:45:04.241207 dbus-daemon[2739]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 6 23:45:04.242636 jq[2776]: true Jul 6 23:45:04.245079 tar[2774]: linux-arm64/helm Jul 6 23:45:04.258641 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:45:04.264472 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:45:04.264631 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:45:04.269218 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:45:04.269319 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:45:04.274286 bash[2802]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:45:04.285378 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:45:04.293557 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:45:04.300778 systemd[1]: Starting sshkeys.service... Jul 6 23:45:04.313471 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 6 23:45:04.313862 locksmithd[2803]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:45:04.319615 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
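Note: the update engine reports its next check as a Go-style duration ("9m26s"). Purely as an illustration, the absolute check time follows from the logged timestamp; the duration parser below is ad hoc, not part of update_engine:

```python
# Ad-hoc sketch: turn "Next update check in 9m26s" into an absolute time.
from datetime import datetime, timedelta
import re

logged_at = datetime(2025, 7, 6, 23, 45, 4)   # timestamp of the update_engine line
duration = "9m26s"                             # as printed by update_check_scheduler.cc

m = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?", duration)
hours, minutes, seconds = (int(g) if g else 0 for g in m.groups())
print(logged_at + timedelta(hours=hours, minutes=minutes, seconds=seconds))
# -> 2025-07-06 23:54:30
```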
Jul 6 23:45:04.339223 coreos-metadata[2815]: Jul 06 23:45:04.339 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 6 23:45:04.340241 coreos-metadata[2815]: Jul 06 23:45:04.340 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 6 23:45:04.374045 containerd[2777]: time="2025-07-06T23:45:04.373963840Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jul 6 23:45:04.395669 containerd[2777]: time="2025-07-06T23:45:04.395625160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:45:04.396980 containerd[2777]: time="2025-07-06T23:45:04.396949120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:45:04.397001 containerd[2777]: time="2025-07-06T23:45:04.396979640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:45:04.397001 containerd[2777]: time="2025-07-06T23:45:04.396996000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:45:04.397163 containerd[2777]: time="2025-07-06T23:45:04.397146880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:45:04.397185 containerd[2777]: time="2025-07-06T23:45:04.397172640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:45:04.397247 containerd[2777]: time="2025-07-06T23:45:04.397232720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:45:04.397267 containerd[2777]: time="2025-07-06T23:45:04.397247160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:45:04.397456 containerd[2777]: time="2025-07-06T23:45:04.397437480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:45:04.397477 containerd[2777]: time="2025-07-06T23:45:04.397454600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:45:04.397477 containerd[2777]: time="2025-07-06T23:45:04.397467320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:45:04.397512 containerd[2777]: time="2025-07-06T23:45:04.397477560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:45:04.397568 containerd[2777]: time="2025-07-06T23:45:04.397554560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:45:04.397754 containerd[2777]: time="2025-07-06T23:45:04.397738440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jul 6 23:45:04.397874 containerd[2777]: time="2025-07-06T23:45:04.397861520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:45:04.397892 containerd[2777]: time="2025-07-06T23:45:04.397875320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:45:04.397962 containerd[2777]: time="2025-07-06T23:45:04.397952240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:45:04.398002 containerd[2777]: time="2025-07-06T23:45:04.397992920Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:45:04.405206 containerd[2777]: time="2025-07-06T23:45:04.405182480Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:45:04.405245 containerd[2777]: time="2025-07-06T23:45:04.405221200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:45:04.405245 containerd[2777]: time="2025-07-06T23:45:04.405237280Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:45:04.405306 containerd[2777]: time="2025-07-06T23:45:04.405252120Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:45:04.405306 containerd[2777]: time="2025-07-06T23:45:04.405266160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:45:04.405415 containerd[2777]: time="2025-07-06T23:45:04.405396720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:45:04.405618 containerd[2777]: time="2025-07-06T23:45:04.405602480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:45:04.405717 containerd[2777]: time="2025-07-06T23:45:04.405703120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:45:04.405737 containerd[2777]: time="2025-07-06T23:45:04.405720560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:45:04.405754 containerd[2777]: time="2025-07-06T23:45:04.405735520Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:45:04.405754 containerd[2777]: time="2025-07-06T23:45:04.405749080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:45:04.405786 containerd[2777]: time="2025-07-06T23:45:04.405768120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:45:04.405786 containerd[2777]: time="2025-07-06T23:45:04.405782880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:45:04.405821 containerd[2777]: time="2025-07-06T23:45:04.405796760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jul 6 23:45:04.405821 containerd[2777]: time="2025-07-06T23:45:04.405811040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:45:04.405853 containerd[2777]: time="2025-07-06T23:45:04.405824360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:45:04.405853 containerd[2777]: time="2025-07-06T23:45:04.405837400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:45:04.405853 containerd[2777]: time="2025-07-06T23:45:04.405848960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:45:04.405901 containerd[2777]: time="2025-07-06T23:45:04.405868560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:45:04.405901 containerd[2777]: time="2025-07-06T23:45:04.405882720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:45:04.405901 containerd[2777]: time="2025-07-06T23:45:04.405894200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:45:04.405951 containerd[2777]: time="2025-07-06T23:45:04.405907280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:45:04.405951 containerd[2777]: time="2025-07-06T23:45:04.405919280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:45:04.405951 containerd[2777]: time="2025-07-06T23:45:04.405932480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:45:04.405951 containerd[2777]: time="2025-07-06T23:45:04.405943760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:45:04.406014 containerd[2777]: time="2025-07-06T23:45:04.405956680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:45:04.406014 containerd[2777]: time="2025-07-06T23:45:04.405969560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:45:04.406014 containerd[2777]: time="2025-07-06T23:45:04.405984160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:45:04.406014 containerd[2777]: time="2025-07-06T23:45:04.405995520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:45:04.406014 containerd[2777]: time="2025-07-06T23:45:04.406006520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:45:04.406093 containerd[2777]: time="2025-07-06T23:45:04.406019240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:45:04.406093 containerd[2777]: time="2025-07-06T23:45:04.406033600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:45:04.406093 containerd[2777]: time="2025-07-06T23:45:04.406056920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jul 6 23:45:04.406093 containerd[2777]: time="2025-07-06T23:45:04.406069560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:45:04.406093 containerd[2777]: time="2025-07-06T23:45:04.406079880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:45:04.406264 containerd[2777]: time="2025-07-06T23:45:04.406254080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:45:04.406284 containerd[2777]: time="2025-07-06T23:45:04.406271480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:45:04.406303 containerd[2777]: time="2025-07-06T23:45:04.406281400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:45:04.406303 containerd[2777]: time="2025-07-06T23:45:04.406294080Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:45:04.406338 containerd[2777]: time="2025-07-06T23:45:04.406303520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:45:04.406338 containerd[2777]: time="2025-07-06T23:45:04.406315560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:45:04.406338 containerd[2777]: time="2025-07-06T23:45:04.406327560Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:45:04.406388 containerd[2777]: time="2025-07-06T23:45:04.406340120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 6 23:45:04.406701 containerd[2777]: time="2025-07-06T23:45:04.406665000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:45:04.406793 containerd[2777]: time="2025-07-06T23:45:04.406717400Z" level=info msg="Connect containerd service" Jul 6 23:45:04.406793 containerd[2777]: time="2025-07-06T23:45:04.406745960Z" level=info msg="using legacy CRI server" Jul 6 23:45:04.406793 containerd[2777]: time="2025-07-06T23:45:04.406752560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:45:04.407019 containerd[2777]: time="2025-07-06T23:45:04.407007080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:45:04.407715 containerd[2777]: time="2025-07-06T23:45:04.407695680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:45:04.407832 
containerd[2777]: time="2025-07-06T23:45:04.407799680Z" level=info msg="Start subscribing containerd event" Jul 6 23:45:04.407857 containerd[2777]: time="2025-07-06T23:45:04.407848360Z" level=info msg="Start recovering state" Jul 6 23:45:04.407919 containerd[2777]: time="2025-07-06T23:45:04.407910080Z" level=info msg="Start event monitor" Jul 6 23:45:04.407938 containerd[2777]: time="2025-07-06T23:45:04.407921480Z" level=info msg="Start snapshots syncer" Jul 6 23:45:04.407938 containerd[2777]: time="2025-07-06T23:45:04.407930280Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:45:04.407977 containerd[2777]: time="2025-07-06T23:45:04.407937440Z" level=info msg="Start streaming server" Jul 6 23:45:04.408229 containerd[2777]: time="2025-07-06T23:45:04.408214760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:45:04.408265 containerd[2777]: time="2025-07-06T23:45:04.408256520Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:45:04.408310 containerd[2777]: time="2025-07-06T23:45:04.408301800Z" level=info msg="containerd successfully booted in 0.035652s" Jul 6 23:45:04.408360 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:45:04.540916 tar[2774]: linux-arm64/LICENSE Jul 6 23:45:04.540980 tar[2774]: linux-arm64/README.md Jul 6 23:45:04.561348 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:45:04.625177 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 Jul 6 23:45:04.640884 extend-filesystems[2762]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 6 23:45:04.640884 extend-filesystems[2762]: old_desc_blocks = 1, new_desc_blocks = 112 Jul 6 23:45:04.640884 extend-filesystems[2762]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. Jul 6 23:45:04.670772 extend-filesystems[2743]: Resized filesystem in /dev/nvme0n1p9 Jul 6 23:45:04.670772 extend-filesystems[2743]: Found nvme1n1 Jul 6 23:45:04.686237 sshd_keygen[2767]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:45:04.643441 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:45:04.643806 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:45:04.657003 systemd[1]: extend-filesystems.service: Consumed 211ms CPU time, 68.8M memory peak. Jul 6 23:45:04.674615 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:45:04.698569 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:45:04.707553 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:45:04.707779 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:45:04.714625 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:45:04.727469 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:45:04.734027 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:45:04.740395 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 6 23:45:04.745687 systemd[1]: Reached target getty.target - Login Prompts. 
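Note: the resize2fs output above gives the before/after sizes in 4 KiB blocks (553472 → 233815889), so the root filesystem grows from roughly 2.1 GiB to roughly 892 GiB. The arithmetic, with the block counts copied from the log:

```python
# Arithmetic for the on-line resize logged above: ext4 block counts at 4 KiB per block.
BLOCK = 4096
before_blocks = 553_472        # initial size reported for /dev/nvme0n1p9
after_blocks = 233_815_889     # size after the on-line resize

gib = 1024 ** 3
print(f"before: {before_blocks * BLOCK / gib:.1f} GiB")   # ~2.1 GiB
print(f"after:  {after_blocks * BLOCK / gib:.1f} GiB")    # ~891.9 GiB
```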
Jul 6 23:45:05.073440 coreos-metadata[2738]: Jul 06 23:45:05.073 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 6 23:45:05.073879 coreos-metadata[2738]: Jul 06 23:45:05.073 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 6 23:45:05.267176 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Jul 6 23:45:05.285171 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link Jul 6 23:45:05.288677 systemd-networkd[2683]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:5a:08:49.network. Jul 6 23:45:05.340424 coreos-metadata[2815]: Jul 06 23:45:05.340 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 6 23:45:05.340980 coreos-metadata[2815]: Jul 06 23:45:05.340 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 6 23:45:05.881173 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Jul 6 23:45:05.898176 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link Jul 6 23:45:05.898598 systemd-networkd[2683]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jul 6 23:45:05.899783 systemd-networkd[2683]: enP1p1s0f0np0: Link UP Jul 6 23:45:05.900026 systemd-networkd[2683]: enP1p1s0f0np0: Gained carrier Jul 6 23:45:05.900891 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:45:05.919170 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 6 23:45:05.930493 systemd-networkd[2683]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:5a:08:48.network. Jul 6 23:45:05.930784 systemd-networkd[2683]: enP1p1s0f1np1: Link UP Jul 6 23:45:05.930987 systemd-networkd[2683]: enP1p1s0f1np1: Gained carrier Jul 6 23:45:05.940452 systemd-networkd[2683]: bond0: Link UP Jul 6 23:45:05.940743 systemd-networkd[2683]: bond0: Gained carrier Jul 6 23:45:05.940920 systemd-timesyncd[2685]: Network configuration changed, trying to establish connection. Jul 6 23:45:05.941514 systemd-timesyncd[2685]: Network configuration changed, trying to establish connection. Jul 6 23:45:05.941766 systemd-timesyncd[2685]: Network configuration changed, trying to establish connection. Jul 6 23:45:05.941902 systemd-timesyncd[2685]: Network configuration changed, trying to establish connection. Jul 6 23:45:06.020819 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex Jul 6 23:45:06.020865 kernel: bond0: active interface up! Jul 6 23:45:06.145173 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex Jul 6 23:45:07.073980 coreos-metadata[2738]: Jul 06 23:45:07.073 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Jul 6 23:45:07.341013 coreos-metadata[2815]: Jul 06 23:45:07.340 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Jul 6 23:45:07.537302 systemd-networkd[2683]: bond0: Gained IPv6LL Jul 6 23:45:07.537690 systemd-timesyncd[2685]: Network configuration changed, trying to establish connection. Jul 6 23:45:07.537838 systemd-timesyncd[2685]: Network configuration changed, trying to establish connection. Jul 6 23:45:07.537961 systemd-timesyncd[2685]: Network configuration changed, trying to establish connection. Jul 6 23:45:07.539522 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
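Note: the coreos-metadata fetches of https://metadata.packet.net/metadata fail on attempts #1 and #2 and only succeed once bond0 is up and routable. A minimal retry loop in the same spirit, not the agent's actual implementation (URL taken from the log):

```python
# Minimal sketch of a fetch-with-retry loop like the one the coreos-metadata
# log lines imply. Not the real agent.
import time
import urllib.request

URL = "https://metadata.packet.net/metadata"

def fetch_metadata(attempts=10, delay=2.0):
    for attempt in range(1, attempts + 1):
        print(f"Fetching {URL}: Attempt #{attempt}")
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                return resp.read()
        except OSError as exc:          # urllib's URLError subclasses OSError
            print(f"Failed to fetch: {exc}")
            time.sleep(delay)
    raise RuntimeError("metadata never became reachable")

if __name__ == "__main__":
    fetch_metadata()
```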
Jul 6 23:45:07.545471 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:45:07.564349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:45:07.571262 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:45:07.593641 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:45:08.175513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:45:08.181621 (kubelet)[2883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:45:08.580389 kubelet[2883]: E0706 23:45:08.580305 2883 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:45:08.582678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:45:08.582826 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:45:08.583152 systemd[1]: kubelet.service: Consumed 742ms CPU time, 273.9M memory peak. Jul 6 23:45:09.396418 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 Jul 6 23:45:09.396701 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity Jul 6 23:45:09.607981 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:45:09.626402 systemd[1]: Started sshd@0-147.28.150.251:22-139.178.89.65:53626.service - OpenSSH per-connection server daemon (139.178.89.65:53626). Jul 6 23:45:09.783185 login[2859]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 6 23:45:09.783635 login[2858]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:09.793016 systemd-logind[2761]: New session 1 of user core. Jul 6 23:45:09.794400 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:45:09.809388 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:45:09.817321 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:45:09.819727 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:45:09.825502 (systemd)[2919]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:45:09.827526 systemd-logind[2761]: New session c1 of user core. Jul 6 23:45:09.874726 coreos-metadata[2815]: Jul 06 23:45:09.874 INFO Fetch successful Jul 6 23:45:09.925380 unknown[2815]: wrote ssh authorized keys file for user: core Jul 6 23:45:09.943864 systemd[2919]: Queued start job for default target default.target. Jul 6 23:45:09.945217 systemd[2919]: Created slice app.slice - User Application Slice. Jul 6 23:45:09.945242 systemd[2919]: Reached target paths.target - Paths. Jul 6 23:45:09.945275 systemd[2919]: Reached target timers.target - Timers. Jul 6 23:45:09.946524 systemd[2919]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:45:09.950270 update-ssh-keys[2926]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:45:09.951462 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 6 23:45:09.952964 systemd[1]: Finished sshkeys.service. Jul 6 23:45:09.954800 systemd[2919]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
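Note: the kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet (it is presumably written later in provisioning, for example by kubeadm), so systemd keeps scheduling restarts. The failing step is just an open() on that path; a trivial sketch that mirrors the error shape seen above:

```python
# Sketch: mirror the check behind the kubelet error above -- it simply fails
# to open /var/lib/kubelet/config.yaml because nothing has written it yet.
from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")
try:
    data = cfg.read_text()
    print(f"kubelet config present ({len(data)} bytes)")
except FileNotFoundError:
    print(f'"command failed" err="failed to load kubelet config file, '
          f'path: {cfg}, error: open {cfg}: no such file or directory"')
```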
Jul 6 23:45:09.954852 systemd[2919]: Reached target sockets.target - Sockets. Jul 6 23:45:09.954893 systemd[2919]: Reached target basic.target - Basic System. Jul 6 23:45:09.954923 systemd[2919]: Reached target default.target - Main User Target. Jul 6 23:45:09.954945 systemd[2919]: Startup finished in 122ms. Jul 6 23:45:09.955359 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:45:09.956893 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:45:10.042978 sshd[2909]: Accepted publickey for core from 139.178.89.65 port 53626 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY Jul 6 23:45:10.044268 sshd-session[2909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:10.047389 systemd-logind[2761]: New session 3 of user core. Jul 6 23:45:10.056262 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:45:10.400998 coreos-metadata[2738]: Jul 06 23:45:10.400 INFO Fetch successful Jul 6 23:45:10.410999 systemd[1]: Started sshd@1-147.28.150.251:22-139.178.89.65:46296.service - OpenSSH per-connection server daemon (139.178.89.65:46296). Jul 6 23:45:10.464926 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:45:10.466843 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jul 6 23:45:10.784604 login[2859]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:10.787889 systemd-logind[2761]: New session 2 of user core. Jul 6 23:45:10.797325 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:45:10.810570 sshd[2946]: Accepted publickey for core from 139.178.89.65 port 46296 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY Jul 6 23:45:10.811728 sshd-session[2946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:10.814535 systemd-logind[2761]: New session 4 of user core. Jul 6 23:45:10.815103 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jul 6 23:45:10.830444 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:45:10.830571 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:45:10.834227 systemd[1]: Startup finished in 3.233s (kernel) + 18.994s (initrd) + 10.475s (userspace) = 32.704s. Jul 6 23:45:11.101643 sshd[2964]: Connection closed by 139.178.89.65 port 46296 Jul 6 23:45:11.101986 sshd-session[2946]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:11.104627 systemd[1]: sshd@1-147.28.150.251:22-139.178.89.65:46296.service: Deactivated successfully. Jul 6 23:45:11.106248 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:45:11.106778 systemd-logind[2761]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:45:11.107325 systemd-logind[2761]: Removed session 4. Jul 6 23:45:11.173967 systemd[1]: Started sshd@2-147.28.150.251:22-139.178.89.65:46300.service - OpenSSH per-connection server daemon (139.178.89.65:46300). Jul 6 23:45:11.578901 sshd[2971]: Accepted publickey for core from 139.178.89.65 port 46300 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY Jul 6 23:45:11.579882 sshd-session[2971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:11.582948 systemd-logind[2761]: New session 5 of user core. Jul 6 23:45:11.594276 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 6 23:45:11.869718 sshd[2973]: Connection closed by 139.178.89.65 port 46300 Jul 6 23:45:11.870051 sshd-session[2971]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:11.872788 systemd[1]: sshd@2-147.28.150.251:22-139.178.89.65:46300.service: Deactivated successfully. Jul 6 23:45:11.874344 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:45:11.874855 systemd-logind[2761]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:45:11.875381 systemd-logind[2761]: Removed session 5. Jul 6 23:45:11.943860 systemd[1]: Started sshd@3-147.28.150.251:22-139.178.89.65:46316.service - OpenSSH per-connection server daemon (139.178.89.65:46316). Jul 6 23:45:12.354187 sshd[2979]: Accepted publickey for core from 139.178.89.65 port 46316 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY Jul 6 23:45:12.355230 sshd-session[2979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:12.358068 systemd-logind[2761]: New session 6 of user core. Jul 6 23:45:12.369266 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:45:12.652659 sshd[2981]: Connection closed by 139.178.89.65 port 46316 Jul 6 23:45:12.653157 sshd-session[2979]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:12.656615 systemd[1]: sshd@3-147.28.150.251:22-139.178.89.65:46316.service: Deactivated successfully. Jul 6 23:45:12.658837 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:45:12.659395 systemd-logind[2761]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:45:12.659937 systemd-logind[2761]: Removed session 6. Jul 6 23:45:12.725843 systemd[1]: Started sshd@4-147.28.150.251:22-139.178.89.65:46326.service - OpenSSH per-connection server daemon (139.178.89.65:46326). Jul 6 23:45:12.923785 systemd-timesyncd[2685]: Network configuration changed, trying to establish connection. Jul 6 23:45:13.136006 sshd[2988]: Accepted publickey for core from 139.178.89.65 port 46326 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY Jul 6 23:45:13.137041 sshd-session[2988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:13.140195 systemd-logind[2761]: New session 7 of user core. Jul 6 23:45:13.151266 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:45:13.377541 sudo[2991]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:45:13.377801 sudo[2991]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:45:13.395094 sudo[2991]: pam_unix(sudo:session): session closed for user root Jul 6 23:45:13.458841 sshd[2990]: Connection closed by 139.178.89.65 port 46326 Jul 6 23:45:13.459545 sshd-session[2988]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:13.463444 systemd[1]: sshd@4-147.28.150.251:22-139.178.89.65:46326.service: Deactivated successfully. Jul 6 23:45:13.465806 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:45:13.466412 systemd-logind[2761]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:45:13.467027 systemd-logind[2761]: Removed session 7. Jul 6 23:45:13.533094 systemd[1]: Started sshd@5-147.28.150.251:22-139.178.89.65:46330.service - OpenSSH per-connection server daemon (139.178.89.65:46330). 
Jul 6 23:45:13.933823 sshd[2997]: Accepted publickey for core from 139.178.89.65 port 46330 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY Jul 6 23:45:13.935110 sshd-session[2997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:13.938084 systemd-logind[2761]: New session 8 of user core. Jul 6 23:45:13.954261 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:45:14.163978 sudo[3001]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:45:14.164240 sudo[3001]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:45:14.166764 sudo[3001]: pam_unix(sudo:session): session closed for user root Jul 6 23:45:14.171038 sudo[3000]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:45:14.171294 sudo[3000]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:45:14.194374 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:45:14.216211 augenrules[3023]: No rules Jul 6 23:45:14.217288 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:45:14.218286 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:45:14.219023 sudo[3000]: pam_unix(sudo:session): session closed for user root Jul 6 23:45:14.280955 sshd[2999]: Connection closed by 139.178.89.65 port 46330 Jul 6 23:45:14.281308 sshd-session[2997]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:14.284002 systemd[1]: sshd@5-147.28.150.251:22-139.178.89.65:46330.service: Deactivated successfully. Jul 6 23:45:14.285518 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:45:14.286043 systemd-logind[2761]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:45:14.286596 systemd-logind[2761]: Removed session 8. Jul 6 23:45:14.352870 systemd[1]: Started sshd@6-147.28.150.251:22-139.178.89.65:46338.service - OpenSSH per-connection server daemon (139.178.89.65:46338). Jul 6 23:45:14.757243 sshd[3033]: Accepted publickey for core from 139.178.89.65 port 46338 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY Jul 6 23:45:14.758271 sshd-session[3033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:45:14.761320 systemd-logind[2761]: New session 9 of user core. Jul 6 23:45:14.771316 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:45:14.988597 sudo[3036]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:45:14.988855 sudo[3036]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:45:15.290368 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:45:15.290586 (dockerd)[3068]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:45:15.499668 dockerd[3068]: time="2025-07-06T23:45:15.499520920Z" level=info msg="Starting up" Jul 6 23:45:15.572915 dockerd[3068]: time="2025-07-06T23:45:15.572840760Z" level=info msg="Loading containers: start." Jul 6 23:45:15.717172 kernel: Initializing XFRM netlink socket Jul 6 23:45:15.735576 systemd-timesyncd[2685]: Network configuration changed, trying to establish connection. 
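Note: dockerd is starting up here; a few entries below it reports "API listen on /run/docker.sock", at which point the daemon can be health-checked over that unix socket. A stdlib-only sketch (assuming the default /run/docker.sock path shown in the log):

```python
# Sketch: ping the Docker Engine API over its unix socket using only the stdlib.
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path: str):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/_ping")               # Engine API health endpoint
resp = conn.getresponse()
print(resp.status, resp.read().decode())    # expect: 200 OK
```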
Jul 6 23:45:15.786748 systemd-networkd[2683]: docker0: Link UP Jul 6 23:45:15.816268 dockerd[3068]: time="2025-07-06T23:45:15.816237360Z" level=info msg="Loading containers: done." Jul 6 23:45:15.825022 dockerd[3068]: time="2025-07-06T23:45:15.824959320Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:45:15.825100 dockerd[3068]: time="2025-07-06T23:45:15.825037240Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 6 23:45:15.825226 dockerd[3068]: time="2025-07-06T23:45:15.825209800Z" level=info msg="Daemon has completed initialization" Jul 6 23:45:15.845092 dockerd[3068]: time="2025-07-06T23:45:15.844945000Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:45:15.845076 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:45:16.395634 systemd-resolved[2684]: Clock change detected. Flushing caches. Jul 6 23:45:16.395787 systemd-timesyncd[2685]: Contacted time server [2600:3c00::f03c:93ff:fe5b:29d1]:123 (2.flatcar.pool.ntp.org). Jul 6 23:45:16.395841 systemd-timesyncd[2685]: Initial clock synchronization to Sun 2025-07-06 23:45:16.395576 UTC. Jul 6 23:45:16.828451 containerd[2777]: time="2025-07-06T23:45:16.828418420Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 6 23:45:16.991472 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3474817225-merged.mount: Deactivated successfully. Jul 6 23:45:17.316592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1282154353.mount: Deactivated successfully. Jul 6 23:45:18.416082 containerd[2777]: time="2025-07-06T23:45:18.415998980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:18.416082 containerd[2777]: time="2025-07-06T23:45:18.416046020Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651793" Jul 6 23:45:18.417045 containerd[2777]: time="2025-07-06T23:45:18.417018300Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:18.419926 containerd[2777]: time="2025-07-06T23:45:18.419890060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:18.421054 containerd[2777]: time="2025-07-06T23:45:18.421000860Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.59254424s" Jul 6 23:45:18.421054 containerd[2777]: time="2025-07-06T23:45:18.421032900Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 6 23:45:18.422208 containerd[2777]: time="2025-07-06T23:45:18.422180980Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 6 23:45:19.261994 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:45:19.272020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:45:19.383052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:45:19.386434 (kubelet)[3370]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:45:19.437887 kubelet[3370]: E0706 23:45:19.437848 3370 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:45:19.440877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:45:19.441026 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:45:19.441928 systemd[1]: kubelet.service: Consumed 147ms CPU time, 117.5M memory peak. Jul 6 23:45:19.606452 containerd[2777]: time="2025-07-06T23:45:19.606355460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:19.606452 containerd[2777]: time="2025-07-06T23:45:19.606412140Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459677" Jul 6 23:45:19.607484 containerd[2777]: time="2025-07-06T23:45:19.607460540Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:19.610275 containerd[2777]: time="2025-07-06T23:45:19.610257180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:19.611305 containerd[2777]: time="2025-07-06T23:45:19.611280260Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.1889764s" Jul 6 23:45:19.611330 containerd[2777]: time="2025-07-06T23:45:19.611311500Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 6 23:45:19.611664 containerd[2777]: time="2025-07-06T23:45:19.611646020Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 6 23:45:20.441404 containerd[2777]: time="2025-07-06T23:45:20.441363940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:20.441531 containerd[2777]: time="2025-07-06T23:45:20.441406180Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125066" Jul 6 23:45:20.442361 containerd[2777]: time="2025-07-06T23:45:20.442342020Z" level=info 
msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:20.445311 containerd[2777]: time="2025-07-06T23:45:20.445286340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:20.446430 containerd[2777]: time="2025-07-06T23:45:20.446403580Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 834.72832ms" Jul 6 23:45:20.446456 containerd[2777]: time="2025-07-06T23:45:20.446438220Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 6 23:45:20.446820 containerd[2777]: time="2025-07-06T23:45:20.446799260Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 6 23:45:21.141269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700892895.mount: Deactivated successfully. Jul 6 23:45:21.351801 containerd[2777]: time="2025-07-06T23:45:21.351757460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:21.352128 containerd[2777]: time="2025-07-06T23:45:21.351797940Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957" Jul 6 23:45:21.352536 containerd[2777]: time="2025-07-06T23:45:21.352516460Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:21.354309 containerd[2777]: time="2025-07-06T23:45:21.354281180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:21.354977 containerd[2777]: time="2025-07-06T23:45:21.354956060Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 908.12188ms" Jul 6 23:45:21.355004 containerd[2777]: time="2025-07-06T23:45:21.354986100Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 6 23:45:21.355331 containerd[2777]: time="2025-07-06T23:45:21.355315780Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:45:21.686088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2626844572.mount: Deactivated successfully. 
Jul 6 23:45:22.271831 containerd[2777]: time="2025-07-06T23:45:22.271782500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:22.271976 containerd[2777]: time="2025-07-06T23:45:22.271801100Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 6 23:45:22.272949 containerd[2777]: time="2025-07-06T23:45:22.272922420Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:22.275940 containerd[2777]: time="2025-07-06T23:45:22.275922220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:22.277073 containerd[2777]: time="2025-07-06T23:45:22.277033300Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 921.68508ms" Jul 6 23:45:22.277103 containerd[2777]: time="2025-07-06T23:45:22.277083700Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 6 23:45:22.278786 containerd[2777]: time="2025-07-06T23:45:22.278765180Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:45:22.571243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3925592347.mount: Deactivated successfully. 
Jul 6 23:45:22.571599 containerd[2777]: time="2025-07-06T23:45:22.571544020Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 6 23:45:22.571760 containerd[2777]: time="2025-07-06T23:45:22.571557260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:22.572514 containerd[2777]: time="2025-07-06T23:45:22.572490020Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:22.574486 containerd[2777]: time="2025-07-06T23:45:22.574462980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:22.575225 containerd[2777]: time="2025-07-06T23:45:22.575200420Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 296.40464ms" Jul 6 23:45:22.575260 containerd[2777]: time="2025-07-06T23:45:22.575229860Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 6 23:45:22.575558 containerd[2777]: time="2025-07-06T23:45:22.575540140Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 6 23:45:22.827440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4255377709.mount: Deactivated successfully. Jul 6 23:45:24.710757 containerd[2777]: time="2025-07-06T23:45:24.710713260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:24.711181 containerd[2777]: time="2025-07-06T23:45:24.710755140Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" Jul 6 23:45:24.711956 containerd[2777]: time="2025-07-06T23:45:24.711930940Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:24.715021 containerd[2777]: time="2025-07-06T23:45:24.714998340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:24.716310 containerd[2777]: time="2025-07-06T23:45:24.716283340Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.14071432s" Jul 6 23:45:24.716336 containerd[2777]: time="2025-07-06T23:45:24.716317900Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 6 23:45:29.691438 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
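Note: the containerd "Pulled image ... in <t>" entries above include both the image size and the wall-clock time, so the effective pull rate can be read straight off the log. For reference, with the sizes and durations copied from those entries:

```python
# Back-of-the-envelope pull rates from the containerd "Pulled image" lines above.
pulls = {
    "kube-apiserver:v1.31.10": (25_648_593, 1.59254424),        # bytes, seconds
    "kube-controller-manager:v1.31.10": (23_995_467, 1.1889764),
    "kube-scheduler:v1.31.10": (18_660_874, 0.83472832),
    "kube-proxy:v1.31.10": (26_914_976, 0.90812188),
    "etcd:3.5.15-0": (66_535_646, 2.14071432),
}
for image, (size, seconds) in pulls.items():
    print(f"{image}: {size / seconds / 1e6:.1f} MB/s")
```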
Jul 6 23:45:29.702074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:45:29.803819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:45:29.807199 (kubelet)[3608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:45:29.838573 kubelet[3608]: E0706 23:45:29.838538 3608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:45:29.840825 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:45:29.840974 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:45:29.841281 systemd[1]: kubelet.service: Consumed 135ms CPU time, 119.6M memory peak. Jul 6 23:45:30.767403 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:45:30.767616 systemd[1]: kubelet.service: Consumed 135ms CPU time, 119.6M memory peak. Jul 6 23:45:30.781207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:45:30.799828 systemd[1]: Reload requested from client PID 3639 ('systemctl') (unit session-9.scope)... Jul 6 23:45:30.799839 systemd[1]: Reloading... Jul 6 23:45:30.886880 zram_generator::config[3689]: No configuration found. Jul 6 23:45:30.977800 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:45:31.069179 systemd[1]: Reloading finished in 269 ms. Jul 6 23:45:31.112353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:45:31.115272 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:45:31.115959 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:45:31.116280 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:45:31.116428 systemd[1]: kubelet.service: Consumed 83ms CPU time, 95.1M memory peak. Jul 6 23:45:31.119158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:45:31.223110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:45:31.226478 (kubelet)[3753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:45:31.256792 kubelet[3753]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:45:31.256792 kubelet[3753]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:45:31.256792 kubelet[3753]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:45:31.257112 kubelet[3753]: I0706 23:45:31.256841 3753 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:45:31.919204 kubelet[3753]: I0706 23:45:31.919172 3753 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:45:31.919204 kubelet[3753]: I0706 23:45:31.919197 3753 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:45:31.919414 kubelet[3753]: I0706 23:45:31.919397 3753 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:45:31.938746 kubelet[3753]: E0706 23:45:31.938718 3753 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.150.251:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.150.251:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:45:31.939417 kubelet[3753]: I0706 23:45:31.939405 3753 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:45:31.944580 kubelet[3753]: E0706 23:45:31.944559 3753 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:45:31.944608 kubelet[3753]: I0706 23:45:31.944581 3753 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:45:31.965400 kubelet[3753]: I0706 23:45:31.965372 3753 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:45:31.966227 kubelet[3753]: I0706 23:45:31.966208 3753 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:45:31.966369 kubelet[3753]: I0706 23:45:31.966341 3753 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:45:31.966520 kubelet[3753]: I0706 23:45:31.966370 3753 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.1-a-784d2181dd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:45:31.966601 kubelet[3753]: I0706 23:45:31.966593 3753 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:45:31.966623 kubelet[3753]: I0706 23:45:31.966602 3753 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:45:31.966835 kubelet[3753]: I0706 23:45:31.966827 3753 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:45:31.968962 kubelet[3753]: I0706 23:45:31.968947 3753 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:45:31.968988 kubelet[3753]: I0706 23:45:31.968972 3753 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:45:31.969012 kubelet[3753]: I0706 23:45:31.968992 3753 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:45:31.969069 kubelet[3753]: I0706 23:45:31.969061 3753 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:45:31.971114 kubelet[3753]: W0706 23:45:31.971006 3753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.150.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-784d2181dd&limit=500&resourceVersion=0": dial tcp 147.28.150.251:6443: connect: connection refused Jul 6 23:45:31.971144 kubelet[3753]: E0706 23:45:31.971132 3753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://147.28.150.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-784d2181dd&limit=500&resourceVersion=0\": dial tcp 147.28.150.251:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:45:31.971785 kubelet[3753]: W0706 23:45:31.971748 3753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.150.251:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.150.251:6443: connect: connection refused Jul 6 23:45:31.971813 kubelet[3753]: E0706 23:45:31.971797 3753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.150.251:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.150.251:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:45:31.972389 kubelet[3753]: I0706 23:45:31.972373 3753 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:45:31.973093 kubelet[3753]: I0706 23:45:31.973082 3753 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:45:31.973260 kubelet[3753]: W0706 23:45:31.973253 3753 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:45:31.974206 kubelet[3753]: I0706 23:45:31.974194 3753 server.go:1274] "Started kubelet" Jul 6 23:45:31.974295 kubelet[3753]: I0706 23:45:31.974261 3753 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:45:31.974509 kubelet[3753]: I0706 23:45:31.974498 3753 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:45:31.974534 kubelet[3753]: I0706 23:45:31.974494 3753 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:45:31.975559 kubelet[3753]: I0706 23:45:31.975544 3753 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:45:31.975627 kubelet[3753]: I0706 23:45:31.975609 3753 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:45:31.975651 kubelet[3753]: I0706 23:45:31.975620 3753 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:45:31.975675 kubelet[3753]: I0706 23:45:31.975653 3753 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:45:31.975749 kubelet[3753]: I0706 23:45:31.975713 3753 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:45:31.976752 kubelet[3753]: I0706 23:45:31.976730 3753 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:45:31.976935 kubelet[3753]: I0706 23:45:31.976917 3753 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:45:31.976996 kubelet[3753]: E0706 23:45:31.976513 3753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-784d2181dd\" not found" Jul 6 23:45:31.977261 kubelet[3753]: E0706 23:45:31.977219 3753 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://147.28.150.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-784d2181dd?timeout=10s\": dial tcp 147.28.150.251:6443: connect: connection refused" interval="200ms" Jul 6 23:45:31.977523 kubelet[3753]: W0706 23:45:31.977370 3753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.150.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.150.251:6443: connect: connection refused Jul 6 23:45:31.977560 kubelet[3753]: E0706 23:45:31.977546 3753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.150.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.150.251:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:45:31.979579 kubelet[3753]: E0706 23:45:31.979558 3753 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:45:31.979677 kubelet[3753]: I0706 23:45:31.979662 3753 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:45:31.980393 kubelet[3753]: I0706 23:45:31.980377 3753 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:45:31.981466 kubelet[3753]: E0706 23:45:31.980439 3753 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.150.251:6443/api/v1/namespaces/default/events\": dial tcp 147.28.150.251:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.1-a-784d2181dd.184fce3acaacf84c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.1-a-784d2181dd,UID:ci-4230.2.1-a-784d2181dd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.1-a-784d2181dd,},FirstTimestamp:2025-07-06 23:45:31.9741707 +0000 UTC m=+0.744887401,LastTimestamp:2025-07-06 23:45:31.9741707 +0000 UTC m=+0.744887401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.1-a-784d2181dd,}" Jul 6 23:45:31.990094 kubelet[3753]: I0706 23:45:31.990060 3753 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:45:31.991110 kubelet[3753]: I0706 23:45:31.991098 3753 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:45:31.991133 kubelet[3753]: I0706 23:45:31.991116 3753 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:45:31.991133 kubelet[3753]: I0706 23:45:31.991132 3753 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:45:31.991183 kubelet[3753]: E0706 23:45:31.991168 3753 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:45:31.992451 kubelet[3753]: W0706 23:45:31.992416 3753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.150.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.150.251:6443: connect: connection refused Jul 6 23:45:31.992479 kubelet[3753]: E0706 23:45:31.992466 3753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.150.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.150.251:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:45:31.992970 kubelet[3753]: I0706 23:45:31.992958 3753 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:45:31.992993 kubelet[3753]: I0706 23:45:31.992970 3753 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:45:31.992993 kubelet[3753]: I0706 23:45:31.992985 3753 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:45:31.993662 kubelet[3753]: I0706 23:45:31.993651 3753 policy_none.go:49] "None policy: Start" Jul 6 23:45:31.994004 kubelet[3753]: I0706 23:45:31.993995 3753 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:45:31.994031 kubelet[3753]: I0706 23:45:31.994014 3753 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:45:32.000530 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:45:32.014207 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:45:32.029243 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:45:32.038131 kubelet[3753]: I0706 23:45:32.038108 3753 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:45:32.038309 kubelet[3753]: I0706 23:45:32.038297 3753 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:45:32.038341 kubelet[3753]: I0706 23:45:32.038310 3753 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:45:32.038497 kubelet[3753]: I0706 23:45:32.038480 3753 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:45:32.039130 kubelet[3753]: E0706 23:45:32.039112 3753 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.1-a-784d2181dd\" not found" Jul 6 23:45:32.099295 systemd[1]: Created slice kubepods-burstable-pod7b496625925904d2c4141a8914c60566.slice - libcontainer container kubepods-burstable-pod7b496625925904d2c4141a8914c60566.slice. Jul 6 23:45:32.113858 systemd[1]: Created slice kubepods-burstable-podd82d383feb92817c5c866eb09fc6551a.slice - libcontainer container kubepods-burstable-podd82d383feb92817c5c866eb09fc6551a.slice. 
Jul 6 23:45:32.133133 systemd[1]: Created slice kubepods-burstable-pod8fb89bd9fbcc8b7a4a0766b088f1d3e5.slice - libcontainer container kubepods-burstable-pod8fb89bd9fbcc8b7a4a0766b088f1d3e5.slice. Jul 6 23:45:32.140648 kubelet[3753]: I0706 23:45:32.140624 3753 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.1-a-784d2181dd" Jul 6 23:45:32.141065 kubelet[3753]: E0706 23:45:32.141039 3753 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.150.251:6443/api/v1/nodes\": dial tcp 147.28.150.251:6443: connect: connection refused" node="ci-4230.2.1-a-784d2181dd" Jul 6 23:45:32.178584 kubelet[3753]: E0706 23:45:32.178508 3753 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.150.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-784d2181dd?timeout=10s\": dial tcp 147.28.150.251:6443: connect: connection refused" interval="400ms" Jul 6 23:45:32.276664 kubelet[3753]: I0706 23:45:32.276622 3753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d82d383feb92817c5c866eb09fc6551a-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.1-a-784d2181dd\" (UID: \"d82d383feb92817c5c866eb09fc6551a\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:32.277080 kubelet[3753]: I0706 23:45:32.276670 3753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d82d383feb92817c5c866eb09fc6551a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.1-a-784d2181dd\" (UID: \"d82d383feb92817c5c866eb09fc6551a\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:32.277080 kubelet[3753]: I0706 23:45:32.276704 3753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d82d383feb92817c5c866eb09fc6551a-ca-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-784d2181dd\" (UID: \"d82d383feb92817c5c866eb09fc6551a\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:32.277080 kubelet[3753]: I0706 23:45:32.276737 3753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d82d383feb92817c5c866eb09fc6551a-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.1-a-784d2181dd\" (UID: \"d82d383feb92817c5c866eb09fc6551a\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:32.277080 kubelet[3753]: I0706 23:45:32.276815 3753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d82d383feb92817c5c866eb09fc6551a-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-784d2181dd\" (UID: \"d82d383feb92817c5c866eb09fc6551a\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:32.277080 kubelet[3753]: I0706 23:45:32.276908 3753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fb89bd9fbcc8b7a4a0766b088f1d3e5-kubeconfig\") pod \"kube-scheduler-ci-4230.2.1-a-784d2181dd\" (UID: \"8fb89bd9fbcc8b7a4a0766b088f1d3e5\") " 
pod="kube-system/kube-scheduler-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:32.277243 kubelet[3753]: I0706 23:45:32.276943 3753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b496625925904d2c4141a8914c60566-ca-certs\") pod \"kube-apiserver-ci-4230.2.1-a-784d2181dd\" (UID: \"7b496625925904d2c4141a8914c60566\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:32.277243 kubelet[3753]: I0706 23:45:32.276970 3753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b496625925904d2c4141a8914c60566-k8s-certs\") pod \"kube-apiserver-ci-4230.2.1-a-784d2181dd\" (UID: \"7b496625925904d2c4141a8914c60566\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:32.277243 kubelet[3753]: I0706 23:45:32.276997 3753 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b496625925904d2c4141a8914c60566-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.1-a-784d2181dd\" (UID: \"7b496625925904d2c4141a8914c60566\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:32.343290 kubelet[3753]: I0706 23:45:32.343266 3753 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.1-a-784d2181dd" Jul 6 23:45:32.343525 kubelet[3753]: E0706 23:45:32.343501 3753 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.150.251:6443/api/v1/nodes\": dial tcp 147.28.150.251:6443: connect: connection refused" node="ci-4230.2.1-a-784d2181dd" Jul 6 23:45:32.412516 containerd[2777]: time="2025-07-06T23:45:32.412467820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.1-a-784d2181dd,Uid:7b496625925904d2c4141a8914c60566,Namespace:kube-system,Attempt:0,}" Jul 6 23:45:32.415895 containerd[2777]: time="2025-07-06T23:45:32.415855060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.1-a-784d2181dd,Uid:d82d383feb92817c5c866eb09fc6551a,Namespace:kube-system,Attempt:0,}" Jul 6 23:45:32.437214 containerd[2777]: time="2025-07-06T23:45:32.437160380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.1-a-784d2181dd,Uid:8fb89bd9fbcc8b7a4a0766b088f1d3e5,Namespace:kube-system,Attempt:0,}" Jul 6 23:45:32.579496 kubelet[3753]: E0706 23:45:32.579459 3753 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.150.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-784d2181dd?timeout=10s\": dial tcp 147.28.150.251:6443: connect: connection refused" interval="800ms" Jul 6 23:45:32.757809 kubelet[3753]: I0706 23:45:32.745216 3753 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.1-a-784d2181dd" Jul 6 23:45:32.757809 kubelet[3753]: E0706 23:45:32.745457 3753 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.150.251:6443/api/v1/nodes\": dial tcp 147.28.150.251:6443: connect: connection refused" node="ci-4230.2.1-a-784d2181dd" Jul 6 23:45:32.793190 kubelet[3753]: W0706 23:45:32.793145 3753 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://147.28.150.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-784d2181dd&limit=500&resourceVersion=0": dial tcp 147.28.150.251:6443: connect: connection refused Jul 6 23:45:32.793255 kubelet[3753]: E0706 23:45:32.793196 3753 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.150.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-784d2181dd&limit=500&resourceVersion=0\": dial tcp 147.28.150.251:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:45:32.801526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3124670433.mount: Deactivated successfully. Jul 6 23:45:32.801909 containerd[2777]: time="2025-07-06T23:45:32.801882300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:45:32.802338 containerd[2777]: time="2025-07-06T23:45:32.802304700Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 6 23:45:32.802645 containerd[2777]: time="2025-07-06T23:45:32.802625860Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:45:32.803182 containerd[2777]: time="2025-07-06T23:45:32.803158340Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:45:32.803282 containerd[2777]: time="2025-07-06T23:45:32.803266620Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:45:32.803442 containerd[2777]: time="2025-07-06T23:45:32.803415100Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:45:32.807426 containerd[2777]: time="2025-07-06T23:45:32.807405340Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:45:32.808207 containerd[2777]: time="2025-07-06T23:45:32.808182020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 395.63164ms" Jul 6 23:45:32.809656 containerd[2777]: time="2025-07-06T23:45:32.809613980Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 393.66456ms" Jul 6 23:45:32.811593 containerd[2777]: time="2025-07-06T23:45:32.811541980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:45:32.812458 
containerd[2777]: time="2025-07-06T23:45:32.812433620Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 375.2072ms" Jul 6 23:45:32.939838 containerd[2777]: time="2025-07-06T23:45:32.939496620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:45:32.939838 containerd[2777]: time="2025-07-06T23:45:32.939831220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:45:32.939981 containerd[2777]: time="2025-07-06T23:45:32.939844300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:45:32.939981 containerd[2777]: time="2025-07-06T23:45:32.939567900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:45:32.939981 containerd[2777]: time="2025-07-06T23:45:32.939922220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:45:32.939981 containerd[2777]: time="2025-07-06T23:45:32.939934980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:45:32.939981 containerd[2777]: time="2025-07-06T23:45:32.939938900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:45:32.940084 containerd[2777]: time="2025-07-06T23:45:32.940012340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:45:32.941342 containerd[2777]: time="2025-07-06T23:45:32.941290380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:45:32.941372 containerd[2777]: time="2025-07-06T23:45:32.941343500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:45:32.941372 containerd[2777]: time="2025-07-06T23:45:32.941355020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:45:32.941443 containerd[2777]: time="2025-07-06T23:45:32.941427380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:45:32.968070 systemd[1]: Started cri-containerd-13736699ce10088b7da4cfd7938d17cfad75729c92cbde8f0c61d6c623d83af9.scope - libcontainer container 13736699ce10088b7da4cfd7938d17cfad75729c92cbde8f0c61d6c623d83af9. Jul 6 23:45:32.969600 systemd[1]: Started cri-containerd-801e7e004a1f462016b2daf7649f6094f58585669d7232db448b260687330ffe.scope - libcontainer container 801e7e004a1f462016b2daf7649f6094f58585669d7232db448b260687330ffe. Jul 6 23:45:32.971066 systemd[1]: Started cri-containerd-83bece1db8ab10feda4f3e5e890dff6d7878428f46055d3d4a5297f55b0c89d8.scope - libcontainer container 83bece1db8ab10feda4f3e5e890dff6d7878428f46055d3d4a5297f55b0c89d8. 
Jul 6 23:45:32.991771 containerd[2777]: time="2025-07-06T23:45:32.991736500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.1-a-784d2181dd,Uid:8fb89bd9fbcc8b7a4a0766b088f1d3e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"13736699ce10088b7da4cfd7938d17cfad75729c92cbde8f0c61d6c623d83af9\"" Jul 6 23:45:32.993003 containerd[2777]: time="2025-07-06T23:45:32.992953780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.1-a-784d2181dd,Uid:d82d383feb92817c5c866eb09fc6551a,Namespace:kube-system,Attempt:0,} returns sandbox id \"801e7e004a1f462016b2daf7649f6094f58585669d7232db448b260687330ffe\"" Jul 6 23:45:32.994012 containerd[2777]: time="2025-07-06T23:45:32.993943260Z" level=info msg="CreateContainer within sandbox \"13736699ce10088b7da4cfd7938d17cfad75729c92cbde8f0c61d6c623d83af9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:45:32.994396 containerd[2777]: time="2025-07-06T23:45:32.994330100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.1-a-784d2181dd,Uid:7b496625925904d2c4141a8914c60566,Namespace:kube-system,Attempt:0,} returns sandbox id \"83bece1db8ab10feda4f3e5e890dff6d7878428f46055d3d4a5297f55b0c89d8\"" Jul 6 23:45:32.994396 containerd[2777]: time="2025-07-06T23:45:32.994384740Z" level=info msg="CreateContainer within sandbox \"801e7e004a1f462016b2daf7649f6094f58585669d7232db448b260687330ffe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:45:32.995964 containerd[2777]: time="2025-07-06T23:45:32.995937780Z" level=info msg="CreateContainer within sandbox \"83bece1db8ab10feda4f3e5e890dff6d7878428f46055d3d4a5297f55b0c89d8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:45:33.005221 containerd[2777]: time="2025-07-06T23:45:33.005165180Z" level=info msg="CreateContainer within sandbox \"13736699ce10088b7da4cfd7938d17cfad75729c92cbde8f0c61d6c623d83af9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bbbd67d4de3bbc1f269d8f6b2e2a75489996fbad4ee130de03ab7eb31152d4f0\"" Jul 6 23:45:33.005631 containerd[2777]: time="2025-07-06T23:45:33.005607500Z" level=info msg="CreateContainer within sandbox \"801e7e004a1f462016b2daf7649f6094f58585669d7232db448b260687330ffe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7a4cb83b358a1f1e009542938537907b7ea4756ecd076c93c59bda3c8d33ff97\"" Jul 6 23:45:33.005716 containerd[2777]: time="2025-07-06T23:45:33.005692100Z" level=info msg="StartContainer for \"bbbd67d4de3bbc1f269d8f6b2e2a75489996fbad4ee130de03ab7eb31152d4f0\"" Jul 6 23:45:33.005773 containerd[2777]: time="2025-07-06T23:45:33.005752380Z" level=info msg="CreateContainer within sandbox \"83bece1db8ab10feda4f3e5e890dff6d7878428f46055d3d4a5297f55b0c89d8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"67e90d9867ce4a508b66c8f3fc6972159ba5227a4532b85466ba4c9b226a9a9b\"" Jul 6 23:45:33.005874 containerd[2777]: time="2025-07-06T23:45:33.005854500Z" level=info msg="StartContainer for \"7a4cb83b358a1f1e009542938537907b7ea4756ecd076c93c59bda3c8d33ff97\"" Jul 6 23:45:33.006007 containerd[2777]: time="2025-07-06T23:45:33.005989060Z" level=info msg="StartContainer for \"67e90d9867ce4a508b66c8f3fc6972159ba5227a4532b85466ba4c9b226a9a9b\"" Jul 6 23:45:33.036984 systemd[1]: Started cri-containerd-67e90d9867ce4a508b66c8f3fc6972159ba5227a4532b85466ba4c9b226a9a9b.scope - libcontainer container 
67e90d9867ce4a508b66c8f3fc6972159ba5227a4532b85466ba4c9b226a9a9b. Jul 6 23:45:33.038165 systemd[1]: Started cri-containerd-7a4cb83b358a1f1e009542938537907b7ea4756ecd076c93c59bda3c8d33ff97.scope - libcontainer container 7a4cb83b358a1f1e009542938537907b7ea4756ecd076c93c59bda3c8d33ff97. Jul 6 23:45:33.039279 systemd[1]: Started cri-containerd-bbbd67d4de3bbc1f269d8f6b2e2a75489996fbad4ee130de03ab7eb31152d4f0.scope - libcontainer container bbbd67d4de3bbc1f269d8f6b2e2a75489996fbad4ee130de03ab7eb31152d4f0. Jul 6 23:45:33.062328 containerd[2777]: time="2025-07-06T23:45:33.062293540Z" level=info msg="StartContainer for \"67e90d9867ce4a508b66c8f3fc6972159ba5227a4532b85466ba4c9b226a9a9b\" returns successfully" Jul 6 23:45:33.063481 containerd[2777]: time="2025-07-06T23:45:33.063447660Z" level=info msg="StartContainer for \"7a4cb83b358a1f1e009542938537907b7ea4756ecd076c93c59bda3c8d33ff97\" returns successfully" Jul 6 23:45:33.064662 containerd[2777]: time="2025-07-06T23:45:33.064634460Z" level=info msg="StartContainer for \"bbbd67d4de3bbc1f269d8f6b2e2a75489996fbad4ee130de03ab7eb31152d4f0\" returns successfully" Jul 6 23:45:33.547820 kubelet[3753]: I0706 23:45:33.547794 3753 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.1-a-784d2181dd" Jul 6 23:45:34.748066 kubelet[3753]: E0706 23:45:34.748022 3753 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.1-a-784d2181dd\" not found" node="ci-4230.2.1-a-784d2181dd" Jul 6 23:45:34.847185 kubelet[3753]: I0706 23:45:34.847160 3753 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.2.1-a-784d2181dd" Jul 6 23:45:34.847301 kubelet[3753]: E0706 23:45:34.847193 3753 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230.2.1-a-784d2181dd\": node \"ci-4230.2.1-a-784d2181dd\" not found" Jul 6 23:45:34.854804 kubelet[3753]: E0706 23:45:34.854772 3753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-784d2181dd\" not found" Jul 6 23:45:34.955896 kubelet[3753]: E0706 23:45:34.955863 3753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-784d2181dd\" not found" Jul 6 23:45:35.056781 kubelet[3753]: E0706 23:45:35.056693 3753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-784d2181dd\" not found" Jul 6 23:45:35.157712 kubelet[3753]: E0706 23:45:35.157683 3753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-784d2181dd\" not found" Jul 6 23:45:35.258164 kubelet[3753]: E0706 23:45:35.258134 3753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-784d2181dd\" not found" Jul 6 23:45:35.358647 kubelet[3753]: E0706 23:45:35.358596 3753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-784d2181dd\" not found" Jul 6 23:45:35.459180 kubelet[3753]: E0706 23:45:35.459155 3753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-784d2181dd\" not found" Jul 6 23:45:35.559842 kubelet[3753]: E0706 23:45:35.559818 3753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-784d2181dd\" not found" Jul 6 23:45:35.660314 kubelet[3753]: E0706 23:45:35.660268 3753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"ci-4230.2.1-a-784d2181dd\" not found" Jul 6 23:45:35.760726 kubelet[3753]: E0706 23:45:35.760709 3753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-784d2181dd\" not found" Jul 6 23:45:35.861315 kubelet[3753]: E0706 23:45:35.861284 3753 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-784d2181dd\" not found" Jul 6 23:45:35.971852 kubelet[3753]: I0706 23:45:35.971794 3753 apiserver.go:52] "Watching apiserver" Jul 6 23:45:35.975789 kubelet[3753]: I0706 23:45:35.975770 3753 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:45:36.383557 kubelet[3753]: W0706 23:45:36.383480 3753 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:45:36.657998 systemd[1]: Reload requested from client PID 4192 ('systemctl') (unit session-9.scope)... Jul 6 23:45:36.658009 systemd[1]: Reloading... Jul 6 23:45:36.734887 zram_generator::config[4243]: No configuration found. Jul 6 23:45:36.824319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:45:36.926206 systemd[1]: Reloading finished in 267 ms. Jul 6 23:45:36.946610 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:45:36.965746 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:45:36.966964 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:45:36.967025 systemd[1]: kubelet.service: Consumed 1.240s CPU time, 154.9M memory peak. Jul 6 23:45:36.976204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:45:37.086858 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:45:37.090361 (kubelet)[4301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:45:37.120189 kubelet[4301]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:45:37.120189 kubelet[4301]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:45:37.120189 kubelet[4301]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:45:37.120355 kubelet[4301]: I0706 23:45:37.120246 4301 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:45:37.125099 kubelet[4301]: I0706 23:45:37.125078 4301 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:45:37.125131 kubelet[4301]: I0706 23:45:37.125101 4301 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:45:37.125319 kubelet[4301]: I0706 23:45:37.125309 4301 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:45:37.126560 kubelet[4301]: I0706 23:45:37.126547 4301 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:45:37.129464 kubelet[4301]: I0706 23:45:37.129444 4301 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:45:37.131836 kubelet[4301]: E0706 23:45:37.131818 4301 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:45:37.131860 kubelet[4301]: I0706 23:45:37.131841 4301 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:45:37.150261 kubelet[4301]: I0706 23:45:37.150236 4301 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:45:37.150375 kubelet[4301]: I0706 23:45:37.150336 4301 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:45:37.150464 kubelet[4301]: I0706 23:45:37.150430 4301 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:45:37.150622 kubelet[4301]: I0706 23:45:37.150460 4301 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4230.2.1-a-784d2181dd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:45:37.150689 kubelet[4301]: I0706 23:45:37.150628 4301 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:45:37.150689 kubelet[4301]: I0706 23:45:37.150639 4301 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:45:37.150689 kubelet[4301]: I0706 23:45:37.150670 4301 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:45:37.150767 kubelet[4301]: I0706 23:45:37.150757 4301 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:45:37.150790 kubelet[4301]: I0706 23:45:37.150768 4301 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:45:37.150790 kubelet[4301]: I0706 23:45:37.150784 4301 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:45:37.150829 kubelet[4301]: I0706 23:45:37.150793 4301 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:45:37.151317 kubelet[4301]: I0706 23:45:37.151302 4301 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:45:37.151767 kubelet[4301]: I0706 23:45:37.151755 4301 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:45:37.152161 kubelet[4301]: I0706 23:45:37.152147 4301 server.go:1274] "Started kubelet" Jul 6 23:45:37.152261 kubelet[4301]: I0706 23:45:37.152188 4301 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:45:37.152288 kubelet[4301]: I0706 23:45:37.152251 4301 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:45:37.152454 kubelet[4301]: I0706 23:45:37.152441 4301 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:45:37.153182 kubelet[4301]: I0706 23:45:37.153168 4301 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:45:37.153210 kubelet[4301]: I0706 23:45:37.153181 4301 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:45:37.153253 kubelet[4301]: E0706 23:45:37.153237 4301 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-784d2181dd\" not found" Jul 6 23:45:37.153274 kubelet[4301]: I0706 23:45:37.153254 4301 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:45:37.153298 kubelet[4301]: I0706 23:45:37.153284 4301 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:45:37.153396 kubelet[4301]: I0706 23:45:37.153386 4301 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:45:37.153470 kubelet[4301]: E0706 23:45:37.153461 4301 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:45:37.155374 kubelet[4301]: I0706 23:45:37.155352 4301 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:45:37.155802 kubelet[4301]: I0706 23:45:37.155785 4301 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:45:37.155802 kubelet[4301]: I0706 23:45:37.155802 4301 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:45:37.155909 kubelet[4301]: I0706 23:45:37.155889 4301 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:45:37.160837 kubelet[4301]: I0706 23:45:37.160807 4301 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:45:37.161807 kubelet[4301]: I0706 23:45:37.161782 4301 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:45:37.161835 kubelet[4301]: I0706 23:45:37.161808 4301 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:45:37.161835 kubelet[4301]: I0706 23:45:37.161827 4301 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:45:37.161915 kubelet[4301]: E0706 23:45:37.161881 4301 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:45:37.186175 kubelet[4301]: I0706 23:45:37.186148 4301 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:45:37.186175 kubelet[4301]: I0706 23:45:37.186165 4301 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:45:37.186308 kubelet[4301]: I0706 23:45:37.186182 4301 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:45:37.186358 kubelet[4301]: I0706 23:45:37.186313 4301 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:45:37.186358 kubelet[4301]: I0706 23:45:37.186324 4301 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:45:37.186358 kubelet[4301]: I0706 23:45:37.186344 4301 policy_none.go:49] "None policy: Start" Jul 6 23:45:37.186786 kubelet[4301]: I0706 23:45:37.186774 4301 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:45:37.186807 kubelet[4301]: I0706 23:45:37.186791 4301 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:45:37.186928 kubelet[4301]: I0706 23:45:37.186920 4301 state_mem.go:75] "Updated machine memory state" Jul 6 23:45:37.189930 kubelet[4301]: I0706 23:45:37.189916 4301 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:45:37.190097 kubelet[4301]: I0706 23:45:37.190081 4301 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:45:37.190140 kubelet[4301]: I0706 23:45:37.190094 4301 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:45:37.190230 kubelet[4301]: I0706 23:45:37.190217 4301 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:45:37.265571 kubelet[4301]: W0706 23:45:37.265551 4301 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:45:37.265612 kubelet[4301]: W0706 23:45:37.265584 4301 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:45:37.265799 kubelet[4301]: W0706 23:45:37.265782 4301 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:45:37.265850 kubelet[4301]: E0706 23:45:37.265834 4301 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230.2.1-a-784d2181dd\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:37.293380 kubelet[4301]: I0706 23:45:37.293359 4301 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.2.1-a-784d2181dd" Jul 6 23:45:37.297137 kubelet[4301]: I0706 23:45:37.297115 4301 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.2.1-a-784d2181dd" Jul 6 23:45:37.297191 kubelet[4301]: I0706 23:45:37.297180 4301 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.2.1-a-784d2181dd" Jul 6 
23:45:37.455278 kubelet[4301]: I0706 23:45:37.455217 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b496625925904d2c4141a8914c60566-ca-certs\") pod \"kube-apiserver-ci-4230.2.1-a-784d2181dd\" (UID: \"7b496625925904d2c4141a8914c60566\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:37.455278 kubelet[4301]: I0706 23:45:37.455244 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d82d383feb92817c5c866eb09fc6551a-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-784d2181dd\" (UID: \"d82d383feb92817c5c866eb09fc6551a\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:37.455278 kubelet[4301]: I0706 23:45:37.455264 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d82d383feb92817c5c866eb09fc6551a-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.1-a-784d2181dd\" (UID: \"d82d383feb92817c5c866eb09fc6551a\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:37.455403 kubelet[4301]: I0706 23:45:37.455279 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d82d383feb92817c5c866eb09fc6551a-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.1-a-784d2181dd\" (UID: \"d82d383feb92817c5c866eb09fc6551a\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:37.455403 kubelet[4301]: I0706 23:45:37.455301 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d82d383feb92817c5c866eb09fc6551a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.1-a-784d2181dd\" (UID: \"d82d383feb92817c5c866eb09fc6551a\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:37.455403 kubelet[4301]: I0706 23:45:37.455329 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fb89bd9fbcc8b7a4a0766b088f1d3e5-kubeconfig\") pod \"kube-scheduler-ci-4230.2.1-a-784d2181dd\" (UID: \"8fb89bd9fbcc8b7a4a0766b088f1d3e5\") " pod="kube-system/kube-scheduler-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:37.455403 kubelet[4301]: I0706 23:45:37.455379 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b496625925904d2c4141a8914c60566-k8s-certs\") pod \"kube-apiserver-ci-4230.2.1-a-784d2181dd\" (UID: \"7b496625925904d2c4141a8914c60566\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:37.455558 kubelet[4301]: I0706 23:45:37.455435 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b496625925904d2c4141a8914c60566-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.1-a-784d2181dd\" (UID: \"7b496625925904d2c4141a8914c60566\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:37.455558 kubelet[4301]: I0706 23:45:37.455486 4301 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d82d383feb92817c5c866eb09fc6551a-ca-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-784d2181dd\" (UID: \"d82d383feb92817c5c866eb09fc6551a\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-784d2181dd" Jul 6 23:45:38.151729 kubelet[4301]: I0706 23:45:38.151696 4301 apiserver.go:52] "Watching apiserver" Jul 6 23:45:38.153854 kubelet[4301]: I0706 23:45:38.153837 4301 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:45:38.183739 kubelet[4301]: I0706 23:45:38.183691 4301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.1-a-784d2181dd" podStartSLOduration=2.18367754 podStartE2EDuration="2.18367754s" podCreationTimestamp="2025-07-06 23:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:45:38.1835761 +0000 UTC m=+1.090323361" watchObservedRunningTime="2025-07-06 23:45:38.18367754 +0000 UTC m=+1.090424761" Jul 6 23:45:38.188925 kubelet[4301]: I0706 23:45:38.188887 4301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.1-a-784d2181dd" podStartSLOduration=1.1888755 podStartE2EDuration="1.1888755s" podCreationTimestamp="2025-07-06 23:45:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:45:38.18884322 +0000 UTC m=+1.095590481" watchObservedRunningTime="2025-07-06 23:45:38.1888755 +0000 UTC m=+1.095622761" Jul 6 23:45:38.199858 kubelet[4301]: I0706 23:45:38.199818 4301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-784d2181dd" podStartSLOduration=1.1998063 podStartE2EDuration="1.1998063s" podCreationTimestamp="2025-07-06 23:45:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:45:38.19402022 +0000 UTC m=+1.100767441" watchObservedRunningTime="2025-07-06 23:45:38.1998063 +0000 UTC m=+1.106553521" Jul 6 23:45:43.226704 kubelet[4301]: I0706 23:45:43.226660 4301 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:45:43.227189 kubelet[4301]: I0706 23:45:43.227089 4301 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:45:43.227219 containerd[2777]: time="2025-07-06T23:45:43.226934460Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:45:43.841922 systemd[1]: Created slice kubepods-besteffort-pod518c4203_48e4_48da_b3aa_6af8bc1acb45.slice - libcontainer container kubepods-besteffort-pod518c4203_48e4_48da_b3aa_6af8bc1acb45.slice. 
Jul 6 23:45:43.895853 kubelet[4301]: I0706 23:45:43.895809 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/518c4203-48e4-48da-b3aa-6af8bc1acb45-xtables-lock\") pod \"kube-proxy-nd85z\" (UID: \"518c4203-48e4-48da-b3aa-6af8bc1acb45\") " pod="kube-system/kube-proxy-nd85z" Jul 6 23:45:43.895853 kubelet[4301]: I0706 23:45:43.895843 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/518c4203-48e4-48da-b3aa-6af8bc1acb45-lib-modules\") pod \"kube-proxy-nd85z\" (UID: \"518c4203-48e4-48da-b3aa-6af8bc1acb45\") " pod="kube-system/kube-proxy-nd85z" Jul 6 23:45:43.895853 kubelet[4301]: I0706 23:45:43.895862 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/518c4203-48e4-48da-b3aa-6af8bc1acb45-kube-proxy\") pod \"kube-proxy-nd85z\" (UID: \"518c4203-48e4-48da-b3aa-6af8bc1acb45\") " pod="kube-system/kube-proxy-nd85z" Jul 6 23:45:43.896100 kubelet[4301]: I0706 23:45:43.895885 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q728n\" (UniqueName: \"kubernetes.io/projected/518c4203-48e4-48da-b3aa-6af8bc1acb45-kube-api-access-q728n\") pod \"kube-proxy-nd85z\" (UID: \"518c4203-48e4-48da-b3aa-6af8bc1acb45\") " pod="kube-system/kube-proxy-nd85z" Jul 6 23:45:44.002340 kubelet[4301]: E0706 23:45:44.002306 4301 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 6 23:45:44.002340 kubelet[4301]: E0706 23:45:44.002334 4301 projected.go:194] Error preparing data for projected volume kube-api-access-q728n for pod kube-system/kube-proxy-nd85z: configmap "kube-root-ca.crt" not found Jul 6 23:45:44.002467 kubelet[4301]: E0706 23:45:44.002388 4301 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/518c4203-48e4-48da-b3aa-6af8bc1acb45-kube-api-access-q728n podName:518c4203-48e4-48da-b3aa-6af8bc1acb45 nodeName:}" failed. No retries permitted until 2025-07-06 23:45:44.50236942 +0000 UTC m=+7.409116681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-q728n" (UniqueName: "kubernetes.io/projected/518c4203-48e4-48da-b3aa-6af8bc1acb45-kube-api-access-q728n") pod "kube-proxy-nd85z" (UID: "518c4203-48e4-48da-b3aa-6af8bc1acb45") : configmap "kube-root-ca.crt" not found Jul 6 23:45:44.300682 systemd[1]: Created slice kubepods-besteffort-podd59a5275_63a0_49c0_8855_44287ed0602b.slice - libcontainer container kubepods-besteffort-podd59a5275_63a0_49c0_8855_44287ed0602b.slice. 
Jul 6 23:45:44.398038 kubelet[4301]: I0706 23:45:44.398010 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d59a5275-63a0-49c0-8855-44287ed0602b-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-g67xc\" (UID: \"d59a5275-63a0-49c0-8855-44287ed0602b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-g67xc" Jul 6 23:45:44.398352 kubelet[4301]: I0706 23:45:44.398054 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfz4c\" (UniqueName: \"kubernetes.io/projected/d59a5275-63a0-49c0-8855-44287ed0602b-kube-api-access-mfz4c\") pod \"tigera-operator-5bf8dfcb4-g67xc\" (UID: \"d59a5275-63a0-49c0-8855-44287ed0602b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-g67xc" Jul 6 23:45:44.602644 containerd[2777]: time="2025-07-06T23:45:44.602579180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-g67xc,Uid:d59a5275-63a0-49c0-8855-44287ed0602b,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:45:44.615619 containerd[2777]: time="2025-07-06T23:45:44.615241420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:45:44.615678 containerd[2777]: time="2025-07-06T23:45:44.615616620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:45:44.615678 containerd[2777]: time="2025-07-06T23:45:44.615632100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:45:44.615730 containerd[2777]: time="2025-07-06T23:45:44.615715740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:45:44.637069 systemd[1]: Started cri-containerd-f0dbcadbd901185977ef53b560c3648f6281c82e1a046298ed04fcb1a6a1e8a1.scope - libcontainer container f0dbcadbd901185977ef53b560c3648f6281c82e1a046298ed04fcb1a6a1e8a1. Jul 6 23:45:44.660545 containerd[2777]: time="2025-07-06T23:45:44.660495180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-g67xc,Uid:d59a5275-63a0-49c0-8855-44287ed0602b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f0dbcadbd901185977ef53b560c3648f6281c82e1a046298ed04fcb1a6a1e8a1\"" Jul 6 23:45:44.662316 containerd[2777]: time="2025-07-06T23:45:44.662298740Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:45:44.756051 containerd[2777]: time="2025-07-06T23:45:44.756023020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nd85z,Uid:518c4203-48e4-48da-b3aa-6af8bc1acb45,Namespace:kube-system,Attempt:0,}" Jul 6 23:45:44.768400 containerd[2777]: time="2025-07-06T23:45:44.768338420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:45:44.768448 containerd[2777]: time="2025-07-06T23:45:44.768393460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:45:44.768448 containerd[2777]: time="2025-07-06T23:45:44.768406540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:45:44.768499 containerd[2777]: time="2025-07-06T23:45:44.768479020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:45:44.794001 systemd[1]: Started cri-containerd-a81ff042a1aacd3e0ca89741dad500fde3ee1c8deed0646e43fd64466093dcfc.scope - libcontainer container a81ff042a1aacd3e0ca89741dad500fde3ee1c8deed0646e43fd64466093dcfc. Jul 6 23:45:44.810039 containerd[2777]: time="2025-07-06T23:45:44.810011980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nd85z,Uid:518c4203-48e4-48da-b3aa-6af8bc1acb45,Namespace:kube-system,Attempt:0,} returns sandbox id \"a81ff042a1aacd3e0ca89741dad500fde3ee1c8deed0646e43fd64466093dcfc\"" Jul 6 23:45:44.811855 containerd[2777]: time="2025-07-06T23:45:44.811833500Z" level=info msg="CreateContainer within sandbox \"a81ff042a1aacd3e0ca89741dad500fde3ee1c8deed0646e43fd64466093dcfc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:45:44.820254 containerd[2777]: time="2025-07-06T23:45:44.820214700Z" level=info msg="CreateContainer within sandbox \"a81ff042a1aacd3e0ca89741dad500fde3ee1c8deed0646e43fd64466093dcfc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bb831418d0953eecfde5eab2d97daa34cc2e4fe3ae5b2b71841b8579fd20843b\"" Jul 6 23:45:44.820681 containerd[2777]: time="2025-07-06T23:45:44.820659580Z" level=info msg="StartContainer for \"bb831418d0953eecfde5eab2d97daa34cc2e4fe3ae5b2b71841b8579fd20843b\"" Jul 6 23:45:44.851043 systemd[1]: Started cri-containerd-bb831418d0953eecfde5eab2d97daa34cc2e4fe3ae5b2b71841b8579fd20843b.scope - libcontainer container bb831418d0953eecfde5eab2d97daa34cc2e4fe3ae5b2b71841b8579fd20843b. Jul 6 23:45:44.871525 containerd[2777]: time="2025-07-06T23:45:44.871457940Z" level=info msg="StartContainer for \"bb831418d0953eecfde5eab2d97daa34cc2e4fe3ae5b2b71841b8579fd20843b\" returns successfully" Jul 6 23:45:45.902943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount810369788.mount: Deactivated successfully. 
Jul 6 23:45:46.763895 containerd[2777]: time="2025-07-06T23:45:46.763823100Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:46.764227 containerd[2777]: time="2025-07-06T23:45:46.763838580Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 6 23:45:46.764639 containerd[2777]: time="2025-07-06T23:45:46.764620540Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:46.766567 containerd[2777]: time="2025-07-06T23:45:46.766550580Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:46.767357 containerd[2777]: time="2025-07-06T23:45:46.767329780Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.10500428s" Jul 6 23:45:46.767384 containerd[2777]: time="2025-07-06T23:45:46.767363980Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 6 23:45:46.769066 containerd[2777]: time="2025-07-06T23:45:46.769046660Z" level=info msg="CreateContainer within sandbox \"f0dbcadbd901185977ef53b560c3648f6281c82e1a046298ed04fcb1a6a1e8a1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:45:46.774021 containerd[2777]: time="2025-07-06T23:45:46.773997540Z" level=info msg="CreateContainer within sandbox \"f0dbcadbd901185977ef53b560c3648f6281c82e1a046298ed04fcb1a6a1e8a1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"645b40203bc4aa252d34030ea87adbf435245c39d2b24bf58611b8a76a566337\"" Jul 6 23:45:46.774359 containerd[2777]: time="2025-07-06T23:45:46.774336340Z" level=info msg="StartContainer for \"645b40203bc4aa252d34030ea87adbf435245c39d2b24bf58611b8a76a566337\"" Jul 6 23:45:46.803054 systemd[1]: Started cri-containerd-645b40203bc4aa252d34030ea87adbf435245c39d2b24bf58611b8a76a566337.scope - libcontainer container 645b40203bc4aa252d34030ea87adbf435245c39d2b24bf58611b8a76a566337. 
Jul 6 23:45:46.827403 containerd[2777]: time="2025-07-06T23:45:46.827369940Z" level=info msg="StartContainer for \"645b40203bc4aa252d34030ea87adbf435245c39d2b24bf58611b8a76a566337\" returns successfully" Jul 6 23:45:47.188596 kubelet[4301]: I0706 23:45:47.188550 4301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nd85z" podStartSLOduration=4.18853326 podStartE2EDuration="4.18853326s" podCreationTimestamp="2025-07-06 23:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:45:45.19159566 +0000 UTC m=+8.098342921" watchObservedRunningTime="2025-07-06 23:45:47.18853326 +0000 UTC m=+10.095280521" Jul 6 23:45:47.188898 kubelet[4301]: I0706 23:45:47.188663 4301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-g67xc" podStartSLOduration=1.08255282 podStartE2EDuration="3.1886565s" podCreationTimestamp="2025-07-06 23:45:44 +0000 UTC" firstStartedPulling="2025-07-06 23:45:44.6619795 +0000 UTC m=+7.568726761" lastFinishedPulling="2025-07-06 23:45:46.76808318 +0000 UTC m=+9.674830441" observedRunningTime="2025-07-06 23:45:47.1883843 +0000 UTC m=+10.095131561" watchObservedRunningTime="2025-07-06 23:45:47.1886565 +0000 UTC m=+10.095403761" Jul 6 23:45:49.562662 update_engine[2771]: I20250706 23:45:49.562125 2771 update_attempter.cc:509] Updating boot flags... Jul 6 23:45:49.593891 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (4958) Jul 6 23:45:49.625893 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (4963) Jul 6 23:45:51.788366 sudo[3036]: pam_unix(sudo:session): session closed for user root Jul 6 23:45:51.850787 sshd[3035]: Connection closed by 139.178.89.65 port 46338 Jul 6 23:45:51.851191 sshd-session[3033]: pam_unix(sshd:session): session closed for user core Jul 6 23:45:51.854163 systemd[1]: sshd@6-147.28.150.251:22-139.178.89.65:46338.service: Deactivated successfully. Jul 6 23:45:51.855964 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:45:51.856177 systemd[1]: session-9.scope: Consumed 8.268s CPU time, 248.2M memory peak. Jul 6 23:45:51.857314 systemd-logind[2761]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:45:51.857891 systemd-logind[2761]: Removed session 9. Jul 6 23:45:56.701363 systemd[1]: Created slice kubepods-besteffort-pode51b4632_f228_4c2b_a5b3_e68be7e68678.slice - libcontainer container kubepods-besteffort-pode51b4632_f228_4c2b_a5b3_e68be7e68678.slice. 
Jul 6 23:45:56.773818 kubelet[4301]: I0706 23:45:56.773789 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e51b4632-f228-4c2b-a5b3-e68be7e68678-tigera-ca-bundle\") pod \"calico-typha-55df6dd745-bv7sh\" (UID: \"e51b4632-f228-4c2b-a5b3-e68be7e68678\") " pod="calico-system/calico-typha-55df6dd745-bv7sh" Jul 6 23:45:56.774133 kubelet[4301]: I0706 23:45:56.773827 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xhbb\" (UniqueName: \"kubernetes.io/projected/e51b4632-f228-4c2b-a5b3-e68be7e68678-kube-api-access-9xhbb\") pod \"calico-typha-55df6dd745-bv7sh\" (UID: \"e51b4632-f228-4c2b-a5b3-e68be7e68678\") " pod="calico-system/calico-typha-55df6dd745-bv7sh" Jul 6 23:45:56.774133 kubelet[4301]: I0706 23:45:56.773845 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e51b4632-f228-4c2b-a5b3-e68be7e68678-typha-certs\") pod \"calico-typha-55df6dd745-bv7sh\" (UID: \"e51b4632-f228-4c2b-a5b3-e68be7e68678\") " pod="calico-system/calico-typha-55df6dd745-bv7sh" Jul 6 23:45:57.000632 systemd[1]: Created slice kubepods-besteffort-pod8610b120_f3bd_4b38_a821_859452c34d8e.slice - libcontainer container kubepods-besteffort-pod8610b120_f3bd_4b38_a821_859452c34d8e.slice. Jul 6 23:45:57.010889 containerd[2777]: time="2025-07-06T23:45:57.007447124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55df6dd745-bv7sh,Uid:e51b4632-f228-4c2b-a5b3-e68be7e68678,Namespace:calico-system,Attempt:0,}" Jul 6 23:45:57.032220 containerd[2777]: time="2025-07-06T23:45:57.032115925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:45:57.032220 containerd[2777]: time="2025-07-06T23:45:57.032180365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:45:57.032389 containerd[2777]: time="2025-07-06T23:45:57.032191645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:45:57.032389 containerd[2777]: time="2025-07-06T23:45:57.032271485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:45:57.060988 systemd[1]: Started cri-containerd-35bb911e8ea8c3e43853b1981decc9539c8c3b38cad27ae0f85f0e5e05d278ef.scope - libcontainer container 35bb911e8ea8c3e43853b1981decc9539c8c3b38cad27ae0f85f0e5e05d278ef. 
Jul 6 23:45:57.075511 kubelet[4301]: I0706 23:45:57.075485 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8610b120-f3bd-4b38-a821-859452c34d8e-cni-log-dir\") pod \"calico-node-mp7hl\" (UID: \"8610b120-f3bd-4b38-a821-859452c34d8e\") " pod="calico-system/calico-node-mp7hl" Jul 6 23:45:57.075561 kubelet[4301]: I0706 23:45:57.075521 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8610b120-f3bd-4b38-a821-859452c34d8e-node-certs\") pod \"calico-node-mp7hl\" (UID: \"8610b120-f3bd-4b38-a821-859452c34d8e\") " pod="calico-system/calico-node-mp7hl" Jul 6 23:45:57.075583 kubelet[4301]: I0706 23:45:57.075538 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8610b120-f3bd-4b38-a821-859452c34d8e-policysync\") pod \"calico-node-mp7hl\" (UID: \"8610b120-f3bd-4b38-a821-859452c34d8e\") " pod="calico-system/calico-node-mp7hl" Jul 6 23:45:57.075605 kubelet[4301]: I0706 23:45:57.075582 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8610b120-f3bd-4b38-a821-859452c34d8e-tigera-ca-bundle\") pod \"calico-node-mp7hl\" (UID: \"8610b120-f3bd-4b38-a821-859452c34d8e\") " pod="calico-system/calico-node-mp7hl" Jul 6 23:45:57.075605 kubelet[4301]: I0706 23:45:57.075598 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8610b120-f3bd-4b38-a821-859452c34d8e-cni-bin-dir\") pod \"calico-node-mp7hl\" (UID: \"8610b120-f3bd-4b38-a821-859452c34d8e\") " pod="calico-system/calico-node-mp7hl" Jul 6 23:45:57.075652 kubelet[4301]: I0706 23:45:57.075612 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8610b120-f3bd-4b38-a821-859452c34d8e-cni-net-dir\") pod \"calico-node-mp7hl\" (UID: \"8610b120-f3bd-4b38-a821-859452c34d8e\") " pod="calico-system/calico-node-mp7hl" Jul 6 23:45:57.075652 kubelet[4301]: I0706 23:45:57.075628 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8610b120-f3bd-4b38-a821-859452c34d8e-flexvol-driver-host\") pod \"calico-node-mp7hl\" (UID: \"8610b120-f3bd-4b38-a821-859452c34d8e\") " pod="calico-system/calico-node-mp7hl" Jul 6 23:45:57.075652 kubelet[4301]: I0706 23:45:57.075646 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8610b120-f3bd-4b38-a821-859452c34d8e-var-lib-calico\") pod \"calico-node-mp7hl\" (UID: \"8610b120-f3bd-4b38-a821-859452c34d8e\") " pod="calico-system/calico-node-mp7hl" Jul 6 23:45:57.075714 kubelet[4301]: I0706 23:45:57.075662 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8610b120-f3bd-4b38-a821-859452c34d8e-xtables-lock\") pod \"calico-node-mp7hl\" (UID: \"8610b120-f3bd-4b38-a821-859452c34d8e\") " pod="calico-system/calico-node-mp7hl" Jul 6 23:45:57.075714 kubelet[4301]: I0706 23:45:57.075677 4301 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5vqj\" (UniqueName: \"kubernetes.io/projected/8610b120-f3bd-4b38-a821-859452c34d8e-kube-api-access-t5vqj\") pod \"calico-node-mp7hl\" (UID: \"8610b120-f3bd-4b38-a821-859452c34d8e\") " pod="calico-system/calico-node-mp7hl" Jul 6 23:45:57.075714 kubelet[4301]: I0706 23:45:57.075692 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8610b120-f3bd-4b38-a821-859452c34d8e-lib-modules\") pod \"calico-node-mp7hl\" (UID: \"8610b120-f3bd-4b38-a821-859452c34d8e\") " pod="calico-system/calico-node-mp7hl" Jul 6 23:45:57.075776 kubelet[4301]: I0706 23:45:57.075729 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8610b120-f3bd-4b38-a821-859452c34d8e-var-run-calico\") pod \"calico-node-mp7hl\" (UID: \"8610b120-f3bd-4b38-a821-859452c34d8e\") " pod="calico-system/calico-node-mp7hl" Jul 6 23:45:57.084191 containerd[2777]: time="2025-07-06T23:45:57.084164247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55df6dd745-bv7sh,Uid:e51b4632-f228-4c2b-a5b3-e68be7e68678,Namespace:calico-system,Attempt:0,} returns sandbox id \"35bb911e8ea8c3e43853b1981decc9539c8c3b38cad27ae0f85f0e5e05d278ef\"" Jul 6 23:45:57.085177 containerd[2777]: time="2025-07-06T23:45:57.085158327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:45:57.178226 kubelet[4301]: E0706 23:45:57.178205 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.178226 kubelet[4301]: W0706 23:45:57.178222 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.178340 kubelet[4301]: E0706 23:45:57.178241 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.179532 kubelet[4301]: E0706 23:45:57.179517 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.179564 kubelet[4301]: W0706 23:45:57.179533 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.179564 kubelet[4301]: E0706 23:45:57.179547 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.185523 kubelet[4301]: E0706 23:45:57.185506 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.185523 kubelet[4301]: W0706 23:45:57.185520 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.185625 kubelet[4301]: E0706 23:45:57.185533 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:57.242165 kubelet[4301]: E0706 23:45:57.242129 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plknp" podUID="8f8629de-298f-4ed4-a595-838d54ca032b" Jul 6 23:45:57.261167 kubelet[4301]: E0706 23:45:57.261091 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.261167 kubelet[4301]: W0706 23:45:57.261112 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.261167 kubelet[4301]: E0706 23:45:57.261131 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.261361 kubelet[4301]: E0706 23:45:57.261351 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.261361 kubelet[4301]: W0706 23:45:57.261359 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.261407 kubelet[4301]: E0706 23:45:57.261367 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.261606 kubelet[4301]: E0706 23:45:57.261598 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.261632 kubelet[4301]: W0706 23:45:57.261606 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.261632 kubelet[4301]: E0706 23:45:57.261614 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.261832 kubelet[4301]: E0706 23:45:57.261823 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.261856 kubelet[4301]: W0706 23:45:57.261832 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.261856 kubelet[4301]: E0706 23:45:57.261840 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:57.262026 kubelet[4301]: E0706 23:45:57.262015 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.262053 kubelet[4301]: W0706 23:45:57.262026 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.262053 kubelet[4301]: E0706 23:45:57.262036 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.262230 kubelet[4301]: E0706 23:45:57.262219 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.262252 kubelet[4301]: W0706 23:45:57.262230 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.262252 kubelet[4301]: E0706 23:45:57.262241 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.262403 kubelet[4301]: E0706 23:45:57.262395 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.262428 kubelet[4301]: W0706 23:45:57.262403 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.262428 kubelet[4301]: E0706 23:45:57.262412 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.262562 kubelet[4301]: E0706 23:45:57.262554 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.262581 kubelet[4301]: W0706 23:45:57.262562 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.262581 kubelet[4301]: E0706 23:45:57.262569 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.262792 kubelet[4301]: E0706 23:45:57.262784 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.262814 kubelet[4301]: W0706 23:45:57.262792 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.262814 kubelet[4301]: E0706 23:45:57.262800 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:57.263014 kubelet[4301]: E0706 23:45:57.263007 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.263038 kubelet[4301]: W0706 23:45:57.263015 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.263038 kubelet[4301]: E0706 23:45:57.263022 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.263227 kubelet[4301]: E0706 23:45:57.263220 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.263252 kubelet[4301]: W0706 23:45:57.263227 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.263252 kubelet[4301]: E0706 23:45:57.263234 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.263376 kubelet[4301]: E0706 23:45:57.263369 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.263398 kubelet[4301]: W0706 23:45:57.263376 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.263398 kubelet[4301]: E0706 23:45:57.263383 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.263587 kubelet[4301]: E0706 23:45:57.263580 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.263611 kubelet[4301]: W0706 23:45:57.263587 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.263611 kubelet[4301]: E0706 23:45:57.263595 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.263795 kubelet[4301]: E0706 23:45:57.263787 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.263814 kubelet[4301]: W0706 23:45:57.263795 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.263814 kubelet[4301]: E0706 23:45:57.263802 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:57.264011 kubelet[4301]: E0706 23:45:57.264003 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.264038 kubelet[4301]: W0706 23:45:57.264011 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.264038 kubelet[4301]: E0706 23:45:57.264018 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.264219 kubelet[4301]: E0706 23:45:57.264212 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.264243 kubelet[4301]: W0706 23:45:57.264219 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.264243 kubelet[4301]: E0706 23:45:57.264227 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.264441 kubelet[4301]: E0706 23:45:57.264433 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.264463 kubelet[4301]: W0706 23:45:57.264441 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.264463 kubelet[4301]: E0706 23:45:57.264448 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.264661 kubelet[4301]: E0706 23:45:57.264653 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.264683 kubelet[4301]: W0706 23:45:57.264662 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.264683 kubelet[4301]: E0706 23:45:57.264669 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.264847 kubelet[4301]: E0706 23:45:57.264840 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.264876 kubelet[4301]: W0706 23:45:57.264847 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.264876 kubelet[4301]: E0706 23:45:57.264854 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:57.265068 kubelet[4301]: E0706 23:45:57.265059 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.265093 kubelet[4301]: W0706 23:45:57.265068 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.265093 kubelet[4301]: E0706 23:45:57.265075 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.277292 kubelet[4301]: E0706 23:45:57.277277 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.277324 kubelet[4301]: W0706 23:45:57.277291 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.277324 kubelet[4301]: E0706 23:45:57.277305 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.277366 kubelet[4301]: I0706 23:45:57.277327 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8f8629de-298f-4ed4-a595-838d54ca032b-registration-dir\") pod \"csi-node-driver-plknp\" (UID: \"8f8629de-298f-4ed4-a595-838d54ca032b\") " pod="calico-system/csi-node-driver-plknp" Jul 6 23:45:57.277603 kubelet[4301]: E0706 23:45:57.277591 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.277603 kubelet[4301]: W0706 23:45:57.277601 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.277648 kubelet[4301]: E0706 23:45:57.277613 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.277648 kubelet[4301]: I0706 23:45:57.277627 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbk6n\" (UniqueName: \"kubernetes.io/projected/8f8629de-298f-4ed4-a595-838d54ca032b-kube-api-access-cbk6n\") pod \"csi-node-driver-plknp\" (UID: \"8f8629de-298f-4ed4-a595-838d54ca032b\") " pod="calico-system/csi-node-driver-plknp" Jul 6 23:45:57.277857 kubelet[4301]: E0706 23:45:57.277846 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.277857 kubelet[4301]: W0706 23:45:57.277855 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.277905 kubelet[4301]: E0706 23:45:57.277868 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:57.277905 kubelet[4301]: I0706 23:45:57.277888 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8f8629de-298f-4ed4-a595-838d54ca032b-socket-dir\") pod \"csi-node-driver-plknp\" (UID: \"8f8629de-298f-4ed4-a595-838d54ca032b\") " pod="calico-system/csi-node-driver-plknp" Jul 6 23:45:57.278048 kubelet[4301]: E0706 23:45:57.278037 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.278048 kubelet[4301]: W0706 23:45:57.278046 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.278089 kubelet[4301]: E0706 23:45:57.278057 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.278089 kubelet[4301]: I0706 23:45:57.278070 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f8629de-298f-4ed4-a595-838d54ca032b-kubelet-dir\") pod \"csi-node-driver-plknp\" (UID: \"8f8629de-298f-4ed4-a595-838d54ca032b\") " pod="calico-system/csi-node-driver-plknp" Jul 6 23:45:57.278227 kubelet[4301]: E0706 23:45:57.278217 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.278227 kubelet[4301]: W0706 23:45:57.278225 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.278295 kubelet[4301]: E0706 23:45:57.278236 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.278295 kubelet[4301]: I0706 23:45:57.278250 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8f8629de-298f-4ed4-a595-838d54ca032b-varrun\") pod \"csi-node-driver-plknp\" (UID: \"8f8629de-298f-4ed4-a595-838d54ca032b\") " pod="calico-system/csi-node-driver-plknp" Jul 6 23:45:57.278429 kubelet[4301]: E0706 23:45:57.278418 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.278429 kubelet[4301]: W0706 23:45:57.278427 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.278472 kubelet[4301]: E0706 23:45:57.278438 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:57.278711 kubelet[4301]: E0706 23:45:57.278695 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.278734 kubelet[4301]: W0706 23:45:57.278712 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.278734 kubelet[4301]: E0706 23:45:57.278730 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.278893 kubelet[4301]: E0706 23:45:57.278884 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.278914 kubelet[4301]: W0706 23:45:57.278893 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.278914 kubelet[4301]: E0706 23:45:57.278905 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.279066 kubelet[4301]: E0706 23:45:57.279057 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.279092 kubelet[4301]: W0706 23:45:57.279066 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.279092 kubelet[4301]: E0706 23:45:57.279077 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.279294 kubelet[4301]: E0706 23:45:57.279286 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.279316 kubelet[4301]: W0706 23:45:57.279294 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.279316 kubelet[4301]: E0706 23:45:57.279304 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.279538 kubelet[4301]: E0706 23:45:57.279530 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.279563 kubelet[4301]: W0706 23:45:57.279538 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.279563 kubelet[4301]: E0706 23:45:57.279546 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:57.279767 kubelet[4301]: E0706 23:45:57.279760 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.279791 kubelet[4301]: W0706 23:45:57.279768 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.279791 kubelet[4301]: E0706 23:45:57.279775 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.280038 kubelet[4301]: E0706 23:45:57.280030 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.280063 kubelet[4301]: W0706 23:45:57.280039 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.280063 kubelet[4301]: E0706 23:45:57.280047 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.280206 kubelet[4301]: E0706 23:45:57.280199 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.280226 kubelet[4301]: W0706 23:45:57.280206 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.280226 kubelet[4301]: E0706 23:45:57.280213 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.280432 kubelet[4301]: E0706 23:45:57.280424 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.280456 kubelet[4301]: W0706 23:45:57.280432 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.280456 kubelet[4301]: E0706 23:45:57.280439 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.302850 containerd[2777]: time="2025-07-06T23:45:57.302815975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mp7hl,Uid:8610b120-f3bd-4b38-a821-859452c34d8e,Namespace:calico-system,Attempt:0,}" Jul 6 23:45:57.316476 containerd[2777]: time="2025-07-06T23:45:57.316125095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:45:57.316476 containerd[2777]: time="2025-07-06T23:45:57.316469335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:45:57.316549 containerd[2777]: time="2025-07-06T23:45:57.316482215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:45:57.316576 containerd[2777]: time="2025-07-06T23:45:57.316562135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:45:57.336990 systemd[1]: Started cri-containerd-04de9248d97e34a100219856c880a3521400955d0c7ba650af8897bf43d93b9b.scope - libcontainer container 04de9248d97e34a100219856c880a3521400955d0c7ba650af8897bf43d93b9b. Jul 6 23:45:57.353502 containerd[2777]: time="2025-07-06T23:45:57.353473256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mp7hl,Uid:8610b120-f3bd-4b38-a821-859452c34d8e,Namespace:calico-system,Attempt:0,} returns sandbox id \"04de9248d97e34a100219856c880a3521400955d0c7ba650af8897bf43d93b9b\"" Jul 6 23:45:57.379642 kubelet[4301]: E0706 23:45:57.379620 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.379675 kubelet[4301]: W0706 23:45:57.379640 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.379675 kubelet[4301]: E0706 23:45:57.379656 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.379946 kubelet[4301]: E0706 23:45:57.379933 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.379946 kubelet[4301]: W0706 23:45:57.379942 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.379991 kubelet[4301]: E0706 23:45:57.379954 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.380125 kubelet[4301]: E0706 23:45:57.380113 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.380125 kubelet[4301]: W0706 23:45:57.380123 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.380172 kubelet[4301]: E0706 23:45:57.380134 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.380335 kubelet[4301]: E0706 23:45:57.380324 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.380335 kubelet[4301]: W0706 23:45:57.380333 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.380376 kubelet[4301]: E0706 23:45:57.380345 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:57.380551 kubelet[4301]: E0706 23:45:57.380543 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.380570 kubelet[4301]: W0706 23:45:57.380551 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.380570 kubelet[4301]: E0706 23:45:57.380562 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.380800 kubelet[4301]: E0706 23:45:57.380791 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.380846 kubelet[4301]: W0706 23:45:57.380800 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.380846 kubelet[4301]: E0706 23:45:57.380813 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.381019 kubelet[4301]: E0706 23:45:57.381011 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.381039 kubelet[4301]: W0706 23:45:57.381018 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.381071 kubelet[4301]: E0706 23:45:57.381037 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.381297 kubelet[4301]: E0706 23:45:57.381290 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.381318 kubelet[4301]: W0706 23:45:57.381297 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.381318 kubelet[4301]: E0706 23:45:57.381312 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.381492 kubelet[4301]: E0706 23:45:57.381484 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.381515 kubelet[4301]: W0706 23:45:57.381492 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.381535 kubelet[4301]: E0706 23:45:57.381515 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:57.381702 kubelet[4301]: E0706 23:45:57.381694 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.381724 kubelet[4301]: W0706 23:45:57.381701 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.381724 kubelet[4301]: E0706 23:45:57.381718 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.381957 kubelet[4301]: E0706 23:45:57.381949 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.381986 kubelet[4301]: W0706 23:45:57.381957 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.381986 kubelet[4301]: E0706 23:45:57.381972 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.382101 kubelet[4301]: E0706 23:45:57.382094 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.382101 kubelet[4301]: W0706 23:45:57.382101 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.382143 kubelet[4301]: E0706 23:45:57.382113 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.382334 kubelet[4301]: E0706 23:45:57.382326 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.382354 kubelet[4301]: W0706 23:45:57.382334 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.382354 kubelet[4301]: E0706 23:45:57.382344 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.382641 kubelet[4301]: E0706 23:45:57.382630 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.382663 kubelet[4301]: W0706 23:45:57.382642 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.382663 kubelet[4301]: E0706 23:45:57.382658 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:57.382838 kubelet[4301]: E0706 23:45:57.382831 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.382859 kubelet[4301]: W0706 23:45:57.382839 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.382859 kubelet[4301]: E0706 23:45:57.382854 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.383040 kubelet[4301]: E0706 23:45:57.383033 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.383061 kubelet[4301]: W0706 23:45:57.383041 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.383080 kubelet[4301]: E0706 23:45:57.383058 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.383205 kubelet[4301]: E0706 23:45:57.383198 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.383226 kubelet[4301]: W0706 23:45:57.383205 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.383245 kubelet[4301]: E0706 23:45:57.383223 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.383409 kubelet[4301]: E0706 23:45:57.383402 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.383409 kubelet[4301]: W0706 23:45:57.383409 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.383474 kubelet[4301]: E0706 23:45:57.383423 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.383631 kubelet[4301]: E0706 23:45:57.383623 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.383651 kubelet[4301]: W0706 23:45:57.383631 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.383651 kubelet[4301]: E0706 23:45:57.383641 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:57.383902 kubelet[4301]: E0706 23:45:57.383894 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.383922 kubelet[4301]: W0706 23:45:57.383901 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.383922 kubelet[4301]: E0706 23:45:57.383912 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.384172 kubelet[4301]: E0706 23:45:57.384164 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.384192 kubelet[4301]: W0706 23:45:57.384172 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.384192 kubelet[4301]: E0706 23:45:57.384183 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.384397 kubelet[4301]: E0706 23:45:57.384389 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.384417 kubelet[4301]: W0706 23:45:57.384397 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.384417 kubelet[4301]: E0706 23:45:57.384407 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.384632 kubelet[4301]: E0706 23:45:57.384624 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.384652 kubelet[4301]: W0706 23:45:57.384632 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.384652 kubelet[4301]: E0706 23:45:57.384647 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.384817 kubelet[4301]: E0706 23:45:57.384810 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.384840 kubelet[4301]: W0706 23:45:57.384817 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.384840 kubelet[4301]: E0706 23:45:57.384825 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:57.385120 kubelet[4301]: E0706 23:45:57.385111 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.385150 kubelet[4301]: W0706 23:45:57.385119 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.385150 kubelet[4301]: E0706 23:45:57.385128 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:57.390847 kubelet[4301]: E0706 23:45:57.390834 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:57.390868 kubelet[4301]: W0706 23:45:57.390848 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:57.390868 kubelet[4301]: E0706 23:45:57.390860 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.056847 containerd[2777]: time="2025-07-06T23:45:58.056805561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:58.057284 containerd[2777]: time="2025-07-06T23:45:58.056831801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 6 23:45:58.057574 containerd[2777]: time="2025-07-06T23:45:58.057555241Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:58.061049 containerd[2777]: time="2025-07-06T23:45:58.061020201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:58.061783 containerd[2777]: time="2025-07-06T23:45:58.061755321Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 976.568794ms" Jul 6 23:45:58.061822 containerd[2777]: time="2025-07-06T23:45:58.061789681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 6 23:45:58.062578 containerd[2777]: time="2025-07-06T23:45:58.062559001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:45:58.067227 containerd[2777]: time="2025-07-06T23:45:58.067207322Z" level=info msg="CreateContainer within sandbox \"35bb911e8ea8c3e43853b1981decc9539c8c3b38cad27ae0f85f0e5e05d278ef\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 6 23:45:58.072006 containerd[2777]: time="2025-07-06T23:45:58.071978322Z" level=info msg="CreateContainer 
within sandbox \"35bb911e8ea8c3e43853b1981decc9539c8c3b38cad27ae0f85f0e5e05d278ef\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2c811b20e54c4277fa4475e9f818b5e72b86f612f5b7a0a034fea58a642e2024\"" Jul 6 23:45:58.072308 containerd[2777]: time="2025-07-06T23:45:58.072283242Z" level=info msg="StartContainer for \"2c811b20e54c4277fa4475e9f818b5e72b86f612f5b7a0a034fea58a642e2024\"" Jul 6 23:45:58.097979 systemd[1]: Started cri-containerd-2c811b20e54c4277fa4475e9f818b5e72b86f612f5b7a0a034fea58a642e2024.scope - libcontainer container 2c811b20e54c4277fa4475e9f818b5e72b86f612f5b7a0a034fea58a642e2024. Jul 6 23:45:58.122254 containerd[2777]: time="2025-07-06T23:45:58.122224763Z" level=info msg="StartContainer for \"2c811b20e54c4277fa4475e9f818b5e72b86f612f5b7a0a034fea58a642e2024\" returns successfully" Jul 6 23:45:58.203639 kubelet[4301]: I0706 23:45:58.203562 4301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55df6dd745-bv7sh" podStartSLOduration=1.226073812 podStartE2EDuration="2.203545606s" podCreationTimestamp="2025-07-06 23:45:56 +0000 UTC" firstStartedPulling="2025-07-06 23:45:57.084951207 +0000 UTC m=+19.991698428" lastFinishedPulling="2025-07-06 23:45:58.062423001 +0000 UTC m=+20.969170222" observedRunningTime="2025-07-06 23:45:58.203218726 +0000 UTC m=+21.109965987" watchObservedRunningTime="2025-07-06 23:45:58.203545606 +0000 UTC m=+21.110292867" Jul 6 23:45:58.270688 kubelet[4301]: E0706 23:45:58.270661 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.270688 kubelet[4301]: W0706 23:45:58.270682 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.270809 kubelet[4301]: E0706 23:45:58.270702 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.270961 kubelet[4301]: E0706 23:45:58.270950 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.270961 kubelet[4301]: W0706 23:45:58.270961 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.271013 kubelet[4301]: E0706 23:45:58.270970 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.271203 kubelet[4301]: E0706 23:45:58.271195 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.271226 kubelet[4301]: W0706 23:45:58.271203 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.271226 kubelet[4301]: E0706 23:45:58.271211 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:58.271478 kubelet[4301]: E0706 23:45:58.271469 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.271502 kubelet[4301]: W0706 23:45:58.271478 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.271522 kubelet[4301]: E0706 23:45:58.271501 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.271755 kubelet[4301]: E0706 23:45:58.271746 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.271778 kubelet[4301]: W0706 23:45:58.271755 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.271778 kubelet[4301]: E0706 23:45:58.271764 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.271981 kubelet[4301]: E0706 23:45:58.271972 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.272009 kubelet[4301]: W0706 23:45:58.271980 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.272009 kubelet[4301]: E0706 23:45:58.271987 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.272136 kubelet[4301]: E0706 23:45:58.272128 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.272159 kubelet[4301]: W0706 23:45:58.272136 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.272159 kubelet[4301]: E0706 23:45:58.272143 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.272350 kubelet[4301]: E0706 23:45:58.272343 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.272373 kubelet[4301]: W0706 23:45:58.272352 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.272373 kubelet[4301]: E0706 23:45:58.272359 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:58.272581 kubelet[4301]: E0706 23:45:58.272573 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.272601 kubelet[4301]: W0706 23:45:58.272581 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.272601 kubelet[4301]: E0706 23:45:58.272588 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.272770 kubelet[4301]: E0706 23:45:58.272763 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.272793 kubelet[4301]: W0706 23:45:58.272770 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.272793 kubelet[4301]: E0706 23:45:58.272780 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.272930 kubelet[4301]: E0706 23:45:58.272922 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.272957 kubelet[4301]: W0706 23:45:58.272930 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.272957 kubelet[4301]: E0706 23:45:58.272938 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.273123 kubelet[4301]: E0706 23:45:58.273115 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.273147 kubelet[4301]: W0706 23:45:58.273123 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.273147 kubelet[4301]: E0706 23:45:58.273130 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.273351 kubelet[4301]: E0706 23:45:58.273343 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.273375 kubelet[4301]: W0706 23:45:58.273351 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.273375 kubelet[4301]: E0706 23:45:58.273359 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:58.273573 kubelet[4301]: E0706 23:45:58.273566 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.273596 kubelet[4301]: W0706 23:45:58.273573 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.273596 kubelet[4301]: E0706 23:45:58.273580 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.273800 kubelet[4301]: E0706 23:45:58.273793 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.273823 kubelet[4301]: W0706 23:45:58.273800 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.273823 kubelet[4301]: E0706 23:45:58.273807 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.285088 kubelet[4301]: E0706 23:45:58.285067 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.285088 kubelet[4301]: W0706 23:45:58.285081 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.285189 kubelet[4301]: E0706 23:45:58.285095 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.285314 kubelet[4301]: E0706 23:45:58.285302 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.285314 kubelet[4301]: W0706 23:45:58.285311 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.285355 kubelet[4301]: E0706 23:45:58.285324 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.285528 kubelet[4301]: E0706 23:45:58.285516 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.285528 kubelet[4301]: W0706 23:45:58.285525 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.285571 kubelet[4301]: E0706 23:45:58.285537 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:58.285790 kubelet[4301]: E0706 23:45:58.285779 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.285815 kubelet[4301]: W0706 23:45:58.285790 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.285815 kubelet[4301]: E0706 23:45:58.285801 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.286009 kubelet[4301]: E0706 23:45:58.286001 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.286035 kubelet[4301]: W0706 23:45:58.286009 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.286035 kubelet[4301]: E0706 23:45:58.286019 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.286162 kubelet[4301]: E0706 23:45:58.286154 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.286183 kubelet[4301]: W0706 23:45:58.286162 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.286183 kubelet[4301]: E0706 23:45:58.286172 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.286359 kubelet[4301]: E0706 23:45:58.286351 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.286383 kubelet[4301]: W0706 23:45:58.286359 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.286403 kubelet[4301]: E0706 23:45:58.286376 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.286567 kubelet[4301]: E0706 23:45:58.286559 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.286618 kubelet[4301]: W0706 23:45:58.286567 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.286618 kubelet[4301]: E0706 23:45:58.286584 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:58.286874 kubelet[4301]: E0706 23:45:58.286859 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.286874 kubelet[4301]: W0706 23:45:58.286875 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.286937 kubelet[4301]: E0706 23:45:58.286892 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.287061 kubelet[4301]: E0706 23:45:58.287050 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.287082 kubelet[4301]: W0706 23:45:58.287060 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.287082 kubelet[4301]: E0706 23:45:58.287073 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.287246 kubelet[4301]: E0706 23:45:58.287238 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.287270 kubelet[4301]: W0706 23:45:58.287247 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.287270 kubelet[4301]: E0706 23:45:58.287257 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.287461 kubelet[4301]: E0706 23:45:58.287453 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.287484 kubelet[4301]: W0706 23:45:58.287461 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.287484 kubelet[4301]: E0706 23:45:58.287472 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.287700 kubelet[4301]: E0706 23:45:58.287692 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.287728 kubelet[4301]: W0706 23:45:58.287700 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.287728 kubelet[4301]: E0706 23:45:58.287710 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:58.287949 kubelet[4301]: E0706 23:45:58.287938 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.287975 kubelet[4301]: W0706 23:45:58.287949 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.287975 kubelet[4301]: E0706 23:45:58.287961 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.288118 kubelet[4301]: E0706 23:45:58.288110 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.288142 kubelet[4301]: W0706 23:45:58.288118 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.288142 kubelet[4301]: E0706 23:45:58.288129 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.288292 kubelet[4301]: E0706 23:45:58.288284 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.288313 kubelet[4301]: W0706 23:45:58.288293 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.288313 kubelet[4301]: E0706 23:45:58.288304 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.288533 kubelet[4301]: E0706 23:45:58.288521 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.288558 kubelet[4301]: W0706 23:45:58.288534 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.288558 kubelet[4301]: E0706 23:45:58.288547 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:45:58.288718 kubelet[4301]: E0706 23:45:58.288708 4301 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:45:58.288739 kubelet[4301]: W0706 23:45:58.288719 4301 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:45:58.288739 kubelet[4301]: E0706 23:45:58.288729 4301 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:45:58.413681 containerd[2777]: time="2025-07-06T23:45:58.413558973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:58.413681 containerd[2777]: time="2025-07-06T23:45:58.413627533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 6 23:45:58.414246 containerd[2777]: time="2025-07-06T23:45:58.414227733Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:58.416279 containerd[2777]: time="2025-07-06T23:45:58.416246773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:45:58.416835 containerd[2777]: time="2025-07-06T23:45:58.416806813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 354.220972ms" Jul 6 23:45:58.416863 containerd[2777]: time="2025-07-06T23:45:58.416837773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 6 23:45:58.418466 containerd[2777]: time="2025-07-06T23:45:58.418440253Z" level=info msg="CreateContainer within sandbox \"04de9248d97e34a100219856c880a3521400955d0c7ba650af8897bf43d93b9b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:45:58.424082 containerd[2777]: time="2025-07-06T23:45:58.424048813Z" level=info msg="CreateContainer within sandbox \"04de9248d97e34a100219856c880a3521400955d0c7ba650af8897bf43d93b9b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ae808f7c9a21806578412eccdaae56163644f3257c26d7d8fb8191928bdd0d57\"" Jul 6 23:45:58.424339 containerd[2777]: time="2025-07-06T23:45:58.424316253Z" level=info msg="StartContainer for \"ae808f7c9a21806578412eccdaae56163644f3257c26d7d8fb8191928bdd0d57\"" Jul 6 23:45:58.455971 systemd[1]: Started cri-containerd-ae808f7c9a21806578412eccdaae56163644f3257c26d7d8fb8191928bdd0d57.scope - libcontainer container ae808f7c9a21806578412eccdaae56163644f3257c26d7d8fb8191928bdd0d57. Jul 6 23:45:58.476155 containerd[2777]: time="2025-07-06T23:45:58.476125695Z" level=info msg="StartContainer for \"ae808f7c9a21806578412eccdaae56163644f3257c26d7d8fb8191928bdd0d57\" returns successfully" Jul 6 23:45:58.487941 systemd[1]: cri-containerd-ae808f7c9a21806578412eccdaae56163644f3257c26d7d8fb8191928bdd0d57.scope: Deactivated successfully. 
Jul 6 23:45:58.649532 containerd[2777]: time="2025-07-06T23:45:58.649475701Z" level=info msg="shim disconnected" id=ae808f7c9a21806578412eccdaae56163644f3257c26d7d8fb8191928bdd0d57 namespace=k8s.io Jul 6 23:45:58.649532 containerd[2777]: time="2025-07-06T23:45:58.649528141Z" level=warning msg="cleaning up after shim disconnected" id=ae808f7c9a21806578412eccdaae56163644f3257c26d7d8fb8191928bdd0d57 namespace=k8s.io Jul 6 23:45:58.649532 containerd[2777]: time="2025-07-06T23:45:58.649535621Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:45:59.163200 kubelet[4301]: E0706 23:45:59.163163 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plknp" podUID="8f8629de-298f-4ed4-a595-838d54ca032b" Jul 6 23:45:59.199004 kubelet[4301]: I0706 23:45:59.198985 4301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:45:59.199829 containerd[2777]: time="2025-07-06T23:45:59.199801199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:46:00.396205 containerd[2777]: time="2025-07-06T23:46:00.396157676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:00.396548 containerd[2777]: time="2025-07-06T23:46:00.396224076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 6 23:46:00.396889 containerd[2777]: time="2025-07-06T23:46:00.396863436Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:00.412132 containerd[2777]: time="2025-07-06T23:46:00.412104116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:00.412708 containerd[2777]: time="2025-07-06T23:46:00.412683836Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 1.212848157s" Jul 6 23:46:00.412737 containerd[2777]: time="2025-07-06T23:46:00.412712236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 6 23:46:00.414459 containerd[2777]: time="2025-07-06T23:46:00.414433036Z" level=info msg="CreateContainer within sandbox \"04de9248d97e34a100219856c880a3521400955d0c7ba650af8897bf43d93b9b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:46:00.420246 containerd[2777]: time="2025-07-06T23:46:00.420218876Z" level=info msg="CreateContainer within sandbox \"04de9248d97e34a100219856c880a3521400955d0c7ba650af8897bf43d93b9b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3ae4658c6d923b585caff184de62a5cb8d21dc24d6efca8cbc32981cb94167fd\"" Jul 6 23:46:00.420795 containerd[2777]: time="2025-07-06T23:46:00.420769596Z" level=info msg="StartContainer for 
\"3ae4658c6d923b585caff184de62a5cb8d21dc24d6efca8cbc32981cb94167fd\"" Jul 6 23:46:00.468034 systemd[1]: Started cri-containerd-3ae4658c6d923b585caff184de62a5cb8d21dc24d6efca8cbc32981cb94167fd.scope - libcontainer container 3ae4658c6d923b585caff184de62a5cb8d21dc24d6efca8cbc32981cb94167fd. Jul 6 23:46:00.488684 containerd[2777]: time="2025-07-06T23:46:00.488655598Z" level=info msg="StartContainer for \"3ae4658c6d923b585caff184de62a5cb8d21dc24d6efca8cbc32981cb94167fd\" returns successfully" Jul 6 23:46:00.849651 containerd[2777]: time="2025-07-06T23:46:00.849620089Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:46:00.851267 systemd[1]: cri-containerd-3ae4658c6d923b585caff184de62a5cb8d21dc24d6efca8cbc32981cb94167fd.scope: Deactivated successfully. Jul 6 23:46:00.851562 systemd[1]: cri-containerd-3ae4658c6d923b585caff184de62a5cb8d21dc24d6efca8cbc32981cb94167fd.scope: Consumed 965ms CPU time, 195.7M memory peak, 165.8M written to disk. Jul 6 23:46:00.949072 kubelet[4301]: I0706 23:46:00.949051 4301 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 6 23:46:00.966578 systemd[1]: Created slice kubepods-besteffort-pod0269473e_9b77_4370_b872_0e13aff1fbe2.slice - libcontainer container kubepods-besteffort-pod0269473e_9b77_4370_b872_0e13aff1fbe2.slice. Jul 6 23:46:00.970353 systemd[1]: Created slice kubepods-burstable-pod994a699b_ff00_478f_a192_50e32c9ff313.slice - libcontainer container kubepods-burstable-pod994a699b_ff00_478f_a192_50e32c9ff313.slice. Jul 6 23:46:00.974597 systemd[1]: Created slice kubepods-besteffort-pod857da74c_a792_4f7a_9157_031cec6c674a.slice - libcontainer container kubepods-besteffort-pod857da74c_a792_4f7a_9157_031cec6c674a.slice. Jul 6 23:46:00.979712 systemd[1]: Created slice kubepods-besteffort-pode0b41b8f_0f59_42bb_bf26_a6f229d75d87.slice - libcontainer container kubepods-besteffort-pode0b41b8f_0f59_42bb_bf26_a6f229d75d87.slice. Jul 6 23:46:00.983513 systemd[1]: Created slice kubepods-besteffort-pod77b24a7d_db32_4239_b789_7ecb6aad130e.slice - libcontainer container kubepods-besteffort-pod77b24a7d_db32_4239_b789_7ecb6aad130e.slice. Jul 6 23:46:00.986546 containerd[2777]: time="2025-07-06T23:46:00.986282773Z" level=info msg="shim disconnected" id=3ae4658c6d923b585caff184de62a5cb8d21dc24d6efca8cbc32981cb94167fd namespace=k8s.io Jul 6 23:46:00.986623 containerd[2777]: time="2025-07-06T23:46:00.986571693Z" level=warning msg="cleaning up after shim disconnected" id=3ae4658c6d923b585caff184de62a5cb8d21dc24d6efca8cbc32981cb94167fd namespace=k8s.io Jul 6 23:46:00.986623 containerd[2777]: time="2025-07-06T23:46:00.986587453Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:46:00.987217 systemd[1]: Created slice kubepods-burstable-pod149827c6_4bc8_4f81_a88b_f0d5697ac2f9.slice - libcontainer container kubepods-burstable-pod149827c6_4bc8_4f81_a88b_f0d5697ac2f9.slice. Jul 6 23:46:00.991114 systemd[1]: Created slice kubepods-besteffort-pod2789cf2c_eaba_4e19_aa42_51b1fb03b3f4.slice - libcontainer container kubepods-besteffort-pod2789cf2c_eaba_4e19_aa42_51b1fb03b3f4.slice. 
Jul 6 23:46:01.000247 kubelet[4301]: I0706 23:46:01.000214 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2789cf2c-eaba-4e19-aa42-51b1fb03b3f4-goldmane-key-pair\") pod \"goldmane-58fd7646b9-fm57w\" (UID: \"2789cf2c-eaba-4e19-aa42-51b1fb03b3f4\") " pod="calico-system/goldmane-58fd7646b9-fm57w" Jul 6 23:46:01.000247 kubelet[4301]: I0706 23:46:01.000250 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpjkq\" (UniqueName: \"kubernetes.io/projected/2789cf2c-eaba-4e19-aa42-51b1fb03b3f4-kube-api-access-kpjkq\") pod \"goldmane-58fd7646b9-fm57w\" (UID: \"2789cf2c-eaba-4e19-aa42-51b1fb03b3f4\") " pod="calico-system/goldmane-58fd7646b9-fm57w" Jul 6 23:46:01.000428 kubelet[4301]: I0706 23:46:01.000266 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s67b\" (UniqueName: \"kubernetes.io/projected/994a699b-ff00-478f-a192-50e32c9ff313-kube-api-access-9s67b\") pod \"coredns-7c65d6cfc9-r6sm8\" (UID: \"994a699b-ff00-478f-a192-50e32c9ff313\") " pod="kube-system/coredns-7c65d6cfc9-r6sm8" Jul 6 23:46:01.000428 kubelet[4301]: I0706 23:46:01.000342 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2789cf2c-eaba-4e19-aa42-51b1fb03b3f4-config\") pod \"goldmane-58fd7646b9-fm57w\" (UID: \"2789cf2c-eaba-4e19-aa42-51b1fb03b3f4\") " pod="calico-system/goldmane-58fd7646b9-fm57w" Jul 6 23:46:01.000428 kubelet[4301]: I0706 23:46:01.000375 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2789cf2c-eaba-4e19-aa42-51b1fb03b3f4-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-fm57w\" (UID: \"2789cf2c-eaba-4e19-aa42-51b1fb03b3f4\") " pod="calico-system/goldmane-58fd7646b9-fm57w" Jul 6 23:46:01.000428 kubelet[4301]: I0706 23:46:01.000397 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0b41b8f-0f59-42bb-bf26-a6f229d75d87-whisker-backend-key-pair\") pod \"whisker-69cf447d8-mt4ls\" (UID: \"e0b41b8f-0f59-42bb-bf26-a6f229d75d87\") " pod="calico-system/whisker-69cf447d8-mt4ls" Jul 6 23:46:01.000428 kubelet[4301]: I0706 23:46:01.000417 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0b41b8f-0f59-42bb-bf26-a6f229d75d87-whisker-ca-bundle\") pod \"whisker-69cf447d8-mt4ls\" (UID: \"e0b41b8f-0f59-42bb-bf26-a6f229d75d87\") " pod="calico-system/whisker-69cf447d8-mt4ls" Jul 6 23:46:01.000536 kubelet[4301]: I0706 23:46:01.000459 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k82wn\" (UniqueName: \"kubernetes.io/projected/0269473e-9b77-4370-b872-0e13aff1fbe2-kube-api-access-k82wn\") pod \"calico-kube-controllers-7b4c6f9887-2rpdn\" (UID: \"0269473e-9b77-4370-b872-0e13aff1fbe2\") " pod="calico-system/calico-kube-controllers-7b4c6f9887-2rpdn" Jul 6 23:46:01.000536 kubelet[4301]: I0706 23:46:01.000497 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0269473e-9b77-4370-b872-0e13aff1fbe2-tigera-ca-bundle\") pod \"calico-kube-controllers-7b4c6f9887-2rpdn\" (UID: \"0269473e-9b77-4370-b872-0e13aff1fbe2\") " pod="calico-system/calico-kube-controllers-7b4c6f9887-2rpdn" Jul 6 23:46:01.000536 kubelet[4301]: I0706 23:46:01.000518 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/857da74c-a792-4f7a-9157-031cec6c674a-calico-apiserver-certs\") pod \"calico-apiserver-7f699b944c-vw6wk\" (UID: \"857da74c-a792-4f7a-9157-031cec6c674a\") " pod="calico-apiserver/calico-apiserver-7f699b944c-vw6wk" Jul 6 23:46:01.000605 kubelet[4301]: I0706 23:46:01.000554 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jthb2\" (UniqueName: \"kubernetes.io/projected/857da74c-a792-4f7a-9157-031cec6c674a-kube-api-access-jthb2\") pod \"calico-apiserver-7f699b944c-vw6wk\" (UID: \"857da74c-a792-4f7a-9157-031cec6c674a\") " pod="calico-apiserver/calico-apiserver-7f699b944c-vw6wk" Jul 6 23:46:01.000605 kubelet[4301]: I0706 23:46:01.000586 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/994a699b-ff00-478f-a192-50e32c9ff313-config-volume\") pod \"coredns-7c65d6cfc9-r6sm8\" (UID: \"994a699b-ff00-478f-a192-50e32c9ff313\") " pod="kube-system/coredns-7c65d6cfc9-r6sm8" Jul 6 23:46:01.000650 kubelet[4301]: I0706 23:46:01.000606 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84xvv\" (UniqueName: \"kubernetes.io/projected/77b24a7d-db32-4239-b789-7ecb6aad130e-kube-api-access-84xvv\") pod \"calico-apiserver-7f699b944c-lr8j7\" (UID: \"77b24a7d-db32-4239-b789-7ecb6aad130e\") " pod="calico-apiserver/calico-apiserver-7f699b944c-lr8j7" Jul 6 23:46:01.000650 kubelet[4301]: I0706 23:46:01.000626 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r62qx\" (UniqueName: \"kubernetes.io/projected/149827c6-4bc8-4f81-a88b-f0d5697ac2f9-kube-api-access-r62qx\") pod \"coredns-7c65d6cfc9-q9rm8\" (UID: \"149827c6-4bc8-4f81-a88b-f0d5697ac2f9\") " pod="kube-system/coredns-7c65d6cfc9-q9rm8" Jul 6 23:46:01.000650 kubelet[4301]: I0706 23:46:01.000643 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4rw9\" (UniqueName: \"kubernetes.io/projected/e0b41b8f-0f59-42bb-bf26-a6f229d75d87-kube-api-access-s4rw9\") pod \"whisker-69cf447d8-mt4ls\" (UID: \"e0b41b8f-0f59-42bb-bf26-a6f229d75d87\") " pod="calico-system/whisker-69cf447d8-mt4ls" Jul 6 23:46:01.000709 kubelet[4301]: I0706 23:46:01.000660 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77b24a7d-db32-4239-b789-7ecb6aad130e-calico-apiserver-certs\") pod \"calico-apiserver-7f699b944c-lr8j7\" (UID: \"77b24a7d-db32-4239-b789-7ecb6aad130e\") " pod="calico-apiserver/calico-apiserver-7f699b944c-lr8j7" Jul 6 23:46:01.000709 kubelet[4301]: I0706 23:46:01.000683 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/149827c6-4bc8-4f81-a88b-f0d5697ac2f9-config-volume\") pod \"coredns-7c65d6cfc9-q9rm8\" (UID: \"149827c6-4bc8-4f81-a88b-f0d5697ac2f9\") " 
pod="kube-system/coredns-7c65d6cfc9-q9rm8" Jul 6 23:46:01.166593 systemd[1]: Created slice kubepods-besteffort-pod8f8629de_298f_4ed4_a595_838d54ca032b.slice - libcontainer container kubepods-besteffort-pod8f8629de_298f_4ed4_a595_838d54ca032b.slice. Jul 6 23:46:01.168359 containerd[2777]: time="2025-07-06T23:46:01.168324498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plknp,Uid:8f8629de-298f-4ed4-a595-838d54ca032b,Namespace:calico-system,Attempt:0,}" Jul 6 23:46:01.203618 containerd[2777]: time="2025-07-06T23:46:01.203591539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:46:01.227383 containerd[2777]: time="2025-07-06T23:46:01.227334699Z" level=error msg="Failed to destroy network for sandbox \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.228167 containerd[2777]: time="2025-07-06T23:46:01.227894539Z" level=error msg="encountered an error cleaning up failed sandbox \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.228167 containerd[2777]: time="2025-07-06T23:46:01.227987179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plknp,Uid:8f8629de-298f-4ed4-a595-838d54ca032b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.228328 kubelet[4301]: E0706 23:46:01.228159 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.228328 kubelet[4301]: E0706 23:46:01.228220 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-plknp" Jul 6 23:46:01.228328 kubelet[4301]: E0706 23:46:01.228238 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-plknp" Jul 6 23:46:01.228448 kubelet[4301]: E0706 23:46:01.228279 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-plknp_calico-system(8f8629de-298f-4ed4-a595-838d54ca032b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-plknp_calico-system(8f8629de-298f-4ed4-a595-838d54ca032b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-plknp" podUID="8f8629de-298f-4ed4-a595-838d54ca032b" Jul 6 23:46:01.268839 containerd[2777]: time="2025-07-06T23:46:01.268809861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4c6f9887-2rpdn,Uid:0269473e-9b77-4370-b872-0e13aff1fbe2,Namespace:calico-system,Attempt:0,}" Jul 6 23:46:01.273396 containerd[2777]: time="2025-07-06T23:46:01.273366981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r6sm8,Uid:994a699b-ff00-478f-a192-50e32c9ff313,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:01.278960 containerd[2777]: time="2025-07-06T23:46:01.278933981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f699b944c-vw6wk,Uid:857da74c-a792-4f7a-9157-031cec6c674a,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:46:01.282446 containerd[2777]: time="2025-07-06T23:46:01.282417701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69cf447d8-mt4ls,Uid:e0b41b8f-0f59-42bb-bf26-a6f229d75d87,Namespace:calico-system,Attempt:0,}" Jul 6 23:46:01.285971 containerd[2777]: time="2025-07-06T23:46:01.285947901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f699b944c-lr8j7,Uid:77b24a7d-db32-4239-b789-7ecb6aad130e,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:46:01.289516 containerd[2777]: time="2025-07-06T23:46:01.289482701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q9rm8,Uid:149827c6-4bc8-4f81-a88b-f0d5697ac2f9,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:01.292999 containerd[2777]: time="2025-07-06T23:46:01.292974261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fm57w,Uid:2789cf2c-eaba-4e19-aa42-51b1fb03b3f4,Namespace:calico-system,Attempt:0,}" Jul 6 23:46:01.312608 containerd[2777]: time="2025-07-06T23:46:01.312565582Z" level=error msg="Failed to destroy network for sandbox \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.313007 containerd[2777]: time="2025-07-06T23:46:01.312983902Z" level=error msg="encountered an error cleaning up failed sandbox \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.313068 containerd[2777]: time="2025-07-06T23:46:01.313050142Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4c6f9887-2rpdn,Uid:0269473e-9b77-4370-b872-0e13aff1fbe2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.313281 kubelet[4301]: E0706 23:46:01.313241 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.313324 kubelet[4301]: E0706 23:46:01.313305 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b4c6f9887-2rpdn" Jul 6 23:46:01.313349 kubelet[4301]: E0706 23:46:01.313323 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b4c6f9887-2rpdn" Jul 6 23:46:01.313386 kubelet[4301]: E0706 23:46:01.313365 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b4c6f9887-2rpdn_calico-system(0269473e-9b77-4370-b872-0e13aff1fbe2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b4c6f9887-2rpdn_calico-system(0269473e-9b77-4370-b872-0e13aff1fbe2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b4c6f9887-2rpdn" podUID="0269473e-9b77-4370-b872-0e13aff1fbe2" Jul 6 23:46:01.316728 containerd[2777]: time="2025-07-06T23:46:01.316600102Z" level=error msg="Failed to destroy network for sandbox \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.317074 containerd[2777]: time="2025-07-06T23:46:01.317047302Z" level=error msg="encountered an error cleaning up failed sandbox \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.317259 containerd[2777]: time="2025-07-06T23:46:01.317165262Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-r6sm8,Uid:994a699b-ff00-478f-a192-50e32c9ff313,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.317376 kubelet[4301]: E0706 23:46:01.317342 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.317419 kubelet[4301]: E0706 23:46:01.317395 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-r6sm8" Jul 6 23:46:01.317442 kubelet[4301]: E0706 23:46:01.317414 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-r6sm8" Jul 6 23:46:01.317476 kubelet[4301]: E0706 23:46:01.317451 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-r6sm8_kube-system(994a699b-ff00-478f-a192-50e32c9ff313)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-r6sm8_kube-system(994a699b-ff00-478f-a192-50e32c9ff313)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-r6sm8" podUID="994a699b-ff00-478f-a192-50e32c9ff313" Jul 6 23:46:01.321851 containerd[2777]: time="2025-07-06T23:46:01.321819982Z" level=error msg="Failed to destroy network for sandbox \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.322176 containerd[2777]: time="2025-07-06T23:46:01.322154702Z" level=error msg="encountered an error cleaning up failed sandbox \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.322223 containerd[2777]: 
time="2025-07-06T23:46:01.322207662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f699b944c-vw6wk,Uid:857da74c-a792-4f7a-9157-031cec6c674a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.322367 kubelet[4301]: E0706 23:46:01.322342 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.322398 kubelet[4301]: E0706 23:46:01.322386 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f699b944c-vw6wk" Jul 6 23:46:01.322427 kubelet[4301]: E0706 23:46:01.322403 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f699b944c-vw6wk" Jul 6 23:46:01.322449 kubelet[4301]: E0706 23:46:01.322435 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f699b944c-vw6wk_calico-apiserver(857da74c-a792-4f7a-9157-031cec6c674a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f699b944c-vw6wk_calico-apiserver(857da74c-a792-4f7a-9157-031cec6c674a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f699b944c-vw6wk" podUID="857da74c-a792-4f7a-9157-031cec6c674a" Jul 6 23:46:01.326292 containerd[2777]: time="2025-07-06T23:46:01.326261142Z" level=error msg="Failed to destroy network for sandbox \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.326606 containerd[2777]: time="2025-07-06T23:46:01.326584342Z" level=error msg="encountered an error cleaning up failed sandbox \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.326651 containerd[2777]: time="2025-07-06T23:46:01.326636982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69cf447d8-mt4ls,Uid:e0b41b8f-0f59-42bb-bf26-a6f229d75d87,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.326821 kubelet[4301]: E0706 23:46:01.326791 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.326856 kubelet[4301]: E0706 23:46:01.326842 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69cf447d8-mt4ls" Jul 6 23:46:01.326881 kubelet[4301]: E0706 23:46:01.326862 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69cf447d8-mt4ls" Jul 6 23:46:01.326936 kubelet[4301]: E0706 23:46:01.326916 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69cf447d8-mt4ls_calico-system(e0b41b8f-0f59-42bb-bf26-a6f229d75d87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69cf447d8-mt4ls_calico-system(e0b41b8f-0f59-42bb-bf26-a6f229d75d87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69cf447d8-mt4ls" podUID="e0b41b8f-0f59-42bb-bf26-a6f229d75d87" Jul 6 23:46:01.331801 containerd[2777]: time="2025-07-06T23:46:01.331764862Z" level=error msg="Failed to destroy network for sandbox \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.332136 containerd[2777]: time="2025-07-06T23:46:01.332113142Z" level=error msg="encountered an error cleaning up failed sandbox \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.332182 containerd[2777]: time="2025-07-06T23:46:01.332167702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f699b944c-lr8j7,Uid:77b24a7d-db32-4239-b789-7ecb6aad130e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.332332 kubelet[4301]: E0706 23:46:01.332309 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.332367 kubelet[4301]: E0706 23:46:01.332350 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f699b944c-lr8j7" Jul 6 23:46:01.332392 kubelet[4301]: E0706 23:46:01.332373 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f699b944c-lr8j7" Jul 6 23:46:01.332426 kubelet[4301]: E0706 23:46:01.332409 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f699b944c-lr8j7_calico-apiserver(77b24a7d-db32-4239-b789-7ecb6aad130e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f699b944c-lr8j7_calico-apiserver(77b24a7d-db32-4239-b789-7ecb6aad130e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f699b944c-lr8j7" podUID="77b24a7d-db32-4239-b789-7ecb6aad130e" Jul 6 23:46:01.332873 containerd[2777]: time="2025-07-06T23:46:01.332846982Z" level=error msg="Failed to destroy network for sandbox \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.333159 containerd[2777]: time="2025-07-06T23:46:01.333139542Z" level=error msg="encountered an error cleaning up failed sandbox 
\"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.333206 containerd[2777]: time="2025-07-06T23:46:01.333190982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q9rm8,Uid:149827c6-4bc8-4f81-a88b-f0d5697ac2f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.333331 kubelet[4301]: E0706 23:46:01.333308 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.333358 kubelet[4301]: E0706 23:46:01.333346 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q9rm8" Jul 6 23:46:01.333381 kubelet[4301]: E0706 23:46:01.333362 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q9rm8" Jul 6 23:46:01.333414 kubelet[4301]: E0706 23:46:01.333392 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-q9rm8_kube-system(149827c6-4bc8-4f81-a88b-f0d5697ac2f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-q9rm8_kube-system(149827c6-4bc8-4f81-a88b-f0d5697ac2f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q9rm8" podUID="149827c6-4bc8-4f81-a88b-f0d5697ac2f9" Jul 6 23:46:01.337386 containerd[2777]: time="2025-07-06T23:46:01.337349062Z" level=error msg="Failed to destroy network for sandbox \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.337689 containerd[2777]: time="2025-07-06T23:46:01.337666942Z" level=error 
msg="encountered an error cleaning up failed sandbox \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.337730 containerd[2777]: time="2025-07-06T23:46:01.337715222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fm57w,Uid:2789cf2c-eaba-4e19-aa42-51b1fb03b3f4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.337872 kubelet[4301]: E0706 23:46:01.337846 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:01.337901 kubelet[4301]: E0706 23:46:01.337885 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-fm57w" Jul 6 23:46:01.337923 kubelet[4301]: E0706 23:46:01.337902 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-fm57w" Jul 6 23:46:01.337948 kubelet[4301]: E0706 23:46:01.337931 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-fm57w_calico-system(2789cf2c-eaba-4e19-aa42-51b1fb03b3f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-fm57w_calico-system(2789cf2c-eaba-4e19-aa42-51b1fb03b3f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-fm57w" podUID="2789cf2c-eaba-4e19-aa42-51b1fb03b3f4" Jul 6 23:46:01.431101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ae4658c6d923b585caff184de62a5cb8d21dc24d6efca8cbc32981cb94167fd-rootfs.mount: Deactivated successfully. 
Jul 6 23:46:02.204633 kubelet[4301]: I0706 23:46:02.204607 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee" Jul 6 23:46:02.205209 containerd[2777]: time="2025-07-06T23:46:02.205184526Z" level=info msg="StopPodSandbox for \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\"" Jul 6 23:46:02.205369 containerd[2777]: time="2025-07-06T23:46:02.205348286Z" level=info msg="Ensure that sandbox 3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee in task-service has been cleanup successfully" Jul 6 23:46:02.205405 kubelet[4301]: I0706 23:46:02.205364 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe" Jul 6 23:46:02.205536 containerd[2777]: time="2025-07-06T23:46:02.205520446Z" level=info msg="TearDown network for sandbox \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\" successfully" Jul 6 23:46:02.205561 containerd[2777]: time="2025-07-06T23:46:02.205535606Z" level=info msg="StopPodSandbox for \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\" returns successfully" Jul 6 23:46:02.205790 containerd[2777]: time="2025-07-06T23:46:02.205769086Z" level=info msg="StopPodSandbox for \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\"" Jul 6 23:46:02.205952 containerd[2777]: time="2025-07-06T23:46:02.205927446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4c6f9887-2rpdn,Uid:0269473e-9b77-4370-b872-0e13aff1fbe2,Namespace:calico-system,Attempt:1,}" Jul 6 23:46:02.206006 containerd[2777]: time="2025-07-06T23:46:02.205926726Z" level=info msg="Ensure that sandbox 827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe in task-service has been cleanup successfully" Jul 6 23:46:02.206145 containerd[2777]: time="2025-07-06T23:46:02.206130966Z" level=info msg="TearDown network for sandbox \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\" successfully" Jul 6 23:46:02.206168 containerd[2777]: time="2025-07-06T23:46:02.206145566Z" level=info msg="StopPodSandbox for \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\" returns successfully" Jul 6 23:46:02.206253 kubelet[4301]: I0706 23:46:02.206240 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5" Jul 6 23:46:02.206490 containerd[2777]: time="2025-07-06T23:46:02.206468646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69cf447d8-mt4ls,Uid:e0b41b8f-0f59-42bb-bf26-a6f229d75d87,Namespace:calico-system,Attempt:1,}" Jul 6 23:46:02.206648 containerd[2777]: time="2025-07-06T23:46:02.206630326Z" level=info msg="StopPodSandbox for \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\"" Jul 6 23:46:02.206784 containerd[2777]: time="2025-07-06T23:46:02.206769046Z" level=info msg="Ensure that sandbox ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5 in task-service has been cleanup successfully" Jul 6 23:46:02.206975 containerd[2777]: time="2025-07-06T23:46:02.206958846Z" level=info msg="TearDown network for sandbox \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\" successfully" Jul 6 23:46:02.206996 containerd[2777]: time="2025-07-06T23:46:02.206976126Z" level=info msg="StopPodSandbox for 
\"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\" returns successfully" Jul 6 23:46:02.207126 kubelet[4301]: I0706 23:46:02.207115 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a" Jul 6 23:46:02.207194 systemd[1]: run-netns-cni\x2dafe19fbe\x2d4dbc\x2de63b\x2d7d66\x2dde0b1d7f7ffe.mount: Deactivated successfully. Jul 6 23:46:02.207507 containerd[2777]: time="2025-07-06T23:46:02.207493846Z" level=info msg="StopPodSandbox for \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\"" Jul 6 23:46:02.207541 containerd[2777]: time="2025-07-06T23:46:02.207519086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plknp,Uid:8f8629de-298f-4ed4-a595-838d54ca032b,Namespace:calico-system,Attempt:1,}" Jul 6 23:46:02.207630 containerd[2777]: time="2025-07-06T23:46:02.207616806Z" level=info msg="Ensure that sandbox 464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a in task-service has been cleanup successfully" Jul 6 23:46:02.207794 containerd[2777]: time="2025-07-06T23:46:02.207778446Z" level=info msg="TearDown network for sandbox \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\" successfully" Jul 6 23:46:02.207820 containerd[2777]: time="2025-07-06T23:46:02.207794686Z" level=info msg="StopPodSandbox for \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\" returns successfully" Jul 6 23:46:02.207907 kubelet[4301]: I0706 23:46:02.207889 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300" Jul 6 23:46:02.208114 containerd[2777]: time="2025-07-06T23:46:02.208095286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fm57w,Uid:2789cf2c-eaba-4e19-aa42-51b1fb03b3f4,Namespace:calico-system,Attempt:1,}" Jul 6 23:46:02.208268 containerd[2777]: time="2025-07-06T23:46:02.208250566Z" level=info msg="StopPodSandbox for \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\"" Jul 6 23:46:02.208393 containerd[2777]: time="2025-07-06T23:46:02.208380166Z" level=info msg="Ensure that sandbox c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300 in task-service has been cleanup successfully" Jul 6 23:46:02.208552 containerd[2777]: time="2025-07-06T23:46:02.208537886Z" level=info msg="TearDown network for sandbox \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\" successfully" Jul 6 23:46:02.208576 containerd[2777]: time="2025-07-06T23:46:02.208553526Z" level=info msg="StopPodSandbox for \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\" returns successfully" Jul 6 23:46:02.208855 containerd[2777]: time="2025-07-06T23:46:02.208838446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r6sm8,Uid:994a699b-ff00-478f-a192-50e32c9ff313,Namespace:kube-system,Attempt:1,}" Jul 6 23:46:02.208880 kubelet[4301]: I0706 23:46:02.208848 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167" Jul 6 23:46:02.209211 systemd[1]: run-netns-cni\x2d6166921f\x2df121\x2de715\x2d86e0\x2dbc0ea4ba1ad7.mount: Deactivated successfully. Jul 6 23:46:02.209294 systemd[1]: run-netns-cni\x2d7acb1c3f\x2d44d3\x2d9e61\x2dcea9\x2df7bddd6f0b17.mount: Deactivated successfully. 
Jul 6 23:46:02.209342 systemd[1]: run-netns-cni\x2dbc6a3a9f\x2d8c28\x2d0f17\x2d3e19\x2d110fbab9ac0d.mount: Deactivated successfully. Jul 6 23:46:02.210472 containerd[2777]: time="2025-07-06T23:46:02.210449486Z" level=info msg="StopPodSandbox for \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\"" Jul 6 23:46:02.210640 containerd[2777]: time="2025-07-06T23:46:02.210626686Z" level=info msg="Ensure that sandbox f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167 in task-service has been cleanup successfully" Jul 6 23:46:02.210739 kubelet[4301]: I0706 23:46:02.210711 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244" Jul 6 23:46:02.210791 containerd[2777]: time="2025-07-06T23:46:02.210775206Z" level=info msg="TearDown network for sandbox \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\" successfully" Jul 6 23:46:02.210810 containerd[2777]: time="2025-07-06T23:46:02.210791846Z" level=info msg="StopPodSandbox for \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\" returns successfully" Jul 6 23:46:02.211678 systemd[1]: run-netns-cni\x2d18ba454b\x2db290\x2dd265\x2d9b86\x2d1bb96a815052.mount: Deactivated successfully. Jul 6 23:46:02.211718 containerd[2777]: time="2025-07-06T23:46:02.211687286Z" level=info msg="StopPodSandbox for \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\"" Jul 6 23:46:02.211740 containerd[2777]: time="2025-07-06T23:46:02.211714406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f699b944c-lr8j7,Uid:77b24a7d-db32-4239-b789-7ecb6aad130e,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:46:02.211859 containerd[2777]: time="2025-07-06T23:46:02.211842446Z" level=info msg="Ensure that sandbox 4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244 in task-service has been cleanup successfully" Jul 6 23:46:02.211914 kubelet[4301]: I0706 23:46:02.211900 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5" Jul 6 23:46:02.212043 containerd[2777]: time="2025-07-06T23:46:02.212013966Z" level=info msg="TearDown network for sandbox \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\" successfully" Jul 6 23:46:02.212063 containerd[2777]: time="2025-07-06T23:46:02.212045246Z" level=info msg="StopPodSandbox for \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\" returns successfully" Jul 6 23:46:02.212306 containerd[2777]: time="2025-07-06T23:46:02.212284806Z" level=info msg="StopPodSandbox for \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\"" Jul 6 23:46:02.212430 containerd[2777]: time="2025-07-06T23:46:02.212410806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f699b944c-vw6wk,Uid:857da74c-a792-4f7a-9157-031cec6c674a,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:46:02.212508 containerd[2777]: time="2025-07-06T23:46:02.212420046Z" level=info msg="Ensure that sandbox da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5 in task-service has been cleanup successfully" Jul 6 23:46:02.212666 containerd[2777]: time="2025-07-06T23:46:02.212652166Z" level=info msg="TearDown network for sandbox \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\" successfully" Jul 6 23:46:02.212690 containerd[2777]: time="2025-07-06T23:46:02.212666566Z" level=info 
msg="StopPodSandbox for \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\" returns successfully" Jul 6 23:46:02.213012 containerd[2777]: time="2025-07-06T23:46:02.212993166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q9rm8,Uid:149827c6-4bc8-4f81-a88b-f0d5697ac2f9,Namespace:kube-system,Attempt:1,}" Jul 6 23:46:02.213584 systemd[1]: run-netns-cni\x2d3a5b213d\x2d4f7e\x2d0dbe\x2d435d\x2df50a5fd86f8f.mount: Deactivated successfully. Jul 6 23:46:02.213664 systemd[1]: run-netns-cni\x2d00b24136\x2da56e\x2d7c96\x2d6cef\x2d1de9487e19f5.mount: Deactivated successfully. Jul 6 23:46:02.275174 containerd[2777]: time="2025-07-06T23:46:02.275118728Z" level=error msg="Failed to destroy network for sandbox \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.275550 containerd[2777]: time="2025-07-06T23:46:02.275484688Z" level=error msg="Failed to destroy network for sandbox \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.275663 containerd[2777]: time="2025-07-06T23:46:02.275512968Z" level=error msg="encountered an error cleaning up failed sandbox \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.275724 containerd[2777]: time="2025-07-06T23:46:02.275705648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4c6f9887-2rpdn,Uid:0269473e-9b77-4370-b872-0e13aff1fbe2,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.275784 containerd[2777]: time="2025-07-06T23:46:02.275724368Z" level=error msg="Failed to destroy network for sandbox \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.275892 containerd[2777]: time="2025-07-06T23:46:02.275849648Z" level=error msg="encountered an error cleaning up failed sandbox \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.275946 containerd[2777]: time="2025-07-06T23:46:02.275929928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69cf447d8-mt4ls,Uid:e0b41b8f-0f59-42bb-bf26-a6f229d75d87,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.275969 kubelet[4301]: E0706 23:46:02.275932 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.276015 kubelet[4301]: E0706 23:46:02.276001 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b4c6f9887-2rpdn" Jul 6 23:46:02.276047 kubelet[4301]: E0706 23:46:02.276022 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b4c6f9887-2rpdn" Jul 6 23:46:02.276073 containerd[2777]: time="2025-07-06T23:46:02.276024408Z" level=error msg="encountered an error cleaning up failed sandbox \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.276094 kubelet[4301]: E0706 23:46:02.276041 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.276094 kubelet[4301]: E0706 23:46:02.276068 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b4c6f9887-2rpdn_calico-system(0269473e-9b77-4370-b872-0e13aff1fbe2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b4c6f9887-2rpdn_calico-system(0269473e-9b77-4370-b872-0e13aff1fbe2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b4c6f9887-2rpdn" podUID="0269473e-9b77-4370-b872-0e13aff1fbe2" Jul 6 23:46:02.276157 containerd[2777]: time="2025-07-06T23:46:02.276065048Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-plknp,Uid:8f8629de-298f-4ed4-a595-838d54ca032b,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.276179 kubelet[4301]: E0706 23:46:02.276094 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69cf447d8-mt4ls" Jul 6 23:46:02.276179 kubelet[4301]: E0706 23:46:02.276113 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69cf447d8-mt4ls" Jul 6 23:46:02.276179 kubelet[4301]: E0706 23:46:02.276146 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69cf447d8-mt4ls_calico-system(e0b41b8f-0f59-42bb-bf26-a6f229d75d87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69cf447d8-mt4ls_calico-system(e0b41b8f-0f59-42bb-bf26-a6f229d75d87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69cf447d8-mt4ls" podUID="e0b41b8f-0f59-42bb-bf26-a6f229d75d87" Jul 6 23:46:02.276260 kubelet[4301]: E0706 23:46:02.276181 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.276260 kubelet[4301]: E0706 23:46:02.276223 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-plknp" Jul 6 23:46:02.276260 kubelet[4301]: E0706 23:46:02.276240 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-plknp" Jul 6 23:46:02.276318 kubelet[4301]: E0706 23:46:02.276272 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-plknp_calico-system(8f8629de-298f-4ed4-a595-838d54ca032b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-plknp_calico-system(8f8629de-298f-4ed4-a595-838d54ca032b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-plknp" podUID="8f8629de-298f-4ed4-a595-838d54ca032b" Jul 6 23:46:02.276401 containerd[2777]: time="2025-07-06T23:46:02.276376128Z" level=error msg="Failed to destroy network for sandbox \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.276536 containerd[2777]: time="2025-07-06T23:46:02.276504928Z" level=error msg="Failed to destroy network for sandbox \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.276755 containerd[2777]: time="2025-07-06T23:46:02.276732608Z" level=error msg="encountered an error cleaning up failed sandbox \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.276799 containerd[2777]: time="2025-07-06T23:46:02.276781928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r6sm8,Uid:994a699b-ff00-478f-a192-50e32c9ff313,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.276883 containerd[2777]: time="2025-07-06T23:46:02.276824568Z" level=error msg="encountered an error cleaning up failed sandbox \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.276926 containerd[2777]: time="2025-07-06T23:46:02.276909008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f699b944c-lr8j7,Uid:77b24a7d-db32-4239-b789-7ecb6aad130e,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.276963 kubelet[4301]: E0706 23:46:02.276917 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.276963 kubelet[4301]: E0706 23:46:02.276955 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-r6sm8" Jul 6 23:46:02.277004 kubelet[4301]: E0706 23:46:02.276971 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-r6sm8" Jul 6 23:46:02.277030 kubelet[4301]: E0706 23:46:02.277002 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-r6sm8_kube-system(994a699b-ff00-478f-a192-50e32c9ff313)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-r6sm8_kube-system(994a699b-ff00-478f-a192-50e32c9ff313)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-r6sm8" podUID="994a699b-ff00-478f-a192-50e32c9ff313" Jul 6 23:46:02.277069 kubelet[4301]: E0706 23:46:02.277022 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.277069 kubelet[4301]: E0706 23:46:02.277059 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f699b944c-lr8j7" Jul 6 23:46:02.277108 kubelet[4301]: E0706 23:46:02.277077 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f699b944c-lr8j7" Jul 6 23:46:02.277128 kubelet[4301]: E0706 23:46:02.277107 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f699b944c-lr8j7_calico-apiserver(77b24a7d-db32-4239-b789-7ecb6aad130e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f699b944c-lr8j7_calico-apiserver(77b24a7d-db32-4239-b789-7ecb6aad130e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f699b944c-lr8j7" podUID="77b24a7d-db32-4239-b789-7ecb6aad130e" Jul 6 23:46:02.277671 containerd[2777]: time="2025-07-06T23:46:02.277640848Z" level=error msg="Failed to destroy network for sandbox \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.278988 containerd[2777]: time="2025-07-06T23:46:02.278964088Z" level=error msg="Failed to destroy network for sandbox \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.279649 containerd[2777]: time="2025-07-06T23:46:02.279624568Z" level=error msg="Failed to destroy network for sandbox \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.279834 containerd[2777]: time="2025-07-06T23:46:02.279805608Z" level=error msg="encountered an error cleaning up failed sandbox \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.279889 containerd[2777]: time="2025-07-06T23:46:02.279865768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f699b944c-vw6wk,Uid:857da74c-a792-4f7a-9157-031cec6c674a,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.279930 containerd[2777]: time="2025-07-06T23:46:02.279911808Z" level=error msg="encountered an error cleaning up failed sandbox \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.279954 containerd[2777]: time="2025-07-06T23:46:02.279820008Z" level=error msg="encountered an error cleaning up failed sandbox \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.279974 containerd[2777]: time="2025-07-06T23:46:02.279955648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q9rm8,Uid:149827c6-4bc8-4f81-a88b-f0d5697ac2f9,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.280009 containerd[2777]: time="2025-07-06T23:46:02.279959088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fm57w,Uid:2789cf2c-eaba-4e19-aa42-51b1fb03b3f4,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.280064 kubelet[4301]: E0706 23:46:02.280040 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.280096 kubelet[4301]: E0706 23:46:02.280083 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f699b944c-vw6wk" Jul 6 23:46:02.280118 kubelet[4301]: E0706 23:46:02.280090 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.280118 kubelet[4301]: E0706 23:46:02.280100 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f699b944c-vw6wk" Jul 6 23:46:02.280162 kubelet[4301]: 
E0706 23:46:02.280121 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-fm57w" Jul 6 23:46:02.280162 kubelet[4301]: E0706 23:46:02.280132 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f699b944c-vw6wk_calico-apiserver(857da74c-a792-4f7a-9157-031cec6c674a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f699b944c-vw6wk_calico-apiserver(857da74c-a792-4f7a-9157-031cec6c674a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f699b944c-vw6wk" podUID="857da74c-a792-4f7a-9157-031cec6c674a" Jul 6 23:46:02.280162 kubelet[4301]: E0706 23:46:02.280136 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-fm57w" Jul 6 23:46:02.280233 kubelet[4301]: E0706 23:46:02.280174 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-fm57w_calico-system(2789cf2c-eaba-4e19-aa42-51b1fb03b3f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-fm57w_calico-system(2789cf2c-eaba-4e19-aa42-51b1fb03b3f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-fm57w" podUID="2789cf2c-eaba-4e19-aa42-51b1fb03b3f4" Jul 6 23:46:02.280233 kubelet[4301]: E0706 23:46:02.280043 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:02.280286 kubelet[4301]: E0706 23:46:02.280231 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q9rm8" Jul 6 23:46:02.280286 
kubelet[4301]: E0706 23:46:02.280250 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q9rm8" Jul 6 23:46:02.280325 kubelet[4301]: E0706 23:46:02.280280 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-q9rm8_kube-system(149827c6-4bc8-4f81-a88b-f0d5697ac2f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-q9rm8_kube-system(149827c6-4bc8-4f81-a88b-f0d5697ac2f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q9rm8" podUID="149827c6-4bc8-4f81-a88b-f0d5697ac2f9" Jul 6 23:46:02.422637 systemd[1]: run-netns-cni\x2debcba526\x2da00f\x2dd55e\x2d41ea\x2d10c52d02cb5a.mount: Deactivated successfully. Jul 6 23:46:03.214801 kubelet[4301]: I0706 23:46:03.214779 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5" Jul 6 23:46:03.215238 containerd[2777]: time="2025-07-06T23:46:03.215207232Z" level=info msg="StopPodSandbox for \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\"" Jul 6 23:46:03.215395 containerd[2777]: time="2025-07-06T23:46:03.215369232Z" level=info msg="Ensure that sandbox ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5 in task-service has been cleanup successfully" Jul 6 23:46:03.215552 containerd[2777]: time="2025-07-06T23:46:03.215536832Z" level=info msg="TearDown network for sandbox \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\" successfully" Jul 6 23:46:03.215574 containerd[2777]: time="2025-07-06T23:46:03.215551752Z" level=info msg="StopPodSandbox for \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\" returns successfully" Jul 6 23:46:03.215632 kubelet[4301]: I0706 23:46:03.215618 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198" Jul 6 23:46:03.215983 containerd[2777]: time="2025-07-06T23:46:03.215966592Z" level=info msg="StopPodSandbox for \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\"" Jul 6 23:46:03.216007 containerd[2777]: time="2025-07-06T23:46:03.215987352Z" level=info msg="StopPodSandbox for \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\"" Jul 6 23:46:03.216087 containerd[2777]: time="2025-07-06T23:46:03.216073752Z" level=info msg="TearDown network for sandbox \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\" successfully" Jul 6 23:46:03.216123 containerd[2777]: time="2025-07-06T23:46:03.216087312Z" level=info msg="StopPodSandbox for \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\" returns successfully" Jul 6 23:46:03.216123 containerd[2777]: time="2025-07-06T23:46:03.216096712Z" level=info msg="Ensure that sandbox 
0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198 in task-service has been cleanup successfully" Jul 6 23:46:03.216271 containerd[2777]: time="2025-07-06T23:46:03.216257352Z" level=info msg="TearDown network for sandbox \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\" successfully" Jul 6 23:46:03.216292 containerd[2777]: time="2025-07-06T23:46:03.216271192Z" level=info msg="StopPodSandbox for \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\" returns successfully" Jul 6 23:46:03.216482 containerd[2777]: time="2025-07-06T23:46:03.216467912Z" level=info msg="StopPodSandbox for \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\"" Jul 6 23:46:03.216536 containerd[2777]: time="2025-07-06T23:46:03.216517192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r6sm8,Uid:994a699b-ff00-478f-a192-50e32c9ff313,Namespace:kube-system,Attempt:2,}" Jul 6 23:46:03.216560 containerd[2777]: time="2025-07-06T23:46:03.216534392Z" level=info msg="TearDown network for sandbox \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\" successfully" Jul 6 23:46:03.216560 containerd[2777]: time="2025-07-06T23:46:03.216543712Z" level=info msg="StopPodSandbox for \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\" returns successfully" Jul 6 23:46:03.216805 kubelet[4301]: I0706 23:46:03.216787 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad" Jul 6 23:46:03.216857 containerd[2777]: time="2025-07-06T23:46:03.216836512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q9rm8,Uid:149827c6-4bc8-4f81-a88b-f0d5697ac2f9,Namespace:kube-system,Attempt:2,}" Jul 6 23:46:03.217172 containerd[2777]: time="2025-07-06T23:46:03.217151512Z" level=info msg="StopPodSandbox for \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\"" Jul 6 23:46:03.217169 systemd[1]: run-netns-cni\x2dfe0f1094\x2da3ef\x2d47c8\x2dae19\x2d1a7887e82995.mount: Deactivated successfully. 
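Every failure in the entries above reduces to the same precondition: the Calico CNI plugin refuses to handle an ADD or DEL until /var/lib/calico/nodename exists, and that file is only written by the calico/node container once it has started and mounted /var/lib/calico. The Go sketch below is an illustrative approximation of that guard, not Calico's actual source; the file path and error wording are taken from the log, everything else is assumed.

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the path checked before any CNI ADD/DEL; it matches the
// repeated "stat /var/lib/calico/nodename" errors in the entries above.
const nodenameFile = "/var/lib/calico/nodename"

// loadNodename models the guard implied by the log: a missing file causes
// every sandbox setup to fail with a hint to check the calico/node pod.
func loadNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); os.IsNotExist(err) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := loadNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, "cni add/del would fail:", err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}

Until calico/node is healthy (its image pull and container start appear further down in this log), kubelet keeps resubmitting the sandboxes and the same error repeats.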
Jul 6 23:46:03.217350 containerd[2777]: time="2025-07-06T23:46:03.217287712Z" level=info msg="Ensure that sandbox dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad in task-service has been cleanup successfully" Jul 6 23:46:03.217480 containerd[2777]: time="2025-07-06T23:46:03.217465032Z" level=info msg="TearDown network for sandbox \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\" successfully" Jul 6 23:46:03.217503 containerd[2777]: time="2025-07-06T23:46:03.217480432Z" level=info msg="StopPodSandbox for \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\" returns successfully" Jul 6 23:46:03.217653 containerd[2777]: time="2025-07-06T23:46:03.217633912Z" level=info msg="StopPodSandbox for \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\"" Jul 6 23:46:03.217725 containerd[2777]: time="2025-07-06T23:46:03.217712512Z" level=info msg="TearDown network for sandbox \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\" successfully" Jul 6 23:46:03.217753 containerd[2777]: time="2025-07-06T23:46:03.217725952Z" level=info msg="StopPodSandbox for \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\" returns successfully" Jul 6 23:46:03.218219 containerd[2777]: time="2025-07-06T23:46:03.218201232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4c6f9887-2rpdn,Uid:0269473e-9b77-4370-b872-0e13aff1fbe2,Namespace:calico-system,Attempt:2,}" Jul 6 23:46:03.218378 kubelet[4301]: I0706 23:46:03.218362 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094" Jul 6 23:46:03.218839 containerd[2777]: time="2025-07-06T23:46:03.218824272Z" level=info msg="StopPodSandbox for \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\"" Jul 6 23:46:03.218972 containerd[2777]: time="2025-07-06T23:46:03.218959152Z" level=info msg="Ensure that sandbox c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094 in task-service has been cleanup successfully" Jul 6 23:46:03.219117 containerd[2777]: time="2025-07-06T23:46:03.219104312Z" level=info msg="TearDown network for sandbox \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\" successfully" Jul 6 23:46:03.219381 containerd[2777]: time="2025-07-06T23:46:03.219117272Z" level=info msg="StopPodSandbox for \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\" returns successfully" Jul 6 23:46:03.219381 containerd[2777]: time="2025-07-06T23:46:03.219356992Z" level=info msg="StopPodSandbox for \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\"" Jul 6 23:46:03.219266 systemd[1]: run-netns-cni\x2d3521bb08\x2ded9b\x2d55fd\x2dc77d\x2d3af3800a220c.mount: Deactivated successfully. Jul 6 23:46:03.219350 systemd[1]: run-netns-cni\x2d3029ec48\x2d6779\x2db4cf\x2d80e8\x2dde798b757bd1.mount: Deactivated successfully. 
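The run-netns-cni\x2d... units that systemd reports as "Deactivated successfully" are the per-sandbox network-namespace bind mounts being cleaned up after each failed sandbox; the \x2d sequences are systemd's escaping of "-" inside unit names. A minimal decoding sketch, assuming only the \xNN escape form that appears in this log (the full systemd-escape rules cover more cases):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit decodes the \xNN sequences systemd uses in unit names, so the
// mount unit reads back with its original hyphenated netns identifier.
// Only the escape form seen in this journal is handled.
func unescapeUnit(name string) string {
	var b strings.Builder
	for i := 0; i < len(name); {
		if i+3 < len(name) && name[i] == '\\' && name[i+1] == 'x' {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
		}
		b.WriteByte(name[i])
		i++
	}
	return b.String()
}

func main() {
	unit := `run-netns-cni\x2debcba526\x2da00f\x2dd55e\x2d41ea\x2d10c52d02cb5a.mount`
	// Prints run-netns-cni-ebcba526-a00f-d55e-41ea-10c52d02cb5a.mount
	fmt.Println(unescapeUnit(unit))
}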
Jul 6 23:46:03.219488 containerd[2777]: time="2025-07-06T23:46:03.219430432Z" level=info msg="TearDown network for sandbox \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\" successfully" Jul 6 23:46:03.219488 containerd[2777]: time="2025-07-06T23:46:03.219441432Z" level=info msg="StopPodSandbox for \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\" returns successfully" Jul 6 23:46:03.219527 kubelet[4301]: I0706 23:46:03.219426 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888" Jul 6 23:46:03.219806 containerd[2777]: time="2025-07-06T23:46:03.219787632Z" level=info msg="StopPodSandbox for \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\"" Jul 6 23:46:03.219843 containerd[2777]: time="2025-07-06T23:46:03.219828712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69cf447d8-mt4ls,Uid:e0b41b8f-0f59-42bb-bf26-a6f229d75d87,Namespace:calico-system,Attempt:2,}" Jul 6 23:46:03.219945 containerd[2777]: time="2025-07-06T23:46:03.219933152Z" level=info msg="Ensure that sandbox e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888 in task-service has been cleanup successfully" Jul 6 23:46:03.220246 containerd[2777]: time="2025-07-06T23:46:03.220230072Z" level=info msg="TearDown network for sandbox \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\" successfully" Jul 6 23:46:03.220272 containerd[2777]: time="2025-07-06T23:46:03.220246712Z" level=info msg="StopPodSandbox for \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\" returns successfully" Jul 6 23:46:03.220397 kubelet[4301]: I0706 23:46:03.220382 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6" Jul 6 23:46:03.220603 containerd[2777]: time="2025-07-06T23:46:03.220583112Z" level=info msg="StopPodSandbox for \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\"" Jul 6 23:46:03.220676 containerd[2777]: time="2025-07-06T23:46:03.220665352Z" level=info msg="TearDown network for sandbox \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\" successfully" Jul 6 23:46:03.220696 containerd[2777]: time="2025-07-06T23:46:03.220676032Z" level=info msg="StopPodSandbox for \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\" returns successfully" Jul 6 23:46:03.220727 containerd[2777]: time="2025-07-06T23:46:03.220706432Z" level=info msg="StopPodSandbox for \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\"" Jul 6 23:46:03.220855 containerd[2777]: time="2025-07-06T23:46:03.220840872Z" level=info msg="Ensure that sandbox b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6 in task-service has been cleanup successfully" Jul 6 23:46:03.221025 containerd[2777]: time="2025-07-06T23:46:03.221007632Z" level=info msg="TearDown network for sandbox \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\" successfully" Jul 6 23:46:03.221046 containerd[2777]: time="2025-07-06T23:46:03.221025352Z" level=info msg="StopPodSandbox for \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\" returns successfully" Jul 6 23:46:03.221074 containerd[2777]: time="2025-07-06T23:46:03.221039912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plknp,Uid:8f8629de-298f-4ed4-a595-838d54ca032b,Namespace:calico-system,Attempt:2,}" Jul 6 
23:46:03.221291 containerd[2777]: time="2025-07-06T23:46:03.221275632Z" level=info msg="StopPodSandbox for \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\"" Jul 6 23:46:03.221362 containerd[2777]: time="2025-07-06T23:46:03.221351192Z" level=info msg="TearDown network for sandbox \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\" successfully" Jul 6 23:46:03.221382 containerd[2777]: time="2025-07-06T23:46:03.221362712Z" level=info msg="StopPodSandbox for \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\" returns successfully" Jul 6 23:46:03.221502 systemd[1]: run-netns-cni\x2d7fbcfe58\x2d39ea\x2dc2b9\x2d83e1\x2d34f060c9c45e.mount: Deactivated successfully. Jul 6 23:46:03.221579 systemd[1]: run-netns-cni\x2d2b854fb4\x2d7256\x2d820e\x2d177b\x2dd152f601f374.mount: Deactivated successfully. Jul 6 23:46:03.221655 containerd[2777]: time="2025-07-06T23:46:03.221640952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fm57w,Uid:2789cf2c-eaba-4e19-aa42-51b1fb03b3f4,Namespace:calico-system,Attempt:2,}" Jul 6 23:46:03.221689 kubelet[4301]: I0706 23:46:03.221676 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85" Jul 6 23:46:03.222030 containerd[2777]: time="2025-07-06T23:46:03.222012312Z" level=info msg="StopPodSandbox for \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\"" Jul 6 23:46:03.222157 containerd[2777]: time="2025-07-06T23:46:03.222144992Z" level=info msg="Ensure that sandbox c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85 in task-service has been cleanup successfully" Jul 6 23:46:03.222329 containerd[2777]: time="2025-07-06T23:46:03.222314592Z" level=info msg="TearDown network for sandbox \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\" successfully" Jul 6 23:46:03.222356 containerd[2777]: time="2025-07-06T23:46:03.222329992Z" level=info msg="StopPodSandbox for \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\" returns successfully" Jul 6 23:46:03.222599 containerd[2777]: time="2025-07-06T23:46:03.222583232Z" level=info msg="StopPodSandbox for \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\"" Jul 6 23:46:03.222663 containerd[2777]: time="2025-07-06T23:46:03.222652872Z" level=info msg="TearDown network for sandbox \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\" successfully" Jul 6 23:46:03.222687 kubelet[4301]: I0706 23:46:03.222653 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6" Jul 6 23:46:03.222707 containerd[2777]: time="2025-07-06T23:46:03.222662992Z" level=info msg="StopPodSandbox for \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\" returns successfully" Jul 6 23:46:03.223092 containerd[2777]: time="2025-07-06T23:46:03.223071072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f699b944c-lr8j7,Uid:77b24a7d-db32-4239-b789-7ecb6aad130e,Namespace:calico-apiserver,Attempt:2,}" Jul 6 23:46:03.223173 containerd[2777]: time="2025-07-06T23:46:03.223075352Z" level=info msg="StopPodSandbox for \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\"" Jul 6 23:46:03.223290 containerd[2777]: time="2025-07-06T23:46:03.223277312Z" level=info msg="Ensure that sandbox 1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6 in 
task-service has been cleanup successfully" Jul 6 23:46:03.223446 containerd[2777]: time="2025-07-06T23:46:03.223431032Z" level=info msg="TearDown network for sandbox \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\" successfully" Jul 6 23:46:03.223468 containerd[2777]: time="2025-07-06T23:46:03.223446312Z" level=info msg="StopPodSandbox for \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\" returns successfully" Jul 6 23:46:03.223703 containerd[2777]: time="2025-07-06T23:46:03.223685992Z" level=info msg="StopPodSandbox for \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\"" Jul 6 23:46:03.223775 containerd[2777]: time="2025-07-06T23:46:03.223764352Z" level=info msg="TearDown network for sandbox \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\" successfully" Jul 6 23:46:03.223802 containerd[2777]: time="2025-07-06T23:46:03.223775272Z" level=info msg="StopPodSandbox for \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\" returns successfully" Jul 6 23:46:03.224238 containerd[2777]: time="2025-07-06T23:46:03.224215552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f699b944c-vw6wk,Uid:857da74c-a792-4f7a-9157-031cec6c674a,Namespace:calico-apiserver,Attempt:2,}" Jul 6 23:46:03.264996 containerd[2777]: time="2025-07-06T23:46:03.264945033Z" level=error msg="Failed to destroy network for sandbox \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.267909 containerd[2777]: time="2025-07-06T23:46:03.265297553Z" level=error msg="encountered an error cleaning up failed sandbox \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.267909 containerd[2777]: time="2025-07-06T23:46:03.265354593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q9rm8,Uid:149827c6-4bc8-4f81-a88b-f0d5697ac2f9,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.267909 containerd[2777]: time="2025-07-06T23:46:03.265985433Z" level=error msg="Failed to destroy network for sandbox \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.267909 containerd[2777]: time="2025-07-06T23:46:03.266322153Z" level=error msg="encountered an error cleaning up failed sandbox \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.267909 containerd[2777]: 
time="2025-07-06T23:46:03.266369993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r6sm8,Uid:994a699b-ff00-478f-a192-50e32c9ff313,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.268114 kubelet[4301]: E0706 23:46:03.265553 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.268114 kubelet[4301]: E0706 23:46:03.265614 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q9rm8" Jul 6 23:46:03.268114 kubelet[4301]: E0706 23:46:03.265636 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q9rm8" Jul 6 23:46:03.268199 kubelet[4301]: E0706 23:46:03.265678 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-q9rm8_kube-system(149827c6-4bc8-4f81-a88b-f0d5697ac2f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-q9rm8_kube-system(149827c6-4bc8-4f81-a88b-f0d5697ac2f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q9rm8" podUID="149827c6-4bc8-4f81-a88b-f0d5697ac2f9" Jul 6 23:46:03.268199 kubelet[4301]: E0706 23:46:03.266507 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.268199 kubelet[4301]: E0706 23:46:03.266549 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-r6sm8" Jul 6 23:46:03.268296 kubelet[4301]: E0706 23:46:03.266567 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-r6sm8" Jul 6 23:46:03.268296 kubelet[4301]: E0706 23:46:03.266601 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-r6sm8_kube-system(994a699b-ff00-478f-a192-50e32c9ff313)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-r6sm8_kube-system(994a699b-ff00-478f-a192-50e32c9ff313)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-r6sm8" podUID="994a699b-ff00-478f-a192-50e32c9ff313" Jul 6 23:46:03.270223 containerd[2777]: time="2025-07-06T23:46:03.270182153Z" level=error msg="Failed to destroy network for sandbox \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.270685 containerd[2777]: time="2025-07-06T23:46:03.270652353Z" level=error msg="encountered an error cleaning up failed sandbox \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.270733 containerd[2777]: time="2025-07-06T23:46:03.270705713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4c6f9887-2rpdn,Uid:0269473e-9b77-4370-b872-0e13aff1fbe2,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.270866 kubelet[4301]: E0706 23:46:03.270836 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.270912 kubelet[4301]: E0706 23:46:03.270890 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b4c6f9887-2rpdn" Jul 6 23:46:03.270934 kubelet[4301]: E0706 23:46:03.270910 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b4c6f9887-2rpdn" Jul 6 23:46:03.270965 kubelet[4301]: E0706 23:46:03.270945 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b4c6f9887-2rpdn_calico-system(0269473e-9b77-4370-b872-0e13aff1fbe2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b4c6f9887-2rpdn_calico-system(0269473e-9b77-4370-b872-0e13aff1fbe2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b4c6f9887-2rpdn" podUID="0269473e-9b77-4370-b872-0e13aff1fbe2" Jul 6 23:46:03.271107 containerd[2777]: time="2025-07-06T23:46:03.270850233Z" level=error msg="Failed to destroy network for sandbox \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.271453 containerd[2777]: time="2025-07-06T23:46:03.271427153Z" level=error msg="encountered an error cleaning up failed sandbox \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.271534 containerd[2777]: time="2025-07-06T23:46:03.271519153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69cf447d8-mt4ls,Uid:e0b41b8f-0f59-42bb-bf26-a6f229d75d87,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.271674 kubelet[4301]: E0706 23:46:03.271650 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.271699 kubelet[4301]: E0706 23:46:03.271689 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69cf447d8-mt4ls" Jul 6 23:46:03.271729 kubelet[4301]: E0706 23:46:03.271707 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69cf447d8-mt4ls" Jul 6 23:46:03.271757 kubelet[4301]: E0706 23:46:03.271739 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69cf447d8-mt4ls_calico-system(e0b41b8f-0f59-42bb-bf26-a6f229d75d87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69cf447d8-mt4ls_calico-system(e0b41b8f-0f59-42bb-bf26-a6f229d75d87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69cf447d8-mt4ls" podUID="e0b41b8f-0f59-42bb-bf26-a6f229d75d87" Jul 6 23:46:03.272127 containerd[2777]: time="2025-07-06T23:46:03.272093833Z" level=error msg="Failed to destroy network for sandbox \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.272440 containerd[2777]: time="2025-07-06T23:46:03.272416153Z" level=error msg="encountered an error cleaning up failed sandbox \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.272484 containerd[2777]: time="2025-07-06T23:46:03.272469393Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fm57w,Uid:2789cf2c-eaba-4e19-aa42-51b1fb03b3f4,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.272646 kubelet[4301]: E0706 23:46:03.272617 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.272676 kubelet[4301]: E0706 23:46:03.272660 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-fm57w" Jul 6 23:46:03.272697 kubelet[4301]: E0706 23:46:03.272681 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-fm57w" Jul 6 23:46:03.272733 kubelet[4301]: E0706 23:46:03.272714 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-fm57w_calico-system(2789cf2c-eaba-4e19-aa42-51b1fb03b3f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-fm57w_calico-system(2789cf2c-eaba-4e19-aa42-51b1fb03b3f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-fm57w" podUID="2789cf2c-eaba-4e19-aa42-51b1fb03b3f4" Jul 6 23:46:03.272971 containerd[2777]: time="2025-07-06T23:46:03.272937633Z" level=error msg="Failed to destroy network for sandbox \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.273211 containerd[2777]: time="2025-07-06T23:46:03.273172473Z" level=error msg="Failed to destroy network for sandbox \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.273314 containerd[2777]: time="2025-07-06T23:46:03.273292993Z" level=error msg="encountered an error cleaning up failed sandbox \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.273354 containerd[2777]: time="2025-07-06T23:46:03.273340113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plknp,Uid:8f8629de-298f-4ed4-a595-838d54ca032b,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.273472 kubelet[4301]: E0706 23:46:03.273454 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.273500 kubelet[4301]: E0706 23:46:03.273485 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-plknp" Jul 6 23:46:03.273522 kubelet[4301]: E0706 23:46:03.273500 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-plknp" Jul 6 23:46:03.273551 kubelet[4301]: E0706 23:46:03.273532 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-plknp_calico-system(8f8629de-298f-4ed4-a595-838d54ca032b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-plknp_calico-system(8f8629de-298f-4ed4-a595-838d54ca032b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-plknp" podUID="8f8629de-298f-4ed4-a595-838d54ca032b" Jul 6 23:46:03.273739 containerd[2777]: time="2025-07-06T23:46:03.273708753Z" level=error msg="encountered an error cleaning up failed sandbox \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.273779 containerd[2777]: time="2025-07-06T23:46:03.273764553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f699b944c-vw6wk,Uid:857da74c-a792-4f7a-9157-031cec6c674a,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.273911 kubelet[4301]: E0706 23:46:03.273889 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.273943 
kubelet[4301]: E0706 23:46:03.273926 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f699b944c-vw6wk" Jul 6 23:46:03.273964 kubelet[4301]: E0706 23:46:03.273943 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f699b944c-vw6wk" Jul 6 23:46:03.273992 kubelet[4301]: E0706 23:46:03.273973 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f699b944c-vw6wk_calico-apiserver(857da74c-a792-4f7a-9157-031cec6c674a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f699b944c-vw6wk_calico-apiserver(857da74c-a792-4f7a-9157-031cec6c674a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f699b944c-vw6wk" podUID="857da74c-a792-4f7a-9157-031cec6c674a" Jul 6 23:46:03.274104 containerd[2777]: time="2025-07-06T23:46:03.274076913Z" level=error msg="Failed to destroy network for sandbox \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.274369 containerd[2777]: time="2025-07-06T23:46:03.274347113Z" level=error msg="encountered an error cleaning up failed sandbox \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.274405 containerd[2777]: time="2025-07-06T23:46:03.274391793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f699b944c-lr8j7,Uid:77b24a7d-db32-4239-b789-7ecb6aad130e,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.274504 kubelet[4301]: E0706 23:46:03.274482 4301 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:46:03.274556 kubelet[4301]: E0706 23:46:03.274521 4301 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f699b944c-lr8j7" Jul 6 23:46:03.274578 kubelet[4301]: E0706 23:46:03.274560 4301 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f699b944c-lr8j7" Jul 6 23:46:03.274606 kubelet[4301]: E0706 23:46:03.274590 4301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f699b944c-lr8j7_calico-apiserver(77b24a7d-db32-4239-b789-7ecb6aad130e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f699b944c-lr8j7_calico-apiserver(77b24a7d-db32-4239-b789-7ecb6aad130e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f699b944c-lr8j7" podUID="77b24a7d-db32-4239-b789-7ecb6aad130e" Jul 6 23:46:03.422830 systemd[1]: run-netns-cni\x2d27c62d64\x2d6b93\x2d7617\x2da5e7\x2dd60851b2fa92.mount: Deactivated successfully. Jul 6 23:46:03.422916 systemd[1]: run-netns-cni\x2d75b8d778\x2dd5ed\x2deccc\x2def21\x2d5c82e29478dc.mount: Deactivated successfully. Jul 6 23:46:03.422959 systemd[1]: run-netns-cni\x2d3e4dfc98\x2d7972\x2d81ae\x2dbd24\x2d4066b2a08fe7.mount: Deactivated successfully. Jul 6 23:46:03.752469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1699135033.mount: Deactivated successfully. 
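All of the Attempt:2 sandboxes above fail for exactly the same reason as the first round, and they will keep failing until calico/node, whose image pull completes in the entries that follow, writes /var/lib/calico/nodename. A small operator-style wait loop, under the assumption that watching for that file is an acceptable readiness signal:

package main

import (
	"fmt"
	"os"
	"time"
)

// nodenameFile is the readiness marker the failing sandboxes are waiting on.
const nodenameFile = "/var/lib/calico/nodename"

// waitForNodename polls until calico/node has written its nodename file or
// the timeout expires; the timeout and interval values are arbitrary choices.
func waitForNodename(timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(nodenameFile); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s still missing after %s", nodenameFile, timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForNodename(2*time.Minute, 2*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico/node is ready; sandbox retries should now succeed")
}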
Jul 6 23:46:03.769811 containerd[2777]: time="2025-07-06T23:46:03.769744445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 6 23:46:03.769895 containerd[2777]: time="2025-07-06T23:46:03.769754045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:03.770589 containerd[2777]: time="2025-07-06T23:46:03.770560605Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:03.772216 containerd[2777]: time="2025-07-06T23:46:03.772189325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:03.772819 containerd[2777]: time="2025-07-06T23:46:03.772796685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 2.569172546s" Jul 6 23:46:03.772846 containerd[2777]: time="2025-07-06T23:46:03.772823365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 6 23:46:03.778314 containerd[2777]: time="2025-07-06T23:46:03.778288325Z" level=info msg="CreateContainer within sandbox \"04de9248d97e34a100219856c880a3521400955d0c7ba650af8897bf43d93b9b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:46:03.799625 containerd[2777]: time="2025-07-06T23:46:03.799592486Z" level=info msg="CreateContainer within sandbox \"04de9248d97e34a100219856c880a3521400955d0c7ba650af8897bf43d93b9b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"43d6e0571a713e2aa4b997986ccb946aa1b96a82820a4cae2b7a7cdf7633fe40\"" Jul 6 23:46:03.799959 containerd[2777]: time="2025-07-06T23:46:03.799940006Z" level=info msg="StartContainer for \"43d6e0571a713e2aa4b997986ccb946aa1b96a82820a4cae2b7a7cdf7633fe40\"" Jul 6 23:46:03.826045 systemd[1]: Started cri-containerd-43d6e0571a713e2aa4b997986ccb946aa1b96a82820a4cae2b7a7cdf7633fe40.scope - libcontainer container 43d6e0571a713e2aa4b997986ccb946aa1b96a82820a4cae2b7a7cdf7633fe40. Jul 6 23:46:03.849547 containerd[2777]: time="2025-07-06T23:46:03.849515407Z" level=info msg="StartContainer for \"43d6e0571a713e2aa4b997986ccb946aa1b96a82820a4cae2b7a7cdf7633fe40\" returns successfully" Jul 6 23:46:03.982042 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:46:03.982127 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
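With the calico/node image pulled and its container started, the WireGuard module load is unsurprising, presumably calico-node probing for kernel support of its optional encrypted pod-to-pod traffic. The entries that follow show kubelet tearing the failed sandboxes down once more and resubmitting them with the Attempt counter bumped to 3. The sketch below shows that fail / abandon / retry shape in the abstract; the function types, retry limit, and delay are assumptions, not kubelet's real pod-worker code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// runSandboxFunc stands in for a CRI RunPodSandbox call keyed by attempt number.
type runSandboxFunc func(attempt int) error

// retrySandbox models the retry shape visible in the journal: each failed
// attempt is abandoned and a new one is dispatched with a higher attempt
// number after a delay.
func retrySandbox(run runSandboxFunc, maxAttempts int, delay time.Duration) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = run(attempt); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; retrying\n", attempt, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	readyAfter := 3 // pretend the CNI becomes healthy on the third attempt
	err := retrySandbox(func(attempt int) error {
		if attempt < readyAfter {
			return errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)
		}
		return nil
	}, 5, 10*time.Millisecond)
	if err != nil {
		fmt.Println("gave up:", err)
		return
	}
	fmt.Println("sandbox created")
}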
Jul 6 23:46:04.227111 kubelet[4301]: I0706 23:46:04.227064 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e" Jul 6 23:46:04.228350 containerd[2777]: time="2025-07-06T23:46:04.228322616Z" level=info msg="StopPodSandbox for \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\"" Jul 6 23:46:04.228544 containerd[2777]: time="2025-07-06T23:46:04.228498256Z" level=info msg="Ensure that sandbox 6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e in task-service has been cleanup successfully" Jul 6 23:46:04.228775 containerd[2777]: time="2025-07-06T23:46:04.228760136Z" level=info msg="TearDown network for sandbox \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\" successfully" Jul 6 23:46:04.228806 containerd[2777]: time="2025-07-06T23:46:04.228775776Z" level=info msg="StopPodSandbox for \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\" returns successfully" Jul 6 23:46:04.229724 containerd[2777]: time="2025-07-06T23:46:04.229702816Z" level=info msg="StopPodSandbox for \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\"" Jul 6 23:46:04.229802 containerd[2777]: time="2025-07-06T23:46:04.229790056Z" level=info msg="TearDown network for sandbox \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\" successfully" Jul 6 23:46:04.229825 containerd[2777]: time="2025-07-06T23:46:04.229801816Z" level=info msg="StopPodSandbox for \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\" returns successfully" Jul 6 23:46:04.230011 containerd[2777]: time="2025-07-06T23:46:04.229992576Z" level=info msg="StopPodSandbox for \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\"" Jul 6 23:46:04.230090 containerd[2777]: time="2025-07-06T23:46:04.230076936Z" level=info msg="TearDown network for sandbox \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\" successfully" Jul 6 23:46:04.230131 containerd[2777]: time="2025-07-06T23:46:04.230090896Z" level=info msg="StopPodSandbox for \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\" returns successfully" Jul 6 23:46:04.230310 kubelet[4301]: I0706 23:46:04.230289 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686" Jul 6 23:46:04.230823 containerd[2777]: time="2025-07-06T23:46:04.230804976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f699b944c-lr8j7,Uid:77b24a7d-db32-4239-b789-7ecb6aad130e,Namespace:calico-apiserver,Attempt:3,}" Jul 6 23:46:04.231348 containerd[2777]: time="2025-07-06T23:46:04.231327176Z" level=info msg="StopPodSandbox for \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\"" Jul 6 23:46:04.231486 containerd[2777]: time="2025-07-06T23:46:04.231469296Z" level=info msg="Ensure that sandbox 8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686 in task-service has been cleanup successfully" Jul 6 23:46:04.231737 containerd[2777]: time="2025-07-06T23:46:04.231717656Z" level=info msg="TearDown network for sandbox \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\" successfully" Jul 6 23:46:04.231737 containerd[2777]: time="2025-07-06T23:46:04.231736216Z" level=info msg="StopPodSandbox for \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\" returns successfully" Jul 6 23:46:04.231945 kubelet[4301]: I0706 
23:46:04.231771 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5" Jul 6 23:46:04.232235 containerd[2777]: time="2025-07-06T23:46:04.232209216Z" level=info msg="StopPodSandbox for \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\"" Jul 6 23:46:04.234877 containerd[2777]: time="2025-07-06T23:46:04.232213056Z" level=info msg="StopPodSandbox for \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\"" Jul 6 23:46:04.234877 containerd[2777]: time="2025-07-06T23:46:04.232356096Z" level=info msg="Ensure that sandbox 7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5 in task-service has been cleanup successfully" Jul 6 23:46:04.234877 containerd[2777]: time="2025-07-06T23:46:04.232377216Z" level=info msg="TearDown network for sandbox \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\" successfully" Jul 6 23:46:04.234877 containerd[2777]: time="2025-07-06T23:46:04.232389016Z" level=info msg="StopPodSandbox for \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\" returns successfully" Jul 6 23:46:04.234877 containerd[2777]: time="2025-07-06T23:46:04.232589456Z" level=info msg="TearDown network for sandbox \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\" successfully" Jul 6 23:46:04.234877 containerd[2777]: time="2025-07-06T23:46:04.232605016Z" level=info msg="StopPodSandbox for \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\" returns successfully" Jul 6 23:46:04.234877 containerd[2777]: time="2025-07-06T23:46:04.233256776Z" level=info msg="StopPodSandbox for \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\"" Jul 6 23:46:04.234877 containerd[2777]: time="2025-07-06T23:46:04.233344416Z" level=info msg="TearDown network for sandbox \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\" successfully" Jul 6 23:46:04.234877 containerd[2777]: time="2025-07-06T23:46:04.233353936Z" level=info msg="StopPodSandbox for \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\" returns successfully" Jul 6 23:46:04.234877 containerd[2777]: time="2025-07-06T23:46:04.233414296Z" level=info msg="StopPodSandbox for \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\"" Jul 6 23:46:04.234877 containerd[2777]: time="2025-07-06T23:46:04.233467056Z" level=info msg="TearDown network for sandbox \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\" successfully" Jul 6 23:46:04.234877 containerd[2777]: time="2025-07-06T23:46:04.233475256Z" level=info msg="StopPodSandbox for \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\" returns successfully" Jul 6 23:46:04.234877 containerd[2777]: time="2025-07-06T23:46:04.234008136Z" level=info msg="StopPodSandbox for \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\"" Jul 6 23:46:04.234877 containerd[2777]: time="2025-07-06T23:46:04.234085776Z" level=info msg="TearDown network for sandbox \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\" successfully" Jul 6 23:46:04.234877 containerd[2777]: time="2025-07-06T23:46:04.234094736Z" level=info msg="StopPodSandbox for \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\" returns successfully" Jul 6 23:46:04.235242 containerd[2777]: time="2025-07-06T23:46:04.234965816Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-r6sm8,Uid:994a699b-ff00-478f-a192-50e32c9ff313,Namespace:kube-system,Attempt:3,}" Jul 6 23:46:04.237735 containerd[2777]: time="2025-07-06T23:46:04.236091696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f699b944c-vw6wk,Uid:857da74c-a792-4f7a-9157-031cec6c674a,Namespace:calico-apiserver,Attempt:3,}" Jul 6 23:46:04.237735 containerd[2777]: time="2025-07-06T23:46:04.237131576Z" level=info msg="StopPodSandbox for \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\"" Jul 6 23:46:04.237735 containerd[2777]: time="2025-07-06T23:46:04.237278456Z" level=info msg="Ensure that sandbox 5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1 in task-service has been cleanup successfully" Jul 6 23:46:04.239854 kubelet[4301]: I0706 23:46:04.236715 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1" Jul 6 23:46:04.240021 containerd[2777]: time="2025-07-06T23:46:04.239927416Z" level=info msg="TearDown network for sandbox \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\" successfully" Jul 6 23:46:04.240021 containerd[2777]: time="2025-07-06T23:46:04.239956256Z" level=info msg="StopPodSandbox for \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\" returns successfully" Jul 6 23:46:04.242959 containerd[2777]: time="2025-07-06T23:46:04.242914256Z" level=info msg="StopPodSandbox for \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\"" Jul 6 23:46:04.243055 containerd[2777]: time="2025-07-06T23:46:04.243002176Z" level=info msg="TearDown network for sandbox \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\" successfully" Jul 6 23:46:04.243055 containerd[2777]: time="2025-07-06T23:46:04.243014296Z" level=info msg="StopPodSandbox for \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\" returns successfully" Jul 6 23:46:04.243648 containerd[2777]: time="2025-07-06T23:46:04.243467616Z" level=info msg="StopPodSandbox for \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\"" Jul 6 23:46:04.243648 containerd[2777]: time="2025-07-06T23:46:04.243545296Z" level=info msg="TearDown network for sandbox \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\" successfully" Jul 6 23:46:04.243648 containerd[2777]: time="2025-07-06T23:46:04.243555936Z" level=info msg="StopPodSandbox for \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\" returns successfully" Jul 6 23:46:04.244138 containerd[2777]: time="2025-07-06T23:46:04.244114496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4c6f9887-2rpdn,Uid:0269473e-9b77-4370-b872-0e13aff1fbe2,Namespace:calico-system,Attempt:3,}" Jul 6 23:46:04.244203 kubelet[4301]: I0706 23:46:04.244165 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671" Jul 6 23:46:04.244611 containerd[2777]: time="2025-07-06T23:46:04.244589816Z" level=info msg="StopPodSandbox for \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\"" Jul 6 23:46:04.244777 containerd[2777]: time="2025-07-06T23:46:04.244723456Z" level=info msg="Ensure that sandbox 4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671 in task-service has been cleanup successfully" Jul 6 23:46:04.244902 containerd[2777]: time="2025-07-06T23:46:04.244887576Z" 
level=info msg="TearDown network for sandbox \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\" successfully" Jul 6 23:46:04.244926 containerd[2777]: time="2025-07-06T23:46:04.244901296Z" level=info msg="StopPodSandbox for \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\" returns successfully" Jul 6 23:46:04.245219 containerd[2777]: time="2025-07-06T23:46:04.245198776Z" level=info msg="StopPodSandbox for \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\"" Jul 6 23:46:04.245292 containerd[2777]: time="2025-07-06T23:46:04.245280256Z" level=info msg="TearDown network for sandbox \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\" successfully" Jul 6 23:46:04.245319 containerd[2777]: time="2025-07-06T23:46:04.245292576Z" level=info msg="StopPodSandbox for \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\" returns successfully" Jul 6 23:46:04.245544 containerd[2777]: time="2025-07-06T23:46:04.245505936Z" level=info msg="StopPodSandbox for \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\"" Jul 6 23:46:04.245596 kubelet[4301]: I0706 23:46:04.245579 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83" Jul 6 23:46:04.245627 containerd[2777]: time="2025-07-06T23:46:04.245590616Z" level=info msg="TearDown network for sandbox \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\" successfully" Jul 6 23:46:04.245627 containerd[2777]: time="2025-07-06T23:46:04.245602496Z" level=info msg="StopPodSandbox for \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\" returns successfully" Jul 6 23:46:04.246024 containerd[2777]: time="2025-07-06T23:46:04.246002056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q9rm8,Uid:149827c6-4bc8-4f81-a88b-f0d5697ac2f9,Namespace:kube-system,Attempt:3,}" Jul 6 23:46:04.246080 containerd[2777]: time="2025-07-06T23:46:04.246055256Z" level=info msg="StopPodSandbox for \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\"" Jul 6 23:46:04.246240 containerd[2777]: time="2025-07-06T23:46:04.246225296Z" level=info msg="Ensure that sandbox 64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83 in task-service has been cleanup successfully" Jul 6 23:46:04.246451 containerd[2777]: time="2025-07-06T23:46:04.246424336Z" level=info msg="TearDown network for sandbox \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\" successfully" Jul 6 23:46:04.246451 containerd[2777]: time="2025-07-06T23:46:04.246440776Z" level=info msg="StopPodSandbox for \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\" returns successfully" Jul 6 23:46:04.246688 containerd[2777]: time="2025-07-06T23:46:04.246672496Z" level=info msg="StopPodSandbox for \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\"" Jul 6 23:46:04.246818 containerd[2777]: time="2025-07-06T23:46:04.246746856Z" level=info msg="TearDown network for sandbox \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\" successfully" Jul 6 23:46:04.246818 containerd[2777]: time="2025-07-06T23:46:04.246757696Z" level=info msg="StopPodSandbox for \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\" returns successfully" Jul 6 23:46:04.246982 kubelet[4301]: I0706 23:46:04.246962 4301 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48" Jul 6 23:46:04.247015 containerd[2777]: time="2025-07-06T23:46:04.246991456Z" level=info msg="StopPodSandbox for \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\"" Jul 6 23:46:04.247086 containerd[2777]: time="2025-07-06T23:46:04.247074456Z" level=info msg="TearDown network for sandbox \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\" successfully" Jul 6 23:46:04.247109 containerd[2777]: time="2025-07-06T23:46:04.247085856Z" level=info msg="StopPodSandbox for \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\" returns successfully" Jul 6 23:46:04.247400 containerd[2777]: time="2025-07-06T23:46:04.247384536Z" level=info msg="StopPodSandbox for \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\"" Jul 6 23:46:04.247529 containerd[2777]: time="2025-07-06T23:46:04.247517616Z" level=info msg="Ensure that sandbox fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48 in task-service has been cleanup successfully" Jul 6 23:46:04.247681 containerd[2777]: time="2025-07-06T23:46:04.247570136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plknp,Uid:8f8629de-298f-4ed4-a595-838d54ca032b,Namespace:calico-system,Attempt:3,}" Jul 6 23:46:04.247735 containerd[2777]: time="2025-07-06T23:46:04.247686616Z" level=info msg="TearDown network for sandbox \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\" successfully" Jul 6 23:46:04.247735 containerd[2777]: time="2025-07-06T23:46:04.247701216Z" level=info msg="StopPodSandbox for \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\" returns successfully" Jul 6 23:46:04.247933 containerd[2777]: time="2025-07-06T23:46:04.247913736Z" level=info msg="StopPodSandbox for \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\"" Jul 6 23:46:04.248002 containerd[2777]: time="2025-07-06T23:46:04.247991496Z" level=info msg="TearDown network for sandbox \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\" successfully" Jul 6 23:46:04.248030 containerd[2777]: time="2025-07-06T23:46:04.248001936Z" level=info msg="StopPodSandbox for \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\" returns successfully" Jul 6 23:46:04.248195 containerd[2777]: time="2025-07-06T23:46:04.248178896Z" level=info msg="StopPodSandbox for \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\"" Jul 6 23:46:04.248269 containerd[2777]: time="2025-07-06T23:46:04.248254216Z" level=info msg="TearDown network for sandbox \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\" successfully" Jul 6 23:46:04.248269 containerd[2777]: time="2025-07-06T23:46:04.248265056Z" level=info msg="StopPodSandbox for \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\" returns successfully" Jul 6 23:46:04.248391 kubelet[4301]: I0706 23:46:04.248376 4301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f" Jul 6 23:46:04.248962 containerd[2777]: time="2025-07-06T23:46:04.248636536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fm57w,Uid:2789cf2c-eaba-4e19-aa42-51b1fb03b3f4,Namespace:calico-system,Attempt:3,}" Jul 6 23:46:04.248962 containerd[2777]: time="2025-07-06T23:46:04.248658536Z" level=info msg="StopPodSandbox for \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\"" 
Jul 6 23:46:04.248962 containerd[2777]: time="2025-07-06T23:46:04.248792376Z" level=info msg="Ensure that sandbox 3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f in task-service has been cleanup successfully" Jul 6 23:46:04.248962 containerd[2777]: time="2025-07-06T23:46:04.248952656Z" level=info msg="TearDown network for sandbox \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\" successfully" Jul 6 23:46:04.248962 containerd[2777]: time="2025-07-06T23:46:04.248966456Z" level=info msg="StopPodSandbox for \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\" returns successfully" Jul 6 23:46:04.249250 containerd[2777]: time="2025-07-06T23:46:04.249228296Z" level=info msg="StopPodSandbox for \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\"" Jul 6 23:46:04.249335 containerd[2777]: time="2025-07-06T23:46:04.249321856Z" level=info msg="TearDown network for sandbox \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\" successfully" Jul 6 23:46:04.249335 containerd[2777]: time="2025-07-06T23:46:04.249333616Z" level=info msg="StopPodSandbox for \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\" returns successfully" Jul 6 23:46:04.249556 kubelet[4301]: I0706 23:46:04.249518 4301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mp7hl" podStartSLOduration=1.830380067 podStartE2EDuration="8.249507896s" podCreationTimestamp="2025-07-06 23:45:56 +0000 UTC" firstStartedPulling="2025-07-06 23:45:57.354234456 +0000 UTC m=+20.260981717" lastFinishedPulling="2025-07-06 23:46:03.773362285 +0000 UTC m=+26.680109546" observedRunningTime="2025-07-06 23:46:04.249258376 +0000 UTC m=+27.156005637" watchObservedRunningTime="2025-07-06 23:46:04.249507896 +0000 UTC m=+27.156255157" Jul 6 23:46:04.249625 containerd[2777]: time="2025-07-06T23:46:04.249544576Z" level=info msg="StopPodSandbox for \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\"" Jul 6 23:46:04.249648 containerd[2777]: time="2025-07-06T23:46:04.249630856Z" level=info msg="TearDown network for sandbox \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\" successfully" Jul 6 23:46:04.249648 containerd[2777]: time="2025-07-06T23:46:04.249642096Z" level=info msg="StopPodSandbox for \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\" returns successfully" Jul 6 23:46:04.318865 kubelet[4301]: I0706 23:46:04.318830 4301 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4rw9\" (UniqueName: \"kubernetes.io/projected/e0b41b8f-0f59-42bb-bf26-a6f229d75d87-kube-api-access-s4rw9\") pod \"e0b41b8f-0f59-42bb-bf26-a6f229d75d87\" (UID: \"e0b41b8f-0f59-42bb-bf26-a6f229d75d87\") " Jul 6 23:46:04.319005 kubelet[4301]: I0706 23:46:04.318882 4301 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0b41b8f-0f59-42bb-bf26-a6f229d75d87-whisker-backend-key-pair\") pod \"e0b41b8f-0f59-42bb-bf26-a6f229d75d87\" (UID: \"e0b41b8f-0f59-42bb-bf26-a6f229d75d87\") " Jul 6 23:46:04.319005 kubelet[4301]: I0706 23:46:04.318904 4301 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0b41b8f-0f59-42bb-bf26-a6f229d75d87-whisker-ca-bundle\") pod \"e0b41b8f-0f59-42bb-bf26-a6f229d75d87\" (UID: \"e0b41b8f-0f59-42bb-bf26-a6f229d75d87\") " Jul 6 23:46:04.319474 kubelet[4301]: 
I0706 23:46:04.319227 4301 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0b41b8f-0f59-42bb-bf26-a6f229d75d87-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e0b41b8f-0f59-42bb-bf26-a6f229d75d87" (UID: "e0b41b8f-0f59-42bb-bf26-a6f229d75d87"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 6 23:46:04.321265 kubelet[4301]: I0706 23:46:04.321239 4301 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0b41b8f-0f59-42bb-bf26-a6f229d75d87-kube-api-access-s4rw9" (OuterVolumeSpecName: "kube-api-access-s4rw9") pod "e0b41b8f-0f59-42bb-bf26-a6f229d75d87" (UID: "e0b41b8f-0f59-42bb-bf26-a6f229d75d87"). InnerVolumeSpecName "kube-api-access-s4rw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:46:04.321613 kubelet[4301]: I0706 23:46:04.321585 4301 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b41b8f-0f59-42bb-bf26-a6f229d75d87-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e0b41b8f-0f59-42bb-bf26-a6f229d75d87" (UID: "e0b41b8f-0f59-42bb-bf26-a6f229d75d87"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 6 23:46:04.340453 systemd-networkd[2683]: cali261612f9f0d: Link UP Jul 6 23:46:04.340660 systemd-networkd[2683]: cali261612f9f0d: Gained carrier Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.257 [INFO][6902] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.275 [INFO][6902] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--lr8j7-eth0 calico-apiserver-7f699b944c- calico-apiserver 77b24a7d-db32-4239-b789-7ecb6aad130e 811 0 2025-07-06 23:45:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f699b944c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230.2.1-a-784d2181dd calico-apiserver-7f699b944c-lr8j7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali261612f9f0d [] [] }} ContainerID="5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" Namespace="calico-apiserver" Pod="calico-apiserver-7f699b944c-lr8j7" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--lr8j7-" Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.275 [INFO][6902] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" Namespace="calico-apiserver" Pod="calico-apiserver-7f699b944c-lr8j7" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--lr8j7-eth0" Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.307 [INFO][7086] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" HandleID="k8s-pod-network.5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" Workload="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--lr8j7-eth0" Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.308 [INFO][7086] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" HandleID="k8s-pod-network.5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" Workload="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--lr8j7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003625b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230.2.1-a-784d2181dd", "pod":"calico-apiserver-7f699b944c-lr8j7", "timestamp":"2025-07-06 23:46:04.307941257 +0000 UTC"}, Hostname:"ci-4230.2.1-a-784d2181dd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.308 [INFO][7086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.308 [INFO][7086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.308 [INFO][7086] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.2.1-a-784d2181dd' Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.317 [INFO][7086] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.320 [INFO][7086] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.324 [INFO][7086] ipam/ipam.go 511: Trying affinity for 192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.325 [INFO][7086] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.326 [INFO][7086] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.326 [INFO][7086] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.327 [INFO][7086] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363 Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.330 [INFO][7086] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.333 [INFO][7086] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.129/26] block=192.168.9.128/26 handle="k8s-pod-network.5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.333 [INFO][7086] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.129/26] handle="k8s-pod-network.5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.333 
[INFO][7086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:46:04.347501 containerd[2777]: 2025-07-06 23:46:04.333 [INFO][7086] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.129/26] IPv6=[] ContainerID="5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" HandleID="k8s-pod-network.5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" Workload="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--lr8j7-eth0" Jul 6 23:46:04.347961 containerd[2777]: 2025-07-06 23:46:04.335 [INFO][6902] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" Namespace="calico-apiserver" Pod="calico-apiserver-7f699b944c-lr8j7" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--lr8j7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--lr8j7-eth0", GenerateName:"calico-apiserver-7f699b944c-", Namespace:"calico-apiserver", SelfLink:"", UID:"77b24a7d-db32-4239-b789-7ecb6aad130e", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 45, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f699b944c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.2.1-a-784d2181dd", ContainerID:"", Pod:"calico-apiserver-7f699b944c-lr8j7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali261612f9f0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:46:04.347961 containerd[2777]: 2025-07-06 23:46:04.335 [INFO][6902] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.129/32] ContainerID="5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" Namespace="calico-apiserver" Pod="calico-apiserver-7f699b944c-lr8j7" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--lr8j7-eth0" Jul 6 23:46:04.347961 containerd[2777]: 2025-07-06 23:46:04.335 [INFO][6902] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali261612f9f0d ContainerID="5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" Namespace="calico-apiserver" Pod="calico-apiserver-7f699b944c-lr8j7" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--lr8j7-eth0" Jul 6 23:46:04.347961 containerd[2777]: 2025-07-06 23:46:04.340 [INFO][6902] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" Namespace="calico-apiserver" Pod="calico-apiserver-7f699b944c-lr8j7" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--lr8j7-eth0" Jul 6 
23:46:04.347961 containerd[2777]: 2025-07-06 23:46:04.340 [INFO][6902] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" Namespace="calico-apiserver" Pod="calico-apiserver-7f699b944c-lr8j7" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--lr8j7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--lr8j7-eth0", GenerateName:"calico-apiserver-7f699b944c-", Namespace:"calico-apiserver", SelfLink:"", UID:"77b24a7d-db32-4239-b789-7ecb6aad130e", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 45, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f699b944c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.2.1-a-784d2181dd", ContainerID:"5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363", Pod:"calico-apiserver-7f699b944c-lr8j7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali261612f9f0d", MAC:"16:99:ad:e4:46:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:46:04.347961 containerd[2777]: 2025-07-06 23:46:04.346 [INFO][6902] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363" Namespace="calico-apiserver" Pod="calico-apiserver-7f699b944c-lr8j7" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--lr8j7-eth0" Jul 6 23:46:04.359941 containerd[2777]: time="2025-07-06T23:46:04.359882619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:46:04.359968 containerd[2777]: time="2025-07-06T23:46:04.359936299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:46:04.359968 containerd[2777]: time="2025-07-06T23:46:04.359947979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:04.360035 containerd[2777]: time="2025-07-06T23:46:04.360019099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:04.385989 systemd[1]: Started cri-containerd-5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363.scope - libcontainer container 5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363. 
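[editor's note] The kubelet's pod_startup_latency_tracker line above for calico-node-mp7hl reports podStartE2EDuration="8.249507896s" and podStartSLOduration=1.830380067. Those figures are consistent with E2E being watchObservedRunningTime minus podCreationTimestamp, and the SLO figure being E2E minus the image-pull window (lastFinishedPulling minus firstStartedPulling); the sketch below simply recomputes both from the timestamps in the log (the exclusion of pull time is an assumption read off the numbers, not taken from kubelet source):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches the "2025-07-06 23:45:56 +0000 UTC" form in the log;
	// time.Parse accepts the fractional seconds even though the layout omits them.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-07-06 23:45:56 +0000 UTC")              // podCreationTimestamp
	firstPull := parse("2025-07-06 23:45:57.354234456 +0000 UTC")  // firstStartedPulling
	lastPull := parse("2025-07-06 23:46:03.773362285 +0000 UTC")   // lastFinishedPulling
	observed := parse("2025-07-06 23:46:04.249507896 +0000 UTC")   // watchObservedRunningTime

	e2e := observed.Sub(created)         // 8.249507896s, matches podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 1.830380067s, matches podStartSLOduration

	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```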
Jul 6 23:46:04.409921 containerd[2777]: time="2025-07-06T23:46:04.409888100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f699b944c-lr8j7,Uid:77b24a7d-db32-4239-b789-7ecb6aad130e,Namespace:calico-apiserver,Attempt:3,} returns sandbox id \"5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363\"" Jul 6 23:46:04.410953 containerd[2777]: time="2025-07-06T23:46:04.410933820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:46:04.420125 kubelet[4301]: I0706 23:46:04.420105 4301 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4rw9\" (UniqueName: \"kubernetes.io/projected/e0b41b8f-0f59-42bb-bf26-a6f229d75d87-kube-api-access-s4rw9\") on node \"ci-4230.2.1-a-784d2181dd\" DevicePath \"\"" Jul 6 23:46:04.420194 kubelet[4301]: I0706 23:46:04.420127 4301 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0b41b8f-0f59-42bb-bf26-a6f229d75d87-whisker-backend-key-pair\") on node \"ci-4230.2.1-a-784d2181dd\" DevicePath \"\"" Jul 6 23:46:04.420194 kubelet[4301]: I0706 23:46:04.420137 4301 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0b41b8f-0f59-42bb-bf26-a6f229d75d87-whisker-ca-bundle\") on node \"ci-4230.2.1-a-784d2181dd\" DevicePath \"\"" Jul 6 23:46:04.428041 systemd[1]: run-netns-cni\x2d137bc667\x2db2c9\x2dcf8b\x2d7eb1\x2d614205af8c66.mount: Deactivated successfully. Jul 6 23:46:04.428121 systemd[1]: run-netns-cni\x2d993c9746\x2d58c7\x2dd77a\x2d43c0\x2d1015183a2d20.mount: Deactivated successfully. Jul 6 23:46:04.428166 systemd[1]: run-netns-cni\x2db70cabaa\x2da2f8\x2d8273\x2d448d\x2d224956ccb46d.mount: Deactivated successfully. Jul 6 23:46:04.428211 systemd[1]: run-netns-cni\x2d5d068398\x2dc34a\x2d9c64\x2dbb4a\x2daaa73a58a96e.mount: Deactivated successfully. Jul 6 23:46:04.428254 systemd[1]: run-netns-cni\x2d45077c52\x2dcf6e\x2d32ca\x2de871\x2d7d40ad5a61b8.mount: Deactivated successfully. Jul 6 23:46:04.428296 systemd[1]: run-netns-cni\x2d9de6c1ce\x2d84c0\x2de239\x2d55bb\x2d99b39139df41.mount: Deactivated successfully. Jul 6 23:46:04.428337 systemd[1]: run-netns-cni\x2d9f3d9695\x2de27d\x2dceb6\x2d337d\x2d370436cbf62e.mount: Deactivated successfully. Jul 6 23:46:04.428379 systemd[1]: run-netns-cni\x2dbb714a9b\x2d5b5f\x2d2cdf\x2d1cd4\x2daf4747d7376c.mount: Deactivated successfully. Jul 6 23:46:04.428424 systemd[1]: var-lib-kubelet-pods-e0b41b8f\x2d0f59\x2d42bb\x2dbf26\x2da6f229d75d87-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds4rw9.mount: Deactivated successfully. Jul 6 23:46:04.428475 systemd[1]: var-lib-kubelet-pods-e0b41b8f\x2d0f59\x2d42bb\x2dbf26\x2da6f229d75d87-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
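[editor's note] The run-netns-cni\x2d....mount and var-lib-kubelet-pods-....mount units that systemd deactivates above are mount paths encoded with systemd's unit-name escaping: "-" stands for "/", and bytes that are not allowed (including a literal "-" and "~") are written as \xHH. A small sketch of the reverse transformation, assuming the same behaviour as `systemd-escape --unescape --path` without being a replacement for it:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath turns a .mount unit name back into the filesystem path it
// encodes: "-" -> "/", "\x2d" -> "-", "\x7e" -> "~", and so on.
func unescapeUnitPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			b.WriteByte('/')
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3
				continue
			}
			b.WriteByte(name[i])
		default:
			b.WriteByte(name[i])
		}
	}
	return b.String()
}

func main() {
	fmt.Println(unescapeUnitPath(`run-netns-cni\x2d137bc667\x2db2c9\x2dcf8b\x2d7eb1\x2d614205af8c66.mount`))
	// -> /run/netns/cni-137bc667-b2c9-cf8b-7eb1-614205af8c66
}
```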
Jul 6 23:46:04.437481 systemd-networkd[2683]: calic9132eb2c04: Link UP Jul 6 23:46:04.437660 systemd-networkd[2683]: calic9132eb2c04: Gained carrier Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.269 [INFO][6965] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.279 [INFO][6965] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--q9rm8-eth0 coredns-7c65d6cfc9- kube-system 149827c6-4bc8-4f81-a88b-f0d5697ac2f9 814 0 2025-07-06 23:45:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230.2.1-a-784d2181dd coredns-7c65d6cfc9-q9rm8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic9132eb2c04 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q9rm8" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--q9rm8-" Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.279 [INFO][6965] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q9rm8" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--q9rm8-eth0" Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.307 [INFO][7106] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" HandleID="k8s-pod-network.f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" Workload="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--q9rm8-eth0" Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.308 [INFO][7106] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" HandleID="k8s-pod-network.f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" Workload="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--q9rm8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e1640), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230.2.1-a-784d2181dd", "pod":"coredns-7c65d6cfc9-q9rm8", "timestamp":"2025-07-06 23:46:04.307941737 +0000 UTC"}, Hostname:"ci-4230.2.1-a-784d2181dd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.308 [INFO][7106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.333 [INFO][7106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
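[editor's note] Note the timing: this coredns-q9rm8 request logs "About to acquire host-wide IPAM lock" at 04.308 but "Acquired" only at 04.333, immediately after the calico-apiserver request above logs "Released host-wide IPAM lock" at 04.333, and the later requests in this log wait in turn. In other words, the parallel CNI ADDs serialize their address assignments on that lock. A minimal in-process sketch of the effect, assuming a plain mutex and a fake nextIP counter (Calico's real lock is host-wide across processes and its allocator works on affinity blocks, not a counter):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		ipamLock sync.Mutex
		next     = 129 // first free ordinal in 192.168.9.128/26 per the log
		wg       sync.WaitGroup
	)

	pods := []string{
		"calico-apiserver-7f699b944c-lr8j7",
		"coredns-7c65d6cfc9-q9rm8",
		"calico-kube-controllers-7b4c6f9887-2rpdn",
	}
	for _, pod := range pods {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			ipamLock.Lock() // "Acquired host-wide IPAM lock."
			ip := fmt.Sprintf("192.168.9.%d/26", next)
			next++
			ipamLock.Unlock() // "Released host-wide IPAM lock."
			fmt.Printf("%s -> %s\n", pod, ip)
		}(pod)
	}
	wg.Wait()
}
```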
Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.333 [INFO][7106] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.2.1-a-784d2181dd' Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.417 [INFO][7106] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.420 [INFO][7106] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.425 [INFO][7106] ipam/ipam.go 511: Trying affinity for 192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.426 [INFO][7106] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.427 [INFO][7106] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.428 [INFO][7106] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.428 [INFO][7106] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43 Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.431 [INFO][7106] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.434 [INFO][7106] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.130/26] block=192.168.9.128/26 handle="k8s-pod-network.f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.434 [INFO][7106] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.130/26] handle="k8s-pod-network.f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.434 [INFO][7106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
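[editor's note] Both assignments so far (192.168.9.129 and 192.168.9.130) come out of the same affinity block, 192.168.9.128/26, held by host ci-4230.2.1-a-784d2181dd, and each endpoint is then written with a /32 IPNetwork. A quick check with net/netip that the claimed addresses sit inside that 64-address block; 192.168.9.192 is included only as an out-of-block counterexample:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.9.128/26")
	fmt.Println("addresses in block:", 1<<(32-block.Bits())) // 64 (ordinals 0-63)

	for _, s := range []string{"192.168.9.129", "192.168.9.130", "192.168.9.192"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}
}
```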
Jul 6 23:46:04.446897 containerd[2777]: 2025-07-06 23:46:04.434 [INFO][7106] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.130/26] IPv6=[] ContainerID="f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" HandleID="k8s-pod-network.f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" Workload="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--q9rm8-eth0" Jul 6 23:46:04.447352 containerd[2777]: 2025-07-06 23:46:04.436 [INFO][6965] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q9rm8" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--q9rm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--q9rm8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"149827c6-4bc8-4f81-a88b-f0d5697ac2f9", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.2.1-a-784d2181dd", ContainerID:"", Pod:"coredns-7c65d6cfc9-q9rm8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9132eb2c04", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:46:04.447352 containerd[2777]: 2025-07-06 23:46:04.436 [INFO][6965] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.130/32] ContainerID="f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q9rm8" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--q9rm8-eth0" Jul 6 23:46:04.447352 containerd[2777]: 2025-07-06 23:46:04.436 [INFO][6965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9132eb2c04 ContainerID="f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q9rm8" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--q9rm8-eth0" Jul 6 23:46:04.447352 containerd[2777]: 2025-07-06 23:46:04.437 [INFO][6965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q9rm8" 
WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--q9rm8-eth0" Jul 6 23:46:04.447352 containerd[2777]: 2025-07-06 23:46:04.437 [INFO][6965] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q9rm8" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--q9rm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--q9rm8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"149827c6-4bc8-4f81-a88b-f0d5697ac2f9", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.2.1-a-784d2181dd", ContainerID:"f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43", Pod:"coredns-7c65d6cfc9-q9rm8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9132eb2c04", MAC:"de:4d:7b:f0:4c:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:46:04.447352 containerd[2777]: 2025-07-06 23:46:04.445 [INFO][6965] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q9rm8" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--q9rm8-eth0" Jul 6 23:46:04.461697 containerd[2777]: time="2025-07-06T23:46:04.461635621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:46:04.461731 containerd[2777]: time="2025-07-06T23:46:04.461690701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:46:04.461731 containerd[2777]: time="2025-07-06T23:46:04.461704981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:04.461795 containerd[2777]: time="2025-07-06T23:46:04.461778861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:04.488999 systemd[1]: Started cri-containerd-f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43.scope - libcontainer container f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43. Jul 6 23:46:04.512626 containerd[2777]: time="2025-07-06T23:46:04.512596142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q9rm8,Uid:149827c6-4bc8-4f81-a88b-f0d5697ac2f9,Namespace:kube-system,Attempt:3,} returns sandbox id \"f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43\"" Jul 6 23:46:04.514472 containerd[2777]: time="2025-07-06T23:46:04.514449302Z" level=info msg="CreateContainer within sandbox \"f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:46:04.521389 containerd[2777]: time="2025-07-06T23:46:04.521363422Z" level=info msg="CreateContainer within sandbox \"f3b4393597dea35002cfa2fe9c2e21c8d036687dde81e4d966c0e93750b22d43\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee647f7c2f3b0e95f8e332765c9f843dc78fbe740c2eddd064f992480981a214\"" Jul 6 23:46:04.521737 containerd[2777]: time="2025-07-06T23:46:04.521668822Z" level=info msg="StartContainer for \"ee647f7c2f3b0e95f8e332765c9f843dc78fbe740c2eddd064f992480981a214\"" Jul 6 23:46:04.537581 systemd-networkd[2683]: calia7ffb28c362: Link UP Jul 6 23:46:04.537775 systemd-networkd[2683]: calia7ffb28c362: Gained carrier Jul 6 23:46:04.544998 systemd[1]: Started cri-containerd-ee647f7c2f3b0e95f8e332765c9f843dc78fbe740c2eddd064f992480981a214.scope - libcontainer container ee647f7c2f3b0e95f8e332765c9f843dc78fbe740c2eddd064f992480981a214. Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.266 [INFO][6945] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.277 [INFO][6945] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.2.1--a--784d2181dd-k8s-calico--kube--controllers--7b4c6f9887--2rpdn-eth0 calico-kube-controllers-7b4c6f9887- calico-system 0269473e-9b77-4370-b872-0e13aff1fbe2 805 0 2025-07-06 23:45:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b4c6f9887 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4230.2.1-a-784d2181dd calico-kube-controllers-7b4c6f9887-2rpdn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia7ffb28c362 [] [] }} ContainerID="d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" Namespace="calico-system" Pod="calico-kube-controllers-7b4c6f9887-2rpdn" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--kube--controllers--7b4c6f9887--2rpdn-" Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.277 [INFO][6945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" Namespace="calico-system" Pod="calico-kube-controllers-7b4c6f9887-2rpdn" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--kube--controllers--7b4c6f9887--2rpdn-eth0" Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.307 [INFO][7096] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" HandleID="k8s-pod-network.d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" Workload="ci--4230.2.1--a--784d2181dd-k8s-calico--kube--controllers--7b4c6f9887--2rpdn-eth0" Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.308 [INFO][7096] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" HandleID="k8s-pod-network.d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" Workload="ci--4230.2.1--a--784d2181dd-k8s-calico--kube--controllers--7b4c6f9887--2rpdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400043d8d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230.2.1-a-784d2181dd", "pod":"calico-kube-controllers-7b4c6f9887-2rpdn", "timestamp":"2025-07-06 23:46:04.307938977 +0000 UTC"}, Hostname:"ci-4230.2.1-a-784d2181dd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.308 [INFO][7096] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.434 [INFO][7096] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.434 [INFO][7096] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.2.1-a-784d2181dd' Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.518 [INFO][7096] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.521 [INFO][7096] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.525 [INFO][7096] ipam/ipam.go 511: Trying affinity for 192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.526 [INFO][7096] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.527 [INFO][7096] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.527 [INFO][7096] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.528 [INFO][7096] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5 Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.531 [INFO][7096] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.534 [INFO][7096] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.131/26] block=192.168.9.128/26 
handle="k8s-pod-network.d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.534 [INFO][7096] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.131/26] handle="k8s-pod-network.d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.534 [INFO][7096] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:46:04.546454 containerd[2777]: 2025-07-06 23:46:04.534 [INFO][7096] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.131/26] IPv6=[] ContainerID="d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" HandleID="k8s-pod-network.d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" Workload="ci--4230.2.1--a--784d2181dd-k8s-calico--kube--controllers--7b4c6f9887--2rpdn-eth0" Jul 6 23:46:04.546945 containerd[2777]: 2025-07-06 23:46:04.536 [INFO][6945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" Namespace="calico-system" Pod="calico-kube-controllers-7b4c6f9887-2rpdn" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--kube--controllers--7b4c6f9887--2rpdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.2.1--a--784d2181dd-k8s-calico--kube--controllers--7b4c6f9887--2rpdn-eth0", GenerateName:"calico-kube-controllers-7b4c6f9887-", Namespace:"calico-system", SelfLink:"", UID:"0269473e-9b77-4370-b872-0e13aff1fbe2", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b4c6f9887", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.2.1-a-784d2181dd", ContainerID:"", Pod:"calico-kube-controllers-7b4c6f9887-2rpdn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia7ffb28c362", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:46:04.546945 containerd[2777]: 2025-07-06 23:46:04.536 [INFO][6945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.131/32] ContainerID="d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" Namespace="calico-system" Pod="calico-kube-controllers-7b4c6f9887-2rpdn" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--kube--controllers--7b4c6f9887--2rpdn-eth0" Jul 6 23:46:04.546945 containerd[2777]: 2025-07-06 23:46:04.536 [INFO][6945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7ffb28c362 ContainerID="d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" Namespace="calico-system" 
Pod="calico-kube-controllers-7b4c6f9887-2rpdn" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--kube--controllers--7b4c6f9887--2rpdn-eth0" Jul 6 23:46:04.546945 containerd[2777]: 2025-07-06 23:46:04.537 [INFO][6945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" Namespace="calico-system" Pod="calico-kube-controllers-7b4c6f9887-2rpdn" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--kube--controllers--7b4c6f9887--2rpdn-eth0" Jul 6 23:46:04.546945 containerd[2777]: 2025-07-06 23:46:04.538 [INFO][6945] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" Namespace="calico-system" Pod="calico-kube-controllers-7b4c6f9887-2rpdn" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--kube--controllers--7b4c6f9887--2rpdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.2.1--a--784d2181dd-k8s-calico--kube--controllers--7b4c6f9887--2rpdn-eth0", GenerateName:"calico-kube-controllers-7b4c6f9887-", Namespace:"calico-system", SelfLink:"", UID:"0269473e-9b77-4370-b872-0e13aff1fbe2", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b4c6f9887", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.2.1-a-784d2181dd", ContainerID:"d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5", Pod:"calico-kube-controllers-7b4c6f9887-2rpdn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia7ffb28c362", MAC:"2a:a1:89:29:3e:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:46:04.546945 containerd[2777]: 2025-07-06 23:46:04.545 [INFO][6945] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5" Namespace="calico-system" Pod="calico-kube-controllers-7b4c6f9887-2rpdn" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--kube--controllers--7b4c6f9887--2rpdn-eth0" Jul 6 23:46:04.558960 containerd[2777]: time="2025-07-06T23:46:04.558893423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:46:04.559003 containerd[2777]: time="2025-07-06T23:46:04.558957703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:46:04.559003 containerd[2777]: time="2025-07-06T23:46:04.558970223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:04.559068 containerd[2777]: time="2025-07-06T23:46:04.559051863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:04.563192 containerd[2777]: time="2025-07-06T23:46:04.563161743Z" level=info msg="StartContainer for \"ee647f7c2f3b0e95f8e332765c9f843dc78fbe740c2eddd064f992480981a214\" returns successfully" Jul 6 23:46:04.590003 systemd[1]: Started cri-containerd-d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5.scope - libcontainer container d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5. Jul 6 23:46:04.614597 containerd[2777]: time="2025-07-06T23:46:04.614564824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4c6f9887-2rpdn,Uid:0269473e-9b77-4370-b872-0e13aff1fbe2,Namespace:calico-system,Attempt:3,} returns sandbox id \"d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5\"" Jul 6 23:46:04.638057 systemd-networkd[2683]: cali1f99fcf441b: Link UP Jul 6 23:46:04.638314 systemd-networkd[2683]: cali1f99fcf441b: Gained carrier Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.261 [INFO][6913] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.275 [INFO][6913] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--r6sm8-eth0 coredns-7c65d6cfc9- kube-system 994a699b-ff00-478f-a192-50e32c9ff313 812 0 2025-07-06 23:45:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230.2.1-a-784d2181dd coredns-7c65d6cfc9-r6sm8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1f99fcf441b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r6sm8" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--r6sm8-" Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.275 [INFO][6913] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r6sm8" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--r6sm8-eth0" Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.307 [INFO][7082] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" HandleID="k8s-pod-network.d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" Workload="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--r6sm8-eth0" Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.308 [INFO][7082] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" HandleID="k8s-pod-network.d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" Workload="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--r6sm8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000706a00), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230.2.1-a-784d2181dd", 
"pod":"coredns-7c65d6cfc9-r6sm8", "timestamp":"2025-07-06 23:46:04.307938377 +0000 UTC"}, Hostname:"ci-4230.2.1-a-784d2181dd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.308 [INFO][7082] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.534 [INFO][7082] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.534 [INFO][7082] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.2.1-a-784d2181dd' Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.618 [INFO][7082] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.621 [INFO][7082] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.625 [INFO][7082] ipam/ipam.go 511: Trying affinity for 192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.626 [INFO][7082] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.628 [INFO][7082] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.628 [INFO][7082] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.629 [INFO][7082] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45 Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.631 [INFO][7082] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.635 [INFO][7082] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.132/26] block=192.168.9.128/26 handle="k8s-pod-network.d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.635 [INFO][7082] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.132/26] handle="k8s-pod-network.d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.635 [INFO][7082] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:46:04.646984 containerd[2777]: 2025-07-06 23:46:04.635 [INFO][7082] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.132/26] IPv6=[] ContainerID="d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" HandleID="k8s-pod-network.d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" Workload="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--r6sm8-eth0" Jul 6 23:46:04.647436 containerd[2777]: 2025-07-06 23:46:04.636 [INFO][6913] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r6sm8" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--r6sm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--r6sm8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"994a699b-ff00-478f-a192-50e32c9ff313", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.2.1-a-784d2181dd", ContainerID:"", Pod:"coredns-7c65d6cfc9-r6sm8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f99fcf441b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:46:04.647436 containerd[2777]: 2025-07-06 23:46:04.636 [INFO][6913] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.132/32] ContainerID="d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r6sm8" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--r6sm8-eth0" Jul 6 23:46:04.647436 containerd[2777]: 2025-07-06 23:46:04.636 [INFO][6913] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f99fcf441b ContainerID="d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r6sm8" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--r6sm8-eth0" Jul 6 23:46:04.647436 containerd[2777]: 2025-07-06 23:46:04.639 [INFO][6913] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r6sm8" 
WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--r6sm8-eth0" Jul 6 23:46:04.647436 containerd[2777]: 2025-07-06 23:46:04.639 [INFO][6913] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r6sm8" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--r6sm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--r6sm8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"994a699b-ff00-478f-a192-50e32c9ff313", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.2.1-a-784d2181dd", ContainerID:"d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45", Pod:"coredns-7c65d6cfc9-r6sm8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f99fcf441b", MAC:"72:bf:29:48:31:50", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:46:04.647436 containerd[2777]: 2025-07-06 23:46:04.645 [INFO][6913] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r6sm8" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-coredns--7c65d6cfc9--r6sm8-eth0" Jul 6 23:46:04.661221 containerd[2777]: time="2025-07-06T23:46:04.659517265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:46:04.661221 containerd[2777]: time="2025-07-06T23:46:04.659573945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:46:04.661221 containerd[2777]: time="2025-07-06T23:46:04.659584745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:04.661221 containerd[2777]: time="2025-07-06T23:46:04.659660425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:04.684045 systemd[1]: Started cri-containerd-d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45.scope - libcontainer container d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45. Jul 6 23:46:04.711735 containerd[2777]: time="2025-07-06T23:46:04.711698107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r6sm8,Uid:994a699b-ff00-478f-a192-50e32c9ff313,Namespace:kube-system,Attempt:3,} returns sandbox id \"d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45\"" Jul 6 23:46:04.713886 containerd[2777]: time="2025-07-06T23:46:04.713854427Z" level=info msg="CreateContainer within sandbox \"d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:46:04.721232 containerd[2777]: time="2025-07-06T23:46:04.721205507Z" level=info msg="CreateContainer within sandbox \"d41439823ca6a003fa62b34e63e15275ea5f6ee717d1b0db016a74ac6de3dc45\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6bf8e543720cedf7c71e6655d78eb4ae9994c2c1413fb3fd6ad81a580f0af728\"" Jul 6 23:46:04.721569 containerd[2777]: time="2025-07-06T23:46:04.721551267Z" level=info msg="StartContainer for \"6bf8e543720cedf7c71e6655d78eb4ae9994c2c1413fb3fd6ad81a580f0af728\"" Jul 6 23:46:04.741026 systemd-networkd[2683]: cali128c906496e: Link UP Jul 6 23:46:04.741279 systemd-networkd[2683]: cali128c906496e: Gained carrier Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.270 [INFO][6990] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.280 [INFO][6990] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.2.1--a--784d2181dd-k8s-goldmane--58fd7646b9--fm57w-eth0 goldmane-58fd7646b9- calico-system 2789cf2c-eaba-4e19-aa42-51b1fb03b3f4 815 0 2025-07-06 23:45:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4230.2.1-a-784d2181dd goldmane-58fd7646b9-fm57w eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali128c906496e [] [] }} ContainerID="aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" Namespace="calico-system" Pod="goldmane-58fd7646b9-fm57w" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-goldmane--58fd7646b9--fm57w-" Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.280 [INFO][6990] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" Namespace="calico-system" Pod="goldmane-58fd7646b9-fm57w" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-goldmane--58fd7646b9--fm57w-eth0" Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.307 [INFO][7110] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" HandleID="k8s-pod-network.aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" Workload="ci--4230.2.1--a--784d2181dd-k8s-goldmane--58fd7646b9--fm57w-eth0" Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.308 [INFO][7110] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" 
HandleID="k8s-pod-network.aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" Workload="ci--4230.2.1--a--784d2181dd-k8s-goldmane--58fd7646b9--fm57w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005187b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230.2.1-a-784d2181dd", "pod":"goldmane-58fd7646b9-fm57w", "timestamp":"2025-07-06 23:46:04.307941697 +0000 UTC"}, Hostname:"ci-4230.2.1-a-784d2181dd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.308 [INFO][7110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.635 [INFO][7110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.635 [INFO][7110] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.2.1-a-784d2181dd' Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.718 [INFO][7110] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.721 [INFO][7110] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.726 [INFO][7110] ipam/ipam.go 511: Trying affinity for 192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.727 [INFO][7110] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.730 [INFO][7110] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.730 [INFO][7110] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.731 [INFO][7110] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.734 [INFO][7110] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.737 [INFO][7110] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.133/26] block=192.168.9.128/26 handle="k8s-pod-network.aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.737 [INFO][7110] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.133/26] handle="k8s-pod-network.aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.737 [INFO][7110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:46:04.750195 containerd[2777]: 2025-07-06 23:46:04.738 [INFO][7110] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.133/26] IPv6=[] ContainerID="aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" HandleID="k8s-pod-network.aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" Workload="ci--4230.2.1--a--784d2181dd-k8s-goldmane--58fd7646b9--fm57w-eth0" Jul 6 23:46:04.750595 containerd[2777]: 2025-07-06 23:46:04.739 [INFO][6990] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" Namespace="calico-system" Pod="goldmane-58fd7646b9-fm57w" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-goldmane--58fd7646b9--fm57w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.2.1--a--784d2181dd-k8s-goldmane--58fd7646b9--fm57w-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2789cf2c-eaba-4e19-aa42-51b1fb03b3f4", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.2.1-a-784d2181dd", ContainerID:"", Pod:"goldmane-58fd7646b9-fm57w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali128c906496e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:46:04.750595 containerd[2777]: 2025-07-06 23:46:04.739 [INFO][6990] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.133/32] ContainerID="aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" Namespace="calico-system" Pod="goldmane-58fd7646b9-fm57w" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-goldmane--58fd7646b9--fm57w-eth0" Jul 6 23:46:04.750595 containerd[2777]: 2025-07-06 23:46:04.739 [INFO][6990] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali128c906496e ContainerID="aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" Namespace="calico-system" Pod="goldmane-58fd7646b9-fm57w" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-goldmane--58fd7646b9--fm57w-eth0" Jul 6 23:46:04.750595 containerd[2777]: 2025-07-06 23:46:04.741 [INFO][6990] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" Namespace="calico-system" Pod="goldmane-58fd7646b9-fm57w" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-goldmane--58fd7646b9--fm57w-eth0" Jul 6 23:46:04.750595 containerd[2777]: 2025-07-06 23:46:04.741 [INFO][6990] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" Namespace="calico-system" 
Pod="goldmane-58fd7646b9-fm57w" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-goldmane--58fd7646b9--fm57w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.2.1--a--784d2181dd-k8s-goldmane--58fd7646b9--fm57w-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2789cf2c-eaba-4e19-aa42-51b1fb03b3f4", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.2.1-a-784d2181dd", ContainerID:"aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac", Pod:"goldmane-58fd7646b9-fm57w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali128c906496e", MAC:"1a:0e:87:de:96:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:46:04.750595 containerd[2777]: 2025-07-06 23:46:04.748 [INFO][6990] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac" Namespace="calico-system" Pod="goldmane-58fd7646b9-fm57w" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-goldmane--58fd7646b9--fm57w-eth0" Jul 6 23:46:04.752002 systemd[1]: Started cri-containerd-6bf8e543720cedf7c71e6655d78eb4ae9994c2c1413fb3fd6ad81a580f0af728.scope - libcontainer container 6bf8e543720cedf7c71e6655d78eb4ae9994c2c1413fb3fd6ad81a580f0af728. Jul 6 23:46:04.764223 containerd[2777]: time="2025-07-06T23:46:04.764000508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:46:04.764223 containerd[2777]: time="2025-07-06T23:46:04.764052588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:46:04.764223 containerd[2777]: time="2025-07-06T23:46:04.764067708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:04.764437 containerd[2777]: time="2025-07-06T23:46:04.764371908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:04.769985 containerd[2777]: time="2025-07-06T23:46:04.769956308Z" level=info msg="StartContainer for \"6bf8e543720cedf7c71e6655d78eb4ae9994c2c1413fb3fd6ad81a580f0af728\" returns successfully" Jul 6 23:46:04.790019 systemd[1]: Started cri-containerd-aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac.scope - libcontainer container aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac. 
Jul 6 23:46:04.815991 containerd[2777]: time="2025-07-06T23:46:04.815938069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fm57w,Uid:2789cf2c-eaba-4e19-aa42-51b1fb03b3f4,Namespace:calico-system,Attempt:3,} returns sandbox id \"aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac\"" Jul 6 23:46:04.839560 systemd-networkd[2683]: calide264d98b63: Link UP Jul 6 23:46:04.839770 systemd-networkd[2683]: calide264d98b63: Gained carrier Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.270 [INFO][6973] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.280 [INFO][6973] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.2.1--a--784d2181dd-k8s-csi--node--driver--plknp-eth0 csi-node-driver- calico-system 8f8629de-298f-4ed4-a595-838d54ca032b 701 0 2025-07-06 23:45:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4230.2.1-a-784d2181dd csi-node-driver-plknp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calide264d98b63 [] [] }} ContainerID="5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" Namespace="calico-system" Pod="csi-node-driver-plknp" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-csi--node--driver--plknp-" Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.280 [INFO][6973] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" Namespace="calico-system" Pod="csi-node-driver-plknp" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-csi--node--driver--plknp-eth0" Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.307 [INFO][7108] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" HandleID="k8s-pod-network.5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" Workload="ci--4230.2.1--a--784d2181dd-k8s-csi--node--driver--plknp-eth0" Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.308 [INFO][7108] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" HandleID="k8s-pod-network.5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" Workload="ci--4230.2.1--a--784d2181dd-k8s-csi--node--driver--plknp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400043d3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230.2.1-a-784d2181dd", "pod":"csi-node-driver-plknp", "timestamp":"2025-07-06 23:46:04.307939217 +0000 UTC"}, Hostname:"ci-4230.2.1-a-784d2181dd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.308 [INFO][7108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.738 [INFO][7108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.738 [INFO][7108] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.2.1-a-784d2181dd' Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.818 [INFO][7108] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.822 [INFO][7108] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.826 [INFO][7108] ipam/ipam.go 511: Trying affinity for 192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.827 [INFO][7108] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.829 [INFO][7108] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.829 [INFO][7108] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.830 [INFO][7108] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.832 [INFO][7108] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.836 [INFO][7108] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.134/26] block=192.168.9.128/26 handle="k8s-pod-network.5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.836 [INFO][7108] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.134/26] handle="k8s-pod-network.5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.836 [INFO][7108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:46:04.848017 containerd[2777]: 2025-07-06 23:46:04.836 [INFO][7108] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.134/26] IPv6=[] ContainerID="5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" HandleID="k8s-pod-network.5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" Workload="ci--4230.2.1--a--784d2181dd-k8s-csi--node--driver--plknp-eth0" Jul 6 23:46:04.848416 containerd[2777]: 2025-07-06 23:46:04.837 [INFO][6973] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" Namespace="calico-system" Pod="csi-node-driver-plknp" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-csi--node--driver--plknp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.2.1--a--784d2181dd-k8s-csi--node--driver--plknp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f8629de-298f-4ed4-a595-838d54ca032b", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.2.1-a-784d2181dd", ContainerID:"", Pod:"csi-node-driver-plknp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calide264d98b63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:46:04.848416 containerd[2777]: 2025-07-06 23:46:04.837 [INFO][6973] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.134/32] ContainerID="5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" Namespace="calico-system" Pod="csi-node-driver-plknp" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-csi--node--driver--plknp-eth0" Jul 6 23:46:04.848416 containerd[2777]: 2025-07-06 23:46:04.837 [INFO][6973] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide264d98b63 ContainerID="5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" Namespace="calico-system" Pod="csi-node-driver-plknp" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-csi--node--driver--plknp-eth0" Jul 6 23:46:04.848416 containerd[2777]: 2025-07-06 23:46:04.839 [INFO][6973] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" Namespace="calico-system" Pod="csi-node-driver-plknp" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-csi--node--driver--plknp-eth0" Jul 6 23:46:04.848416 containerd[2777]: 2025-07-06 23:46:04.840 [INFO][6973] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" Namespace="calico-system" Pod="csi-node-driver-plknp" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-csi--node--driver--plknp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.2.1--a--784d2181dd-k8s-csi--node--driver--plknp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f8629de-298f-4ed4-a595-838d54ca032b", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.2.1-a-784d2181dd", ContainerID:"5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b", Pod:"csi-node-driver-plknp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calide264d98b63", MAC:"92:ed:b3:03:a9:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:46:04.848416 containerd[2777]: 2025-07-06 23:46:04.846 [INFO][6973] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b" Namespace="calico-system" Pod="csi-node-driver-plknp" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-csi--node--driver--plknp-eth0" Jul 6 23:46:04.860064 containerd[2777]: time="2025-07-06T23:46:04.860001270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:46:04.860064 containerd[2777]: time="2025-07-06T23:46:04.860056430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:46:04.860105 containerd[2777]: time="2025-07-06T23:46:04.860067950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:04.860161 containerd[2777]: time="2025-07-06T23:46:04.860143030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:04.885059 systemd[1]: Started cri-containerd-5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b.scope - libcontainer container 5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b. 
Jul 6 23:46:04.902042 containerd[2777]: time="2025-07-06T23:46:04.902015391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plknp,Uid:8f8629de-298f-4ed4-a595-838d54ca032b,Namespace:calico-system,Attempt:3,} returns sandbox id \"5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b\"" Jul 6 23:46:04.939838 systemd-networkd[2683]: calif182fcc4ab8: Link UP Jul 6 23:46:04.940042 systemd-networkd[2683]: calif182fcc4ab8: Gained carrier Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.261 [INFO][6915] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.275 [INFO][6915] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--vw6wk-eth0 calico-apiserver-7f699b944c- calico-apiserver 857da74c-a792-4f7a-9157-031cec6c674a 813 0 2025-07-06 23:45:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f699b944c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230.2.1-a-784d2181dd calico-apiserver-7f699b944c-vw6wk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif182fcc4ab8 [] [] }} ContainerID="36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" Namespace="calico-apiserver" Pod="calico-apiserver-7f699b944c-vw6wk" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--vw6wk-" Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.275 [INFO][6915] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" Namespace="calico-apiserver" Pod="calico-apiserver-7f699b944c-vw6wk" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--vw6wk-eth0" Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.307 [INFO][7078] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" HandleID="k8s-pod-network.36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" Workload="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--vw6wk-eth0" Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.308 [INFO][7078] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" HandleID="k8s-pod-network.36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" Workload="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--vw6wk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400045d480), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230.2.1-a-784d2181dd", "pod":"calico-apiserver-7f699b944c-vw6wk", "timestamp":"2025-07-06 23:46:04.307937377 +0000 UTC"}, Hostname:"ci-4230.2.1-a-784d2181dd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.308 [INFO][7078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.836 [INFO][7078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.836 [INFO][7078] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.2.1-a-784d2181dd' Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.919 [INFO][7078] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.922 [INFO][7078] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.926 [INFO][7078] ipam/ipam.go 511: Trying affinity for 192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.927 [INFO][7078] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.929 [INFO][7078] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.929 [INFO][7078] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.930 [INFO][7078] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8 Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.932 [INFO][7078] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.936 [INFO][7078] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.135/26] block=192.168.9.128/26 handle="k8s-pod-network.36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.936 [INFO][7078] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.135/26] handle="k8s-pod-network.36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.936 [INFO][7078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:46:04.951928 containerd[2777]: 2025-07-06 23:46:04.936 [INFO][7078] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.135/26] IPv6=[] ContainerID="36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" HandleID="k8s-pod-network.36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" Workload="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--vw6wk-eth0" Jul 6 23:46:04.952322 containerd[2777]: 2025-07-06 23:46:04.938 [INFO][6915] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" Namespace="calico-apiserver" Pod="calico-apiserver-7f699b944c-vw6wk" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--vw6wk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--vw6wk-eth0", GenerateName:"calico-apiserver-7f699b944c-", Namespace:"calico-apiserver", SelfLink:"", UID:"857da74c-a792-4f7a-9157-031cec6c674a", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 45, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f699b944c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.2.1-a-784d2181dd", ContainerID:"", Pod:"calico-apiserver-7f699b944c-vw6wk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif182fcc4ab8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:46:04.952322 containerd[2777]: 2025-07-06 23:46:04.938 [INFO][6915] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.135/32] ContainerID="36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" Namespace="calico-apiserver" Pod="calico-apiserver-7f699b944c-vw6wk" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--vw6wk-eth0" Jul 6 23:46:04.952322 containerd[2777]: 2025-07-06 23:46:04.938 [INFO][6915] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif182fcc4ab8 ContainerID="36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" Namespace="calico-apiserver" Pod="calico-apiserver-7f699b944c-vw6wk" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--vw6wk-eth0" Jul 6 23:46:04.952322 containerd[2777]: 2025-07-06 23:46:04.941 [INFO][6915] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" Namespace="calico-apiserver" Pod="calico-apiserver-7f699b944c-vw6wk" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--vw6wk-eth0" Jul 6 23:46:04.952322 containerd[2777]: 2025-07-06 23:46:04.941 [INFO][6915] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" Namespace="calico-apiserver" Pod="calico-apiserver-7f699b944c-vw6wk" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--vw6wk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--vw6wk-eth0", GenerateName:"calico-apiserver-7f699b944c-", Namespace:"calico-apiserver", SelfLink:"", UID:"857da74c-a792-4f7a-9157-031cec6c674a", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 45, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f699b944c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.2.1-a-784d2181dd", ContainerID:"36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8", Pod:"calico-apiserver-7f699b944c-vw6wk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif182fcc4ab8", MAC:"aa:9e:d7:52:c0:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:46:04.952322 containerd[2777]: 2025-07-06 23:46:04.950 [INFO][6915] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8" Namespace="calico-apiserver" Pod="calico-apiserver-7f699b944c-vw6wk" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-calico--apiserver--7f699b944c--vw6wk-eth0" Jul 6 23:46:04.964493 containerd[2777]: time="2025-07-06T23:46:04.964437712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:46:04.964493 containerd[2777]: time="2025-07-06T23:46:04.964489472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:46:04.964549 containerd[2777]: time="2025-07-06T23:46:04.964500152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:04.964592 containerd[2777]: time="2025-07-06T23:46:04.964572912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:04.994993 systemd[1]: Started cri-containerd-36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8.scope - libcontainer container 36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8. 
Jul 6 23:46:05.018924 containerd[2777]: time="2025-07-06T23:46:05.018896954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f699b944c-vw6wk,Uid:857da74c-a792-4f7a-9157-031cec6c674a,Namespace:calico-apiserver,Attempt:3,} returns sandbox id \"36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8\"" Jul 6 23:46:05.168440 systemd[1]: Removed slice kubepods-besteffort-pode0b41b8f_0f59_42bb_bf26_a6f229d75d87.slice - libcontainer container kubepods-besteffort-pode0b41b8f_0f59_42bb_bf26_a6f229d75d87.slice. Jul 6 23:46:05.214455 containerd[2777]: time="2025-07-06T23:46:05.214419918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:05.214545 containerd[2777]: time="2025-07-06T23:46:05.214473078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 6 23:46:05.215259 containerd[2777]: time="2025-07-06T23:46:05.215235198Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:05.217258 containerd[2777]: time="2025-07-06T23:46:05.217231878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:05.218024 containerd[2777]: time="2025-07-06T23:46:05.217990878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 807.026858ms" Jul 6 23:46:05.218051 containerd[2777]: time="2025-07-06T23:46:05.218028998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 6 23:46:05.218854 containerd[2777]: time="2025-07-06T23:46:05.218835478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:46:05.219666 containerd[2777]: time="2025-07-06T23:46:05.219646758Z" level=info msg="CreateContainer within sandbox \"5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:46:05.237812 containerd[2777]: time="2025-07-06T23:46:05.237781758Z" level=info msg="CreateContainer within sandbox \"5a2ebc4283989617e9c002744177fab7c511a4ffb2d4e277b43d7350fb40d363\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"151d50c390d83ea1d5a8a7939f488f01266d6f9b441386422945b353d73f5060\"" Jul 6 23:46:05.238157 containerd[2777]: time="2025-07-06T23:46:05.238116398Z" level=info msg="StartContainer for \"151d50c390d83ea1d5a8a7939f488f01266d6f9b441386422945b353d73f5060\"" Jul 6 23:46:05.259629 kubelet[4301]: I0706 23:46:05.259576 4301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:46:05.272052 kubelet[4301]: I0706 23:46:05.262980 4301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-r6sm8" podStartSLOduration=21.262966199 podStartE2EDuration="21.262966199s" 
podCreationTimestamp="2025-07-06 23:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:05.262548919 +0000 UTC m=+28.169296180" watchObservedRunningTime="2025-07-06 23:46:05.262966199 +0000 UTC m=+28.169713460" Jul 6 23:46:05.272052 kubelet[4301]: I0706 23:46:05.269293 4301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-q9rm8" podStartSLOduration=21.269280079 podStartE2EDuration="21.269280079s" podCreationTimestamp="2025-07-06 23:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:05.269039399 +0000 UTC m=+28.175786700" watchObservedRunningTime="2025-07-06 23:46:05.269280079 +0000 UTC m=+28.176027340" Jul 6 23:46:05.272090 systemd[1]: Started cri-containerd-151d50c390d83ea1d5a8a7939f488f01266d6f9b441386422945b353d73f5060.scope - libcontainer container 151d50c390d83ea1d5a8a7939f488f01266d6f9b441386422945b353d73f5060. Jul 6 23:46:05.298713 containerd[2777]: time="2025-07-06T23:46:05.298680199Z" level=info msg="StartContainer for \"151d50c390d83ea1d5a8a7939f488f01266d6f9b441386422945b353d73f5060\" returns successfully" Jul 6 23:46:05.302288 systemd[1]: Created slice kubepods-besteffort-podf2bd88f0_cd69_4e1e_8995_ac748d1b04d4.slice - libcontainer container kubepods-besteffort-podf2bd88f0_cd69_4e1e_8995_ac748d1b04d4.slice. Jul 6 23:46:05.324704 kubelet[4301]: I0706 23:46:05.324678 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh8jn\" (UniqueName: \"kubernetes.io/projected/f2bd88f0-cd69-4e1e-8995-ac748d1b04d4-kube-api-access-sh8jn\") pod \"whisker-df9d54f6-gsskt\" (UID: \"f2bd88f0-cd69-4e1e-8995-ac748d1b04d4\") " pod="calico-system/whisker-df9d54f6-gsskt" Jul 6 23:46:05.324734 kubelet[4301]: I0706 23:46:05.324714 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2bd88f0-cd69-4e1e-8995-ac748d1b04d4-whisker-backend-key-pair\") pod \"whisker-df9d54f6-gsskt\" (UID: \"f2bd88f0-cd69-4e1e-8995-ac748d1b04d4\") " pod="calico-system/whisker-df9d54f6-gsskt" Jul 6 23:46:05.324875 kubelet[4301]: I0706 23:46:05.324846 4301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2bd88f0-cd69-4e1e-8995-ac748d1b04d4-whisker-ca-bundle\") pod \"whisker-df9d54f6-gsskt\" (UID: \"f2bd88f0-cd69-4e1e-8995-ac748d1b04d4\") " pod="calico-system/whisker-df9d54f6-gsskt" Jul 6 23:46:05.501989 systemd-networkd[2683]: calic9132eb2c04: Gained IPv6LL Jul 6 23:46:05.605249 containerd[2777]: time="2025-07-06T23:46:05.605157446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-df9d54f6-gsskt,Uid:f2bd88f0-cd69-4e1e-8995-ac748d1b04d4,Namespace:calico-system,Attempt:0,}" Jul 6 23:46:05.684779 systemd-networkd[2683]: cali0485817422d: Link UP Jul 6 23:46:05.685237 systemd-networkd[2683]: cali0485817422d: Gained carrier Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.626 [INFO][7889] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.637 [INFO][7889] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4230.2.1--a--784d2181dd-k8s-whisker--df9d54f6--gsskt-eth0 whisker-df9d54f6- calico-system f2bd88f0-cd69-4e1e-8995-ac748d1b04d4 960 0 2025-07-06 23:46:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:df9d54f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4230.2.1-a-784d2181dd whisker-df9d54f6-gsskt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0485817422d [] [] }} ContainerID="c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" Namespace="calico-system" Pod="whisker-df9d54f6-gsskt" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-whisker--df9d54f6--gsskt-" Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.637 [INFO][7889] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" Namespace="calico-system" Pod="whisker-df9d54f6-gsskt" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-whisker--df9d54f6--gsskt-eth0" Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.657 [INFO][7915] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" HandleID="k8s-pod-network.c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" Workload="ci--4230.2.1--a--784d2181dd-k8s-whisker--df9d54f6--gsskt-eth0" Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.657 [INFO][7915] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" HandleID="k8s-pod-network.c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" Workload="ci--4230.2.1--a--784d2181dd-k8s-whisker--df9d54f6--gsskt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ca00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230.2.1-a-784d2181dd", "pod":"whisker-df9d54f6-gsskt", "timestamp":"2025-07-06 23:46:05.657045967 +0000 UTC"}, Hostname:"ci-4230.2.1-a-784d2181dd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.657 [INFO][7915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.657 [INFO][7915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.657 [INFO][7915] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.2.1-a-784d2181dd' Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.664 [INFO][7915] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.667 [INFO][7915] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.670 [INFO][7915] ipam/ipam.go 511: Trying affinity for 192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.672 [INFO][7915] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.673 [INFO][7915] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.128/26 host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.673 [INFO][7915] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.9.128/26 handle="k8s-pod-network.c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.674 [INFO][7915] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3 Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.677 [INFO][7915] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.9.128/26 handle="k8s-pod-network.c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.681 [INFO][7915] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.9.136/26] block=192.168.9.128/26 handle="k8s-pod-network.c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.681 [INFO][7915] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.136/26] handle="k8s-pod-network.c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" host="ci-4230.2.1-a-784d2181dd" Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.681 [INFO][7915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:46:05.692055 containerd[2777]: 2025-07-06 23:46:05.681 [INFO][7915] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.136/26] IPv6=[] ContainerID="c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" HandleID="k8s-pod-network.c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" Workload="ci--4230.2.1--a--784d2181dd-k8s-whisker--df9d54f6--gsskt-eth0" Jul 6 23:46:05.692572 containerd[2777]: 2025-07-06 23:46:05.682 [INFO][7889] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" Namespace="calico-system" Pod="whisker-df9d54f6-gsskt" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-whisker--df9d54f6--gsskt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.2.1--a--784d2181dd-k8s-whisker--df9d54f6--gsskt-eth0", GenerateName:"whisker-df9d54f6-", Namespace:"calico-system", SelfLink:"", UID:"f2bd88f0-cd69-4e1e-8995-ac748d1b04d4", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 46, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"df9d54f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.2.1-a-784d2181dd", ContainerID:"", Pod:"whisker-df9d54f6-gsskt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.9.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0485817422d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:46:05.692572 containerd[2777]: 2025-07-06 23:46:05.682 [INFO][7889] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.136/32] ContainerID="c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" Namespace="calico-system" Pod="whisker-df9d54f6-gsskt" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-whisker--df9d54f6--gsskt-eth0" Jul 6 23:46:05.692572 containerd[2777]: 2025-07-06 23:46:05.682 [INFO][7889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0485817422d ContainerID="c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" Namespace="calico-system" Pod="whisker-df9d54f6-gsskt" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-whisker--df9d54f6--gsskt-eth0" Jul 6 23:46:05.692572 containerd[2777]: 2025-07-06 23:46:05.685 [INFO][7889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" Namespace="calico-system" Pod="whisker-df9d54f6-gsskt" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-whisker--df9d54f6--gsskt-eth0" Jul 6 23:46:05.692572 containerd[2777]: 2025-07-06 23:46:05.685 [INFO][7889] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" Namespace="calico-system" Pod="whisker-df9d54f6-gsskt" 
WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-whisker--df9d54f6--gsskt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.2.1--a--784d2181dd-k8s-whisker--df9d54f6--gsskt-eth0", GenerateName:"whisker-df9d54f6-", Namespace:"calico-system", SelfLink:"", UID:"f2bd88f0-cd69-4e1e-8995-ac748d1b04d4", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 46, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"df9d54f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.2.1-a-784d2181dd", ContainerID:"c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3", Pod:"whisker-df9d54f6-gsskt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.9.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0485817422d", MAC:"aa:94:03:b0:0a:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:46:05.692572 containerd[2777]: 2025-07-06 23:46:05.690 [INFO][7889] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3" Namespace="calico-system" Pod="whisker-df9d54f6-gsskt" WorkloadEndpoint="ci--4230.2.1--a--784d2181dd-k8s-whisker--df9d54f6--gsskt-eth0" Jul 6 23:46:05.704495 containerd[2777]: time="2025-07-06T23:46:05.704421968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:46:05.704495 containerd[2777]: time="2025-07-06T23:46:05.704483488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:46:05.704556 containerd[2777]: time="2025-07-06T23:46:05.704494368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:05.704591 containerd[2777]: time="2025-07-06T23:46:05.704573008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:05.727049 systemd[1]: Started cri-containerd-c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3.scope - libcontainer container c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3. 
Jul 6 23:46:05.751242 containerd[2777]: time="2025-07-06T23:46:05.751211809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-df9d54f6-gsskt,Uid:f2bd88f0-cd69-4e1e-8995-ac748d1b04d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3\"" Jul 6 23:46:05.886094 systemd-networkd[2683]: cali1f99fcf441b: Gained IPv6LL Jul 6 23:46:06.076570 containerd[2777]: time="2025-07-06T23:46:06.076528576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:06.076669 containerd[2777]: time="2025-07-06T23:46:06.076533656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 6 23:46:06.077305 containerd[2777]: time="2025-07-06T23:46:06.077288776Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:06.079452 containerd[2777]: time="2025-07-06T23:46:06.079429016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:06.080001 containerd[2777]: time="2025-07-06T23:46:06.079971816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 861.106978ms" Jul 6 23:46:06.080078 containerd[2777]: time="2025-07-06T23:46:06.080005376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 6 23:46:06.080736 containerd[2777]: time="2025-07-06T23:46:06.080715896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:46:06.085464 containerd[2777]: time="2025-07-06T23:46:06.085437856Z" level=info msg="CreateContainer within sandbox \"d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 6 23:46:06.090379 containerd[2777]: time="2025-07-06T23:46:06.090350896Z" level=info msg="CreateContainer within sandbox \"d93ad09e79916c1308f082b440e7a520b4c4040169f79dd86b74a21fd4896df5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0dec33fd49e94b03334c2c6d14d4c91c5c0a86b65951683e2fde31a7f53d7475\"" Jul 6 23:46:06.090723 containerd[2777]: time="2025-07-06T23:46:06.090698656Z" level=info msg="StartContainer for \"0dec33fd49e94b03334c2c6d14d4c91c5c0a86b65951683e2fde31a7f53d7475\"" Jul 6 23:46:06.121055 systemd[1]: Started cri-containerd-0dec33fd49e94b03334c2c6d14d4c91c5c0a86b65951683e2fde31a7f53d7475.scope - libcontainer container 0dec33fd49e94b03334c2c6d14d4c91c5c0a86b65951683e2fde31a7f53d7475. 
Jul 6 23:46:06.146359 containerd[2777]: time="2025-07-06T23:46:06.146297937Z" level=info msg="StartContainer for \"0dec33fd49e94b03334c2c6d14d4c91c5c0a86b65951683e2fde31a7f53d7475\" returns successfully" Jul 6 23:46:06.269967 systemd-networkd[2683]: calia7ffb28c362: Gained IPv6LL Jul 6 23:46:06.276767 kubelet[4301]: I0706 23:46:06.276718 4301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f699b944c-lr8j7" podStartSLOduration=12.468747522 podStartE2EDuration="13.27670118s" podCreationTimestamp="2025-07-06 23:45:53 +0000 UTC" firstStartedPulling="2025-07-06 23:46:04.4107391 +0000 UTC m=+27.317486361" lastFinishedPulling="2025-07-06 23:46:05.218692758 +0000 UTC m=+28.125440019" observedRunningTime="2025-07-06 23:46:06.27667354 +0000 UTC m=+29.183420801" watchObservedRunningTime="2025-07-06 23:46:06.27670118 +0000 UTC m=+29.183448441" Jul 6 23:46:06.283919 kubelet[4301]: I0706 23:46:06.283842 4301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b4c6f9887-2rpdn" podStartSLOduration=7.818729548 podStartE2EDuration="9.28382706s" podCreationTimestamp="2025-07-06 23:45:57 +0000 UTC" firstStartedPulling="2025-07-06 23:46:04.615508064 +0000 UTC m=+27.522255325" lastFinishedPulling="2025-07-06 23:46:06.080605576 +0000 UTC m=+28.987352837" observedRunningTime="2025-07-06 23:46:06.28364558 +0000 UTC m=+29.190392841" watchObservedRunningTime="2025-07-06 23:46:06.28382706 +0000 UTC m=+29.190574321" Jul 6 23:46:06.397988 systemd-networkd[2683]: cali261612f9f0d: Gained IPv6LL Jul 6 23:46:06.461933 systemd-networkd[2683]: cali128c906496e: Gained IPv6LL Jul 6 23:46:06.525947 systemd-networkd[2683]: calide264d98b63: Gained IPv6LL Jul 6 23:46:06.782037 systemd-networkd[2683]: calif182fcc4ab8: Gained IPv6LL Jul 6 23:46:06.862818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2441037388.mount: Deactivated successfully. 
Jul 6 23:46:07.063106 containerd[2777]: time="2025-07-06T23:46:07.063027475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:07.063382 containerd[2777]: time="2025-07-06T23:46:07.063085075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 6 23:46:07.063890 containerd[2777]: time="2025-07-06T23:46:07.063865155Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:07.065924 containerd[2777]: time="2025-07-06T23:46:07.065898395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:07.066733 containerd[2777]: time="2025-07-06T23:46:07.066709075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 985.963659ms" Jul 6 23:46:07.066766 containerd[2777]: time="2025-07-06T23:46:07.066741115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 6 23:46:07.067515 containerd[2777]: time="2025-07-06T23:46:07.067496715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:46:07.068397 containerd[2777]: time="2025-07-06T23:46:07.068376396Z" level=info msg="CreateContainer within sandbox \"aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:46:07.074013 containerd[2777]: time="2025-07-06T23:46:07.073984276Z" level=info msg="CreateContainer within sandbox \"aeb9833eb8cdcc186cec1d1a66f87dec5e915252c1e0c82c45c1fd9c5a5588ac\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c3ae529ed7781693fb283d522d0d153cd4ec85ce1e0de53656bf128d6cbd85ec\"" Jul 6 23:46:07.076353 containerd[2777]: time="2025-07-06T23:46:07.076335036Z" level=info msg="StartContainer for \"c3ae529ed7781693fb283d522d0d153cd4ec85ce1e0de53656bf128d6cbd85ec\"" Jul 6 23:46:07.104989 systemd[1]: Started cri-containerd-c3ae529ed7781693fb283d522d0d153cd4ec85ce1e0de53656bf128d6cbd85ec.scope - libcontainer container c3ae529ed7781693fb283d522d0d153cd4ec85ce1e0de53656bf128d6cbd85ec. 
Jul 6 23:46:07.129711 containerd[2777]: time="2025-07-06T23:46:07.129681557Z" level=info msg="StartContainer for \"c3ae529ed7781693fb283d522d0d153cd4ec85ce1e0de53656bf128d6cbd85ec\" returns successfully" Jul 6 23:46:07.165318 kubelet[4301]: I0706 23:46:07.165287 4301 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0b41b8f-0f59-42bb-bf26-a6f229d75d87" path="/var/lib/kubelet/pods/e0b41b8f-0f59-42bb-bf26-a6f229d75d87/volumes" Jul 6 23:46:07.267598 kubelet[4301]: I0706 23:46:07.267569 4301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:46:07.267712 kubelet[4301]: I0706 23:46:07.267589 4301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:46:07.275347 kubelet[4301]: I0706 23:46:07.275303 4301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-fm57w" podStartSLOduration=8.026049473 podStartE2EDuration="10.275288319s" podCreationTimestamp="2025-07-06 23:45:57 +0000 UTC" firstStartedPulling="2025-07-06 23:46:04.818135109 +0000 UTC m=+27.724882370" lastFinishedPulling="2025-07-06 23:46:07.067373955 +0000 UTC m=+29.974121216" observedRunningTime="2025-07-06 23:46:07.275254599 +0000 UTC m=+30.182001860" watchObservedRunningTime="2025-07-06 23:46:07.275288319 +0000 UTC m=+30.182035580" Jul 6 23:46:07.473235 containerd[2777]: time="2025-07-06T23:46:07.473192123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:07.473367 containerd[2777]: time="2025-07-06T23:46:07.473228963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 6 23:46:07.473974 containerd[2777]: time="2025-07-06T23:46:07.473951483Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:07.475758 containerd[2777]: time="2025-07-06T23:46:07.475731723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:07.476475 containerd[2777]: time="2025-07-06T23:46:07.476449643Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 408.924448ms" Jul 6 23:46:07.476527 containerd[2777]: time="2025-07-06T23:46:07.476474683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 6 23:46:07.477260 containerd[2777]: time="2025-07-06T23:46:07.477241003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:46:07.478249 containerd[2777]: time="2025-07-06T23:46:07.478224923Z" level=info msg="CreateContainer within sandbox \"5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:46:07.485265 containerd[2777]: time="2025-07-06T23:46:07.485236603Z" level=info msg="CreateContainer within sandbox 
\"5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6d5ebd43308a715cc0419eebc1af96098656c17de441c6bc5630250a4457bec6\"" Jul 6 23:46:07.485588 containerd[2777]: time="2025-07-06T23:46:07.485570203Z" level=info msg="StartContainer for \"6d5ebd43308a715cc0419eebc1af96098656c17de441c6bc5630250a4457bec6\"" Jul 6 23:46:07.512988 systemd[1]: Started cri-containerd-6d5ebd43308a715cc0419eebc1af96098656c17de441c6bc5630250a4457bec6.scope - libcontainer container 6d5ebd43308a715cc0419eebc1af96098656c17de441c6bc5630250a4457bec6. Jul 6 23:46:07.533591 containerd[2777]: time="2025-07-06T23:46:07.533563844Z" level=info msg="StartContainer for \"6d5ebd43308a715cc0419eebc1af96098656c17de441c6bc5630250a4457bec6\" returns successfully" Jul 6 23:46:07.538709 containerd[2777]: time="2025-07-06T23:46:07.538679444Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:07.538760 containerd[2777]: time="2025-07-06T23:46:07.538727204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:46:07.541285 containerd[2777]: time="2025-07-06T23:46:07.541259444Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 63.985801ms" Jul 6 23:46:07.541313 containerd[2777]: time="2025-07-06T23:46:07.541288604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 6 23:46:07.541933 containerd[2777]: time="2025-07-06T23:46:07.541914804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:46:07.542671 containerd[2777]: time="2025-07-06T23:46:07.542648684Z" level=info msg="CreateContainer within sandbox \"36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:46:07.547424 containerd[2777]: time="2025-07-06T23:46:07.547395204Z" level=info msg="CreateContainer within sandbox \"36822fbb666db8c38186dd6d93fb1f32d0932d27f88b6a31def510c6dabee6c8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6679d20d43781c073a8f82f51da935065e737abee06394adaf572b478913f651\"" Jul 6 23:46:07.547752 containerd[2777]: time="2025-07-06T23:46:07.547730724Z" level=info msg="StartContainer for \"6679d20d43781c073a8f82f51da935065e737abee06394adaf572b478913f651\"" Jul 6 23:46:07.569991 systemd[1]: Started cri-containerd-6679d20d43781c073a8f82f51da935065e737abee06394adaf572b478913f651.scope - libcontainer container 6679d20d43781c073a8f82f51da935065e737abee06394adaf572b478913f651. 
Jul 6 23:46:07.594331 containerd[2777]: time="2025-07-06T23:46:07.594305645Z" level=info msg="StartContainer for \"6679d20d43781c073a8f82f51da935065e737abee06394adaf572b478913f651\" returns successfully" Jul 6 23:46:07.677991 systemd-networkd[2683]: cali0485817422d: Gained IPv6LL Jul 6 23:46:07.875672 containerd[2777]: time="2025-07-06T23:46:07.875578371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:07.875672 containerd[2777]: time="2025-07-06T23:46:07.875620171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 6 23:46:07.876445 containerd[2777]: time="2025-07-06T23:46:07.876422371Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:07.878466 containerd[2777]: time="2025-07-06T23:46:07.878445731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:07.879258 containerd[2777]: time="2025-07-06T23:46:07.879230931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 337.288247ms" Jul 6 23:46:07.879287 containerd[2777]: time="2025-07-06T23:46:07.879262291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 6 23:46:07.880074 containerd[2777]: time="2025-07-06T23:46:07.880056491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:46:07.880909 containerd[2777]: time="2025-07-06T23:46:07.880887251Z" level=info msg="CreateContainer within sandbox \"c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:46:07.885532 containerd[2777]: time="2025-07-06T23:46:07.885503371Z" level=info msg="CreateContainer within sandbox \"c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"28df718eb8f480199a3689724f9ab83f87c9a73f698c9eb278083508fb237976\"" Jul 6 23:46:07.885929 containerd[2777]: time="2025-07-06T23:46:07.885902651Z" level=info msg="StartContainer for \"28df718eb8f480199a3689724f9ab83f87c9a73f698c9eb278083508fb237976\"" Jul 6 23:46:07.920056 systemd[1]: Started cri-containerd-28df718eb8f480199a3689724f9ab83f87c9a73f698c9eb278083508fb237976.scope - libcontainer container 28df718eb8f480199a3689724f9ab83f87c9a73f698c9eb278083508fb237976. 
Jul 6 23:46:07.945639 containerd[2777]: time="2025-07-06T23:46:07.945607812Z" level=info msg="StartContainer for \"28df718eb8f480199a3689724f9ab83f87c9a73f698c9eb278083508fb237976\" returns successfully" Jul 6 23:46:08.274012 kubelet[4301]: I0706 23:46:08.273974 4301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:46:08.282169 kubelet[4301]: I0706 23:46:08.282125 4301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f699b944c-vw6wk" podStartSLOduration=12.760113328 podStartE2EDuration="15.282111138s" podCreationTimestamp="2025-07-06 23:45:53 +0000 UTC" firstStartedPulling="2025-07-06 23:46:05.019774594 +0000 UTC m=+27.926521855" lastFinishedPulling="2025-07-06 23:46:07.541772404 +0000 UTC m=+30.448519665" observedRunningTime="2025-07-06 23:46:08.281827298 +0000 UTC m=+31.188574559" watchObservedRunningTime="2025-07-06 23:46:08.282111138 +0000 UTC m=+31.188858399" Jul 6 23:46:08.360764 containerd[2777]: time="2025-07-06T23:46:08.360723419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:08.361099 containerd[2777]: time="2025-07-06T23:46:08.360801499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 6 23:46:08.361528 containerd[2777]: time="2025-07-06T23:46:08.361508379Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:08.363384 containerd[2777]: time="2025-07-06T23:46:08.363365699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:08.364137 containerd[2777]: time="2025-07-06T23:46:08.364114099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 484.030448ms" Jul 6 23:46:08.364158 containerd[2777]: time="2025-07-06T23:46:08.364143339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 6 23:46:08.364922 containerd[2777]: time="2025-07-06T23:46:08.364904699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:46:08.365858 containerd[2777]: time="2025-07-06T23:46:08.365837299Z" level=info msg="CreateContainer within sandbox \"5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:46:08.371757 containerd[2777]: time="2025-07-06T23:46:08.371727739Z" level=info msg="CreateContainer within sandbox \"5e796d7b29ff4d7411896b1da3f4a64512d861b0b56d3adc01b1880b6d7e722b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"aabe675b9053cfab34be1b40bf1888422c91f68acc0fbdc4fbc7d2115c26f81c\"" Jul 6 23:46:08.372070 containerd[2777]: 
time="2025-07-06T23:46:08.372050219Z" level=info msg="StartContainer for \"aabe675b9053cfab34be1b40bf1888422c91f68acc0fbdc4fbc7d2115c26f81c\"" Jul 6 23:46:08.400991 systemd[1]: Started cri-containerd-aabe675b9053cfab34be1b40bf1888422c91f68acc0fbdc4fbc7d2115c26f81c.scope - libcontainer container aabe675b9053cfab34be1b40bf1888422c91f68acc0fbdc4fbc7d2115c26f81c. Jul 6 23:46:08.421728 containerd[2777]: time="2025-07-06T23:46:08.421698980Z" level=info msg="StartContainer for \"aabe675b9053cfab34be1b40bf1888422c91f68acc0fbdc4fbc7d2115c26f81c\" returns successfully" Jul 6 23:46:08.997860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545914314.mount: Deactivated successfully. Jul 6 23:46:09.000304 containerd[2777]: time="2025-07-06T23:46:09.000272550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:09.000402 containerd[2777]: time="2025-07-06T23:46:09.000287310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 6 23:46:09.001109 containerd[2777]: time="2025-07-06T23:46:09.001090550Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:09.003061 containerd[2777]: time="2025-07-06T23:46:09.003037310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:09.003792 containerd[2777]: time="2025-07-06T23:46:09.003776670Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 638.841571ms" Jul 6 23:46:09.003814 containerd[2777]: time="2025-07-06T23:46:09.003798230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 6 23:46:09.005427 containerd[2777]: time="2025-07-06T23:46:09.005405030Z" level=info msg="CreateContainer within sandbox \"c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:46:09.010274 containerd[2777]: time="2025-07-06T23:46:09.010246111Z" level=info msg="CreateContainer within sandbox \"c0776c3b3cd3d21ca0f5a9a4ab671c822a31729c77815aaf21864df56f58c9f3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"29e037f4e96c9f72e1625fb56d80e6aeee2093b19179ffa987f1f41ae778b02e\"" Jul 6 23:46:09.010600 containerd[2777]: time="2025-07-06T23:46:09.010573191Z" level=info msg="StartContainer for \"29e037f4e96c9f72e1625fb56d80e6aeee2093b19179ffa987f1f41ae778b02e\"" Jul 6 23:46:09.040989 systemd[1]: Started cri-containerd-29e037f4e96c9f72e1625fb56d80e6aeee2093b19179ffa987f1f41ae778b02e.scope - libcontainer container 29e037f4e96c9f72e1625fb56d80e6aeee2093b19179ffa987f1f41ae778b02e. 
Jul 6 23:46:09.075509 containerd[2777]: time="2025-07-06T23:46:09.075477592Z" level=info msg="StartContainer for \"29e037f4e96c9f72e1625fb56d80e6aeee2093b19179ffa987f1f41ae778b02e\" returns successfully" Jul 6 23:46:09.208022 kubelet[4301]: I0706 23:46:09.207988 4301 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 6 23:46:09.208022 kubelet[4301]: I0706 23:46:09.208024 4301 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 6 23:46:09.283948 kubelet[4301]: I0706 23:46:09.283858 4301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-df9d54f6-gsskt" podStartSLOduration=1.031492374 podStartE2EDuration="4.283842995s" podCreationTimestamp="2025-07-06 23:46:05 +0000 UTC" firstStartedPulling="2025-07-06 23:46:05.752066889 +0000 UTC m=+28.658814150" lastFinishedPulling="2025-07-06 23:46:09.00441751 +0000 UTC m=+31.911164771" observedRunningTime="2025-07-06 23:46:09.283335315 +0000 UTC m=+32.190082576" watchObservedRunningTime="2025-07-06 23:46:09.283842995 +0000 UTC m=+32.190590256" Jul 6 23:46:09.290887 kubelet[4301]: I0706 23:46:09.290838 4301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-plknp" podStartSLOduration=8.828989767 podStartE2EDuration="12.290826075s" podCreationTimestamp="2025-07-06 23:45:57 +0000 UTC" firstStartedPulling="2025-07-06 23:46:04.902905511 +0000 UTC m=+27.809652772" lastFinishedPulling="2025-07-06 23:46:08.364741859 +0000 UTC m=+31.271489080" observedRunningTime="2025-07-06 23:46:09.290308035 +0000 UTC m=+32.197055296" watchObservedRunningTime="2025-07-06 23:46:09.290826075 +0000 UTC m=+32.197573336" Jul 6 23:46:09.531972 kubelet[4301]: I0706 23:46:09.531936 4301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:46:09.999967 kubelet[4301]: I0706 23:46:09.999933 4301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:46:21.126858 kubelet[4301]: I0706 23:46:21.126811 4301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:46:21.517932 kernel: bpftool[9464]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 6 23:46:21.684241 systemd-networkd[2683]: vxlan.calico: Link UP Jul 6 23:46:21.684250 systemd-networkd[2683]: vxlan.calico: Gained carrier Jul 6 23:46:23.294001 systemd-networkd[2683]: vxlan.calico: Gained IPv6LL Jul 6 23:46:24.771363 kubelet[4301]: I0706 23:46:24.771292 4301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:46:26.401138 kubelet[4301]: I0706 23:46:26.401078 4301 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:46:37.155495 containerd[2777]: time="2025-07-06T23:46:37.155457850Z" level=info msg="StopPodSandbox for \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\"" Jul 6 23:46:37.155910 containerd[2777]: time="2025-07-06T23:46:37.155570090Z" level=info msg="TearDown network for sandbox \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\" successfully" Jul 6 23:46:37.155910 containerd[2777]: time="2025-07-06T23:46:37.155583170Z" level=info msg="StopPodSandbox for \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\" returns successfully" Jul 6 23:46:37.155910 containerd[2777]: 
time="2025-07-06T23:46:37.155862970Z" level=info msg="RemovePodSandbox for \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\"" Jul 6 23:46:37.155985 containerd[2777]: time="2025-07-06T23:46:37.155917810Z" level=info msg="Forcibly stopping sandbox \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\"" Jul 6 23:46:37.156010 containerd[2777]: time="2025-07-06T23:46:37.155990490Z" level=info msg="TearDown network for sandbox \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\" successfully" Jul 6 23:46:37.236323 containerd[2777]: time="2025-07-06T23:46:37.236273650Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:46:37.236427 containerd[2777]: time="2025-07-06T23:46:37.236358290Z" level=info msg="RemovePodSandbox \"4bb7360e989b50d413ec27c004346ce241f583edcb9a2b67c80e082cf7ac5244\" returns successfully" Jul 6 23:46:37.236713 containerd[2777]: time="2025-07-06T23:46:37.236690450Z" level=info msg="StopPodSandbox for \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\"" Jul 6 23:46:37.236791 containerd[2777]: time="2025-07-06T23:46:37.236777450Z" level=info msg="TearDown network for sandbox \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\" successfully" Jul 6 23:46:37.236817 containerd[2777]: time="2025-07-06T23:46:37.236789930Z" level=info msg="StopPodSandbox for \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\" returns successfully" Jul 6 23:46:37.237069 containerd[2777]: time="2025-07-06T23:46:37.237046490Z" level=info msg="RemovePodSandbox for \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\"" Jul 6 23:46:37.237097 containerd[2777]: time="2025-07-06T23:46:37.237072930Z" level=info msg="Forcibly stopping sandbox \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\"" Jul 6 23:46:37.237155 containerd[2777]: time="2025-07-06T23:46:37.237143930Z" level=info msg="TearDown network for sandbox \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\" successfully" Jul 6 23:46:37.277310 containerd[2777]: time="2025-07-06T23:46:37.277274690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:46:37.277372 containerd[2777]: time="2025-07-06T23:46:37.277345250Z" level=info msg="RemovePodSandbox \"1a4d18ea0b58e548c41a7c34af0b0b243947d571c13702a36d3a3ddbb4cee7a6\" returns successfully" Jul 6 23:46:37.277640 containerd[2777]: time="2025-07-06T23:46:37.277610290Z" level=info msg="StopPodSandbox for \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\"" Jul 6 23:46:37.277714 containerd[2777]: time="2025-07-06T23:46:37.277701730Z" level=info msg="TearDown network for sandbox \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\" successfully" Jul 6 23:46:37.277738 containerd[2777]: time="2025-07-06T23:46:37.277712210Z" level=info msg="StopPodSandbox for \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\" returns successfully" Jul 6 23:46:37.277995 containerd[2777]: time="2025-07-06T23:46:37.277973530Z" level=info msg="RemovePodSandbox for \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\"" Jul 6 23:46:37.278017 containerd[2777]: time="2025-07-06T23:46:37.277999010Z" level=info msg="Forcibly stopping sandbox \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\"" Jul 6 23:46:37.278083 containerd[2777]: time="2025-07-06T23:46:37.278070490Z" level=info msg="TearDown network for sandbox \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\" successfully" Jul 6 23:46:37.322533 containerd[2777]: time="2025-07-06T23:46:37.322502330Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:46:37.322649 containerd[2777]: time="2025-07-06T23:46:37.322556530Z" level=info msg="RemovePodSandbox \"8301f3fd650439914a27cc54e681dacbfe954c84eac1c9400caa86d026837686\" returns successfully" Jul 6 23:46:37.322901 containerd[2777]: time="2025-07-06T23:46:37.322877290Z" level=info msg="StopPodSandbox for \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\"" Jul 6 23:46:37.322991 containerd[2777]: time="2025-07-06T23:46:37.322977530Z" level=info msg="TearDown network for sandbox \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\" successfully" Jul 6 23:46:37.322991 containerd[2777]: time="2025-07-06T23:46:37.322988490Z" level=info msg="StopPodSandbox for \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\" returns successfully" Jul 6 23:46:37.323197 containerd[2777]: time="2025-07-06T23:46:37.323179570Z" level=info msg="RemovePodSandbox for \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\"" Jul 6 23:46:37.323233 containerd[2777]: time="2025-07-06T23:46:37.323200530Z" level=info msg="Forcibly stopping sandbox \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\"" Jul 6 23:46:37.323275 containerd[2777]: time="2025-07-06T23:46:37.323264970Z" level=info msg="TearDown network for sandbox \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\" successfully" Jul 6 23:46:37.396852 containerd[2777]: time="2025-07-06T23:46:37.396816051Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:46:37.396932 containerd[2777]: time="2025-07-06T23:46:37.396883691Z" level=info msg="RemovePodSandbox \"c58b3575346115043e42d3160110ad65f9442033fef9a4e3609a527e9e41f300\" returns successfully" Jul 6 23:46:37.397209 containerd[2777]: time="2025-07-06T23:46:37.397186211Z" level=info msg="StopPodSandbox for \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\"" Jul 6 23:46:37.397302 containerd[2777]: time="2025-07-06T23:46:37.397288571Z" level=info msg="TearDown network for sandbox \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\" successfully" Jul 6 23:46:37.397327 containerd[2777]: time="2025-07-06T23:46:37.397299851Z" level=info msg="StopPodSandbox for \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\" returns successfully" Jul 6 23:46:37.397570 containerd[2777]: time="2025-07-06T23:46:37.397548731Z" level=info msg="RemovePodSandbox for \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\"" Jul 6 23:46:37.397595 containerd[2777]: time="2025-07-06T23:46:37.397578171Z" level=info msg="Forcibly stopping sandbox \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\"" Jul 6 23:46:37.397661 containerd[2777]: time="2025-07-06T23:46:37.397647411Z" level=info msg="TearDown network for sandbox \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\" successfully" Jul 6 23:46:37.399222 containerd[2777]: time="2025-07-06T23:46:37.399195171Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:46:37.399261 containerd[2777]: time="2025-07-06T23:46:37.399250651Z" level=info msg="RemovePodSandbox \"ed1ef771cef9efeebc6d34fd59702c200960451cd392fab8ab19e3edb13958c5\" returns successfully" Jul 6 23:46:37.399496 containerd[2777]: time="2025-07-06T23:46:37.399476771Z" level=info msg="StopPodSandbox for \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\"" Jul 6 23:46:37.399565 containerd[2777]: time="2025-07-06T23:46:37.399555091Z" level=info msg="TearDown network for sandbox \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\" successfully" Jul 6 23:46:37.399588 containerd[2777]: time="2025-07-06T23:46:37.399565331Z" level=info msg="StopPodSandbox for \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\" returns successfully" Jul 6 23:46:37.399824 containerd[2777]: time="2025-07-06T23:46:37.399804891Z" level=info msg="RemovePodSandbox for \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\"" Jul 6 23:46:37.399847 containerd[2777]: time="2025-07-06T23:46:37.399831451Z" level=info msg="Forcibly stopping sandbox \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\"" Jul 6 23:46:37.399921 containerd[2777]: time="2025-07-06T23:46:37.399909451Z" level=info msg="TearDown network for sandbox \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\" successfully" Jul 6 23:46:37.401359 containerd[2777]: time="2025-07-06T23:46:37.401336611Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:46:37.401399 containerd[2777]: time="2025-07-06T23:46:37.401388051Z" level=info msg="RemovePodSandbox \"7fdd44146f32b6ceda976b4f4993ec80635e19da5c4c75dc29e9ea0763f599d5\" returns successfully" Jul 6 23:46:37.401631 containerd[2777]: time="2025-07-06T23:46:37.401609931Z" level=info msg="StopPodSandbox for \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\"" Jul 6 23:46:37.401702 containerd[2777]: time="2025-07-06T23:46:37.401691491Z" level=info msg="TearDown network for sandbox \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\" successfully" Jul 6 23:46:37.401724 containerd[2777]: time="2025-07-06T23:46:37.401702411Z" level=info msg="StopPodSandbox for \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\" returns successfully" Jul 6 23:46:37.401916 containerd[2777]: time="2025-07-06T23:46:37.401898571Z" level=info msg="RemovePodSandbox for \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\"" Jul 6 23:46:37.401943 containerd[2777]: time="2025-07-06T23:46:37.401922651Z" level=info msg="Forcibly stopping sandbox \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\"" Jul 6 23:46:37.402000 containerd[2777]: time="2025-07-06T23:46:37.401989611Z" level=info msg="TearDown network for sandbox \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\" successfully" Jul 6 23:46:37.405984 containerd[2777]: time="2025-07-06T23:46:37.405904811Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:46:37.405984 containerd[2777]: time="2025-07-06T23:46:37.405959291Z" level=info msg="RemovePodSandbox \"ed88fb7349684f51014d3036c107009b175993f050a65a9dff6bd628f7595af5\" returns successfully" Jul 6 23:46:37.406212 containerd[2777]: time="2025-07-06T23:46:37.406190651Z" level=info msg="StopPodSandbox for \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\"" Jul 6 23:46:37.406286 containerd[2777]: time="2025-07-06T23:46:37.406273131Z" level=info msg="TearDown network for sandbox \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\" successfully" Jul 6 23:46:37.406286 containerd[2777]: time="2025-07-06T23:46:37.406283691Z" level=info msg="StopPodSandbox for \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\" returns successfully" Jul 6 23:46:37.406480 containerd[2777]: time="2025-07-06T23:46:37.406462451Z" level=info msg="RemovePodSandbox for \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\"" Jul 6 23:46:37.406506 containerd[2777]: time="2025-07-06T23:46:37.406499131Z" level=info msg="Forcibly stopping sandbox \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\"" Jul 6 23:46:37.406570 containerd[2777]: time="2025-07-06T23:46:37.406559611Z" level=info msg="TearDown network for sandbox \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\" successfully" Jul 6 23:46:37.407959 containerd[2777]: time="2025-07-06T23:46:37.407935291Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:46:37.407999 containerd[2777]: time="2025-07-06T23:46:37.407987971Z" level=info msg="RemovePodSandbox \"e1157630a8ab5b265dd7f77df56f15dfe0fe032d107f4533541c371f3ea0d888\" returns successfully" Jul 6 23:46:37.408254 containerd[2777]: time="2025-07-06T23:46:37.408238251Z" level=info msg="StopPodSandbox for \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\"" Jul 6 23:46:37.408327 containerd[2777]: time="2025-07-06T23:46:37.408315851Z" level=info msg="TearDown network for sandbox \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\" successfully" Jul 6 23:46:37.408347 containerd[2777]: time="2025-07-06T23:46:37.408328731Z" level=info msg="StopPodSandbox for \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\" returns successfully" Jul 6 23:46:37.408551 containerd[2777]: time="2025-07-06T23:46:37.408531091Z" level=info msg="RemovePodSandbox for \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\"" Jul 6 23:46:37.408573 containerd[2777]: time="2025-07-06T23:46:37.408558331Z" level=info msg="Forcibly stopping sandbox \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\"" Jul 6 23:46:37.408639 containerd[2777]: time="2025-07-06T23:46:37.408629011Z" level=info msg="TearDown network for sandbox \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\" successfully" Jul 6 23:46:37.410077 containerd[2777]: time="2025-07-06T23:46:37.410053091Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:46:37.410125 containerd[2777]: time="2025-07-06T23:46:37.410101851Z" level=info msg="RemovePodSandbox \"64b399458d5fdeb321079777a99f72aa68a319d6eafcc77cf44c5da98abc5c83\" returns successfully" Jul 6 23:46:37.410325 containerd[2777]: time="2025-07-06T23:46:37.410309971Z" level=info msg="StopPodSandbox for \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\"" Jul 6 23:46:37.410399 containerd[2777]: time="2025-07-06T23:46:37.410388691Z" level=info msg="TearDown network for sandbox \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\" successfully" Jul 6 23:46:37.410421 containerd[2777]: time="2025-07-06T23:46:37.410399051Z" level=info msg="StopPodSandbox for \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\" returns successfully" Jul 6 23:46:37.410584 containerd[2777]: time="2025-07-06T23:46:37.410568851Z" level=info msg="RemovePodSandbox for \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\"" Jul 6 23:46:37.410603 containerd[2777]: time="2025-07-06T23:46:37.410589131Z" level=info msg="Forcibly stopping sandbox \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\"" Jul 6 23:46:37.410656 containerd[2777]: time="2025-07-06T23:46:37.410646291Z" level=info msg="TearDown network for sandbox \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\" successfully" Jul 6 23:46:37.412084 containerd[2777]: time="2025-07-06T23:46:37.412062331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:46:37.412126 containerd[2777]: time="2025-07-06T23:46:37.412115771Z" level=info msg="RemovePodSandbox \"da8c00b01623512125b5fa588e21d32fee34d73ffd612e233afeadb15d9d9fb5\" returns successfully"
Jul 6 23:46:37.412350 containerd[2777]: time="2025-07-06T23:46:37.412335371Z" level=info msg="StopPodSandbox for \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\""
Jul 6 23:46:37.412436 containerd[2777]: time="2025-07-06T23:46:37.412425011Z" level=info msg="TearDown network for sandbox \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\" successfully"
Jul 6 23:46:37.412458 containerd[2777]: time="2025-07-06T23:46:37.412436331Z" level=info msg="StopPodSandbox for \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\" returns successfully"
Jul 6 23:46:37.412646 containerd[2777]: time="2025-07-06T23:46:37.412629171Z" level=info msg="RemovePodSandbox for \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\""
Jul 6 23:46:37.412668 containerd[2777]: time="2025-07-06T23:46:37.412651651Z" level=info msg="Forcibly stopping sandbox \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\""
Jul 6 23:46:37.412731 containerd[2777]: time="2025-07-06T23:46:37.412720211Z" level=info msg="TearDown network for sandbox \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\" successfully"
Jul 6 23:46:37.414151 containerd[2777]: time="2025-07-06T23:46:37.414125531Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:46:37.414198 containerd[2777]: time="2025-07-06T23:46:37.414184291Z" level=info msg="RemovePodSandbox \"0431b148033811abd7adc33d94f5e30f08850680469a4ff9b2f62cca5d90a198\" returns successfully"
Jul 6 23:46:37.414404 containerd[2777]: time="2025-07-06T23:46:37.414388331Z" level=info msg="StopPodSandbox for \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\""
Jul 6 23:46:37.414471 containerd[2777]: time="2025-07-06T23:46:37.414459251Z" level=info msg="TearDown network for sandbox \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\" successfully"
Jul 6 23:46:37.414493 containerd[2777]: time="2025-07-06T23:46:37.414471531Z" level=info msg="StopPodSandbox for \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\" returns successfully"
Jul 6 23:46:37.414714 containerd[2777]: time="2025-07-06T23:46:37.414698931Z" level=info msg="RemovePodSandbox for \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\""
Jul 6 23:46:37.414735 containerd[2777]: time="2025-07-06T23:46:37.414719451Z" level=info msg="Forcibly stopping sandbox \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\""
Jul 6 23:46:37.414796 containerd[2777]: time="2025-07-06T23:46:37.414786291Z" level=info msg="TearDown network for sandbox \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\" successfully"
Jul 6 23:46:37.416220 containerd[2777]: time="2025-07-06T23:46:37.416197371Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:46:37.416258 containerd[2777]: time="2025-07-06T23:46:37.416248451Z" level=info msg="RemovePodSandbox \"4b5c9dd063ac6b003e02bd71376fb8dc9574f47895d67712a98ee73b64942671\" returns successfully"
Jul 6 23:46:37.416479 containerd[2777]: time="2025-07-06T23:46:37.416464171Z" level=info msg="StopPodSandbox for \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\""
Jul 6 23:46:37.416559 containerd[2777]: time="2025-07-06T23:46:37.416547851Z" level=info msg="TearDown network for sandbox \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\" successfully"
Jul 6 23:46:37.416581 containerd[2777]: time="2025-07-06T23:46:37.416559171Z" level=info msg="StopPodSandbox for \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\" returns successfully"
Jul 6 23:46:37.416763 containerd[2777]: time="2025-07-06T23:46:37.416748611Z" level=info msg="RemovePodSandbox for \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\""
Jul 6 23:46:37.416783 containerd[2777]: time="2025-07-06T23:46:37.416771091Z" level=info msg="Forcibly stopping sandbox \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\""
Jul 6 23:46:37.416842 containerd[2777]: time="2025-07-06T23:46:37.416832491Z" level=info msg="TearDown network for sandbox \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\" successfully"
Jul 6 23:46:37.418236 containerd[2777]: time="2025-07-06T23:46:37.418211131Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:46:37.418276 containerd[2777]: time="2025-07-06T23:46:37.418264251Z" level=info msg="RemovePodSandbox \"464f9b256e2324160a8698f641d846186fb6b7c809a88e7b16b687e615d2cc0a\" returns successfully"
Jul 6 23:46:37.418471 containerd[2777]: time="2025-07-06T23:46:37.418457291Z" level=info msg="StopPodSandbox for \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\""
Jul 6 23:46:37.418536 containerd[2777]: time="2025-07-06T23:46:37.418526171Z" level=info msg="TearDown network for sandbox \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\" successfully"
Jul 6 23:46:37.418558 containerd[2777]: time="2025-07-06T23:46:37.418536731Z" level=info msg="StopPodSandbox for \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\" returns successfully"
Jul 6 23:46:37.418766 containerd[2777]: time="2025-07-06T23:46:37.418749131Z" level=info msg="RemovePodSandbox for \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\""
Jul 6 23:46:37.418787 containerd[2777]: time="2025-07-06T23:46:37.418773611Z" level=info msg="Forcibly stopping sandbox \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\""
Jul 6 23:46:37.418852 containerd[2777]: time="2025-07-06T23:46:37.418841771Z" level=info msg="TearDown network for sandbox \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\" successfully"
Jul 6 23:46:37.420303 containerd[2777]: time="2025-07-06T23:46:37.420281291Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:46:37.420341 containerd[2777]: time="2025-07-06T23:46:37.420331851Z" level=info msg="RemovePodSandbox \"b42d008a74e05ed0851f7ccd6bed88b6777beb64cbe1cd6fbccdf532040670f6\" returns successfully"
Jul 6 23:46:37.420558 containerd[2777]: time="2025-07-06T23:46:37.420543451Z" level=info msg="StopPodSandbox for \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\""
Jul 6 23:46:37.420623 containerd[2777]: time="2025-07-06T23:46:37.420613091Z" level=info msg="TearDown network for sandbox \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\" successfully"
Jul 6 23:46:37.420648 containerd[2777]: time="2025-07-06T23:46:37.420623611Z" level=info msg="StopPodSandbox for \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\" returns successfully"
Jul 6 23:46:37.420814 containerd[2777]: time="2025-07-06T23:46:37.420799571Z" level=info msg="RemovePodSandbox for \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\""
Jul 6 23:46:37.420836 containerd[2777]: time="2025-07-06T23:46:37.420819491Z" level=info msg="Forcibly stopping sandbox \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\""
Jul 6 23:46:37.420912 containerd[2777]: time="2025-07-06T23:46:37.420901011Z" level=info msg="TearDown network for sandbox \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\" successfully"
Jul 6 23:46:37.422286 containerd[2777]: time="2025-07-06T23:46:37.422263811Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:46:37.422325 containerd[2777]: time="2025-07-06T23:46:37.422315691Z" level=info msg="RemovePodSandbox \"fa92bf17061afeedc521996d1f46c10828ee73ebd12c994a7397e669b2364f48\" returns successfully"
Jul 6 23:46:37.422539 containerd[2777]: time="2025-07-06T23:46:37.422524811Z" level=info msg="StopPodSandbox for \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\""
Jul 6 23:46:37.422608 containerd[2777]: time="2025-07-06T23:46:37.422597691Z" level=info msg="TearDown network for sandbox \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\" successfully"
Jul 6 23:46:37.422630 containerd[2777]: time="2025-07-06T23:46:37.422608291Z" level=info msg="StopPodSandbox for \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\" returns successfully"
Jul 6 23:46:37.422816 containerd[2777]: time="2025-07-06T23:46:37.422800491Z" level=info msg="RemovePodSandbox for \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\""
Jul 6 23:46:37.422840 containerd[2777]: time="2025-07-06T23:46:37.422823051Z" level=info msg="Forcibly stopping sandbox \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\""
Jul 6 23:46:37.422906 containerd[2777]: time="2025-07-06T23:46:37.422895131Z" level=info msg="TearDown network for sandbox \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\" successfully"
Jul 6 23:46:37.424303 containerd[2777]: time="2025-07-06T23:46:37.424279371Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:46:37.424345 containerd[2777]: time="2025-07-06T23:46:37.424335331Z" level=info msg="RemovePodSandbox \"3c224bb69c6528453a159ee5bb2d79c6e7ea654748b74b2eecc0643dee435c2f\" returns successfully"
Jul 6 23:46:37.424560 containerd[2777]: time="2025-07-06T23:46:37.424542291Z" level=info msg="StopPodSandbox for \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\""
Jul 6 23:46:37.424632 containerd[2777]: time="2025-07-06T23:46:37.424620811Z" level=info msg="TearDown network for sandbox \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\" successfully"
Jul 6 23:46:37.424656 containerd[2777]: time="2025-07-06T23:46:37.424631171Z" level=info msg="StopPodSandbox for \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\" returns successfully"
Jul 6 23:46:37.424831 containerd[2777]: time="2025-07-06T23:46:37.424816251Z" level=info msg="RemovePodSandbox for \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\""
Jul 6 23:46:37.424856 containerd[2777]: time="2025-07-06T23:46:37.424837211Z" level=info msg="Forcibly stopping sandbox \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\""
Jul 6 23:46:37.424908 containerd[2777]: time="2025-07-06T23:46:37.424898331Z" level=info msg="TearDown network for sandbox \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\" successfully"
Jul 6 23:46:37.426310 containerd[2777]: time="2025-07-06T23:46:37.426287811Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:46:37.426351 containerd[2777]: time="2025-07-06T23:46:37.426341371Z" level=info msg="RemovePodSandbox \"827747d3cb061ac5d9f0a033aea8fb6c25b8fd4e577c892cdeebdb49bfb41cbe\" returns successfully"
Jul 6 23:46:37.426570 containerd[2777]: time="2025-07-06T23:46:37.426554291Z" level=info msg="StopPodSandbox for \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\""
Jul 6 23:46:37.426646 containerd[2777]: time="2025-07-06T23:46:37.426634891Z" level=info msg="TearDown network for sandbox \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\" successfully"
Jul 6 23:46:37.426670 containerd[2777]: time="2025-07-06T23:46:37.426646331Z" level=info msg="StopPodSandbox for \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\" returns successfully"
Jul 6 23:46:37.426849 containerd[2777]: time="2025-07-06T23:46:37.426833571Z" level=info msg="RemovePodSandbox for \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\""
Jul 6 23:46:37.426873 containerd[2777]: time="2025-07-06T23:46:37.426858571Z" level=info msg="Forcibly stopping sandbox \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\""
Jul 6 23:46:37.426939 containerd[2777]: time="2025-07-06T23:46:37.426927851Z" level=info msg="TearDown network for sandbox \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\" successfully"
Jul 6 23:46:37.429439 containerd[2777]: time="2025-07-06T23:46:37.429373571Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:46:37.429536 containerd[2777]: time="2025-07-06T23:46:37.429523171Z" level=info msg="RemovePodSandbox \"c2d347f408603e868d5a95098be007fb32f834f6e2cee49d6628e4e888d74094\" returns successfully"
Jul 6 23:46:37.430026 containerd[2777]: time="2025-07-06T23:46:37.429997131Z" level=info msg="StopPodSandbox for \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\""
Jul 6 23:46:37.430753 containerd[2777]: time="2025-07-06T23:46:37.430726011Z" level=info msg="TearDown network for sandbox \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\" successfully"
Jul 6 23:46:37.430773 containerd[2777]: time="2025-07-06T23:46:37.430755211Z" level=info msg="StopPodSandbox for \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\" returns successfully"
Jul 6 23:46:37.431042 containerd[2777]: time="2025-07-06T23:46:37.431020011Z" level=info msg="RemovePodSandbox for \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\""
Jul 6 23:46:37.431062 containerd[2777]: time="2025-07-06T23:46:37.431049251Z" level=info msg="Forcibly stopping sandbox \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\""
Jul 6 23:46:37.431125 containerd[2777]: time="2025-07-06T23:46:37.431115491Z" level=info msg="TearDown network for sandbox \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\" successfully"
Jul 6 23:46:37.432630 containerd[2777]: time="2025-07-06T23:46:37.432604931Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:46:37.432669 containerd[2777]: time="2025-07-06T23:46:37.432658891Z" level=info msg="RemovePodSandbox \"f47d0d97118fecbcf7c7a5a1c7cdcc49825de08cbee49547e1c0bcd8afbeb167\" returns successfully"
Jul 6 23:46:37.432899 containerd[2777]: time="2025-07-06T23:46:37.432880811Z" level=info msg="StopPodSandbox for \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\""
Jul 6 23:46:37.432980 containerd[2777]: time="2025-07-06T23:46:37.432967851Z" level=info msg="TearDown network for sandbox \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\" successfully"
Jul 6 23:46:37.433005 containerd[2777]: time="2025-07-06T23:46:37.432980451Z" level=info msg="StopPodSandbox for \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\" returns successfully"
Jul 6 23:46:37.433228 containerd[2777]: time="2025-07-06T23:46:37.433205211Z" level=info msg="RemovePodSandbox for \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\""
Jul 6 23:46:37.433248 containerd[2777]: time="2025-07-06T23:46:37.433236891Z" level=info msg="Forcibly stopping sandbox \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\""
Jul 6 23:46:37.433319 containerd[2777]: time="2025-07-06T23:46:37.433309131Z" level=info msg="TearDown network for sandbox \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\" successfully"
Jul 6 23:46:37.434735 containerd[2777]: time="2025-07-06T23:46:37.434711571Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:46:37.434774 containerd[2777]: time="2025-07-06T23:46:37.434763731Z" level=info msg="RemovePodSandbox \"c8d0e2cb9f3c1d0a431902d07ee056a4c3cdf35df1a4097cbcac64f90d899c85\" returns successfully"
Jul 6 23:46:37.434989 containerd[2777]: time="2025-07-06T23:46:37.434973571Z" level=info msg="StopPodSandbox for \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\""
Jul 6 23:46:37.435057 containerd[2777]: time="2025-07-06T23:46:37.435047211Z" level=info msg="TearDown network for sandbox \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\" successfully"
Jul 6 23:46:37.435077 containerd[2777]: time="2025-07-06T23:46:37.435057531Z" level=info msg="StopPodSandbox for \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\" returns successfully"
Jul 6 23:46:37.435274 containerd[2777]: time="2025-07-06T23:46:37.435257011Z" level=info msg="RemovePodSandbox for \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\""
Jul 6 23:46:37.435294 containerd[2777]: time="2025-07-06T23:46:37.435280291Z" level=info msg="Forcibly stopping sandbox \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\""
Jul 6 23:46:37.435361 containerd[2777]: time="2025-07-06T23:46:37.435351051Z" level=info msg="TearDown network for sandbox \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\" successfully"
Jul 6 23:46:37.436764 containerd[2777]: time="2025-07-06T23:46:37.436742371Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:46:37.436803 containerd[2777]: time="2025-07-06T23:46:37.436792891Z" level=info msg="RemovePodSandbox \"6de0ba61cda0fa8afd1a40870b7799a55bd892c626bddb9307b08ba7d649294e\" returns successfully"
Jul 6 23:46:37.437046 containerd[2777]: time="2025-07-06T23:46:37.437025571Z" level=info msg="StopPodSandbox for \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\""
Jul 6 23:46:37.437133 containerd[2777]: time="2025-07-06T23:46:37.437121491Z" level=info msg="TearDown network for sandbox \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\" successfully"
Jul 6 23:46:37.437154 containerd[2777]: time="2025-07-06T23:46:37.437133611Z" level=info msg="StopPodSandbox for \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\" returns successfully"
Jul 6 23:46:37.437343 containerd[2777]: time="2025-07-06T23:46:37.437329131Z" level=info msg="RemovePodSandbox for \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\""
Jul 6 23:46:37.437363 containerd[2777]: time="2025-07-06T23:46:37.437348971Z" level=info msg="Forcibly stopping sandbox \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\""
Jul 6 23:46:37.437425 containerd[2777]: time="2025-07-06T23:46:37.437415571Z" level=info msg="TearDown network for sandbox \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\" successfully"
Jul 6 23:46:37.449931 containerd[2777]: time="2025-07-06T23:46:37.449901491Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:46:37.449990 containerd[2777]: time="2025-07-06T23:46:37.449948291Z" level=info msg="RemovePodSandbox \"3e59b4729dad76f14993b50c23bcc5b51745d3d4a26156978cc1317bfdae12ee\" returns successfully"
Jul 6 23:46:37.450247 containerd[2777]: time="2025-07-06T23:46:37.450227171Z" level=info msg="StopPodSandbox for \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\""
Jul 6 23:46:37.450323 containerd[2777]: time="2025-07-06T23:46:37.450310211Z" level=info msg="TearDown network for sandbox \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\" successfully"
Jul 6 23:46:37.450345 containerd[2777]: time="2025-07-06T23:46:37.450321051Z" level=info msg="StopPodSandbox for \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\" returns successfully"
Jul 6 23:46:37.450532 containerd[2777]: time="2025-07-06T23:46:37.450516051Z" level=info msg="RemovePodSandbox for \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\""
Jul 6 23:46:37.450556 containerd[2777]: time="2025-07-06T23:46:37.450535811Z" level=info msg="Forcibly stopping sandbox \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\""
Jul 6 23:46:37.450615 containerd[2777]: time="2025-07-06T23:46:37.450602331Z" level=info msg="TearDown network for sandbox \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\" successfully"
Jul 6 23:46:37.452038 containerd[2777]: time="2025-07-06T23:46:37.452014331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:46:37.452077 containerd[2777]: time="2025-07-06T23:46:37.452064051Z" level=info msg="RemovePodSandbox \"dd37911a27c4787e700d8c3b35bf3819df12e6ee99f4e06d332024a23effc0ad\" returns successfully"
Jul 6 23:46:37.452286 containerd[2777]: time="2025-07-06T23:46:37.452272371Z" level=info msg="StopPodSandbox for \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\""
Jul 6 23:46:37.452352 containerd[2777]: time="2025-07-06T23:46:37.452341611Z" level=info msg="TearDown network for sandbox \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\" successfully"
Jul 6 23:46:37.452373 containerd[2777]: time="2025-07-06T23:46:37.452352491Z" level=info msg="StopPodSandbox for \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\" returns successfully"
Jul 6 23:46:37.452557 containerd[2777]: time="2025-07-06T23:46:37.452542531Z" level=info msg="RemovePodSandbox for \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\""
Jul 6 23:46:37.452578 containerd[2777]: time="2025-07-06T23:46:37.452563011Z" level=info msg="Forcibly stopping sandbox \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\""
Jul 6 23:46:37.452636 containerd[2777]: time="2025-07-06T23:46:37.452626011Z" level=info msg="TearDown network for sandbox \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\" successfully"
Jul 6 23:46:37.454089 containerd[2777]: time="2025-07-06T23:46:37.454065331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:46:37.454131 containerd[2777]: time="2025-07-06T23:46:37.454116091Z" level=info msg="RemovePodSandbox \"5a2826ca413e2d92674965c2b0ac08422d77a7a10af570f31567dfebd484dda1\" returns successfully"
Jul 6 23:49:08.111736 systemd[1]: Started sshd@7-147.28.150.251:22-80.94.95.116:18686.service - OpenSSH per-connection server daemon (80.94.95.116:18686).
Jul 6 23:49:10.390055 sshd[10702]: Connection closed by authenticating user root 80.94.95.116 port 18686 [preauth]
Jul 6 23:49:10.392239 systemd[1]: sshd@7-147.28.150.251:22-80.94.95.116:18686.service: Deactivated successfully.
Jul 6 23:53:51.322968 systemd[1]: Started sshd@8-147.28.150.251:22-139.178.89.65:36166.service - OpenSSH per-connection server daemon (139.178.89.65:36166).
Jul 6 23:53:51.727302 sshd[11967]: Accepted publickey for core from 139.178.89.65 port 36166 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY
Jul 6 23:53:51.728820 sshd-session[11967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:51.732646 systemd-logind[2761]: New session 10 of user core.
Jul 6 23:53:51.742987 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 6 23:53:52.083743 sshd[11969]: Connection closed by 139.178.89.65 port 36166
Jul 6 23:53:52.084023 sshd-session[11967]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:52.086988 systemd[1]: sshd@8-147.28.150.251:22-139.178.89.65:36166.service: Deactivated successfully.
Jul 6 23:53:52.088771 systemd[1]: session-10.scope: Deactivated successfully.
Jul 6 23:53:52.089386 systemd-logind[2761]: Session 10 logged out. Waiting for processes to exit.
Jul 6 23:53:52.089938 systemd-logind[2761]: Removed session 10.
Jul 6 23:53:57.156839 systemd[1]: Started sshd@9-147.28.150.251:22-139.178.89.65:36172.service - OpenSSH per-connection server daemon (139.178.89.65:36172).
Jul 6 23:53:57.556118 sshd[12070]: Accepted publickey for core from 139.178.89.65 port 36172 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY
Jul 6 23:53:57.557274 sshd-session[12070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:57.560680 systemd-logind[2761]: New session 11 of user core.
Jul 6 23:53:57.572033 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 6 23:53:57.907335 sshd[12072]: Connection closed by 139.178.89.65 port 36172
Jul 6 23:53:57.907708 sshd-session[12070]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:57.910861 systemd[1]: sshd@9-147.28.150.251:22-139.178.89.65:36172.service: Deactivated successfully.
Jul 6 23:53:57.913068 systemd[1]: session-11.scope: Deactivated successfully.
Jul 6 23:53:57.913672 systemd-logind[2761]: Session 11 logged out. Waiting for processes to exit.
Jul 6 23:53:57.914302 systemd-logind[2761]: Removed session 11.
Jul 6 23:53:57.986027 systemd[1]: Started sshd@10-147.28.150.251:22-139.178.89.65:36176.service - OpenSSH per-connection server daemon (139.178.89.65:36176).
Jul 6 23:53:58.387881 sshd[12099]: Accepted publickey for core from 139.178.89.65 port 36176 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY
Jul 6 23:53:58.389056 sshd-session[12099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:58.392360 systemd-logind[2761]: New session 12 of user core.
Jul 6 23:53:58.401029 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 6 23:53:58.759473 sshd[12102]: Connection closed by 139.178.89.65 port 36176
Jul 6 23:53:58.759863 sshd-session[12099]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:58.762792 systemd[1]: sshd@10-147.28.150.251:22-139.178.89.65:36176.service: Deactivated successfully.
Jul 6 23:53:58.765519 systemd[1]: session-12.scope: Deactivated successfully.
Jul 6 23:53:58.766089 systemd-logind[2761]: Session 12 logged out. Waiting for processes to exit.
Jul 6 23:53:58.766667 systemd-logind[2761]: Removed session 12.
Jul 6 23:53:58.837787 systemd[1]: Started sshd@11-147.28.150.251:22-139.178.89.65:36192.service - OpenSSH per-connection server daemon (139.178.89.65:36192).
Jul 6 23:53:59.242494 sshd[12142]: Accepted publickey for core from 139.178.89.65 port 36192 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY
Jul 6 23:53:59.243679 sshd-session[12142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:53:59.249274 systemd-logind[2761]: New session 13 of user core.
Jul 6 23:53:59.266988 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 6 23:53:59.592720 sshd[12144]: Connection closed by 139.178.89.65 port 36192
Jul 6 23:53:59.593080 sshd-session[12142]: pam_unix(sshd:session): session closed for user core
Jul 6 23:53:59.596086 systemd[1]: sshd@11-147.28.150.251:22-139.178.89.65:36192.service: Deactivated successfully.
Jul 6 23:53:59.598397 systemd[1]: session-13.scope: Deactivated successfully.
Jul 6 23:53:59.598998 systemd-logind[2761]: Session 13 logged out. Waiting for processes to exit.
Jul 6 23:53:59.599608 systemd-logind[2761]: Removed session 13.
Jul 6 23:54:04.663768 systemd[1]: Started sshd@12-147.28.150.251:22-139.178.89.65:58124.service - OpenSSH per-connection server daemon (139.178.89.65:58124).
Jul 6 23:54:05.063697 sshd[12181]: Accepted publickey for core from 139.178.89.65 port 58124 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY
Jul 6 23:54:05.064762 sshd-session[12181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:54:05.068042 systemd-logind[2761]: New session 14 of user core.
Jul 6 23:54:05.077983 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 6 23:54:05.411684 sshd[12183]: Connection closed by 139.178.89.65 port 58124
Jul 6 23:54:05.412034 sshd-session[12181]: pam_unix(sshd:session): session closed for user core
Jul 6 23:54:05.415006 systemd[1]: sshd@12-147.28.150.251:22-139.178.89.65:58124.service: Deactivated successfully.
Jul 6 23:54:05.416741 systemd[1]: session-14.scope: Deactivated successfully.
Jul 6 23:54:05.417325 systemd-logind[2761]: Session 14 logged out. Waiting for processes to exit.
Jul 6 23:54:05.417922 systemd-logind[2761]: Removed session 14.
Jul 6 23:54:05.486695 systemd[1]: Started sshd@13-147.28.150.251:22-139.178.89.65:58126.service - OpenSSH per-connection server daemon (139.178.89.65:58126).
Jul 6 23:54:05.898864 sshd[12221]: Accepted publickey for core from 139.178.89.65 port 58126 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY
Jul 6 23:54:05.899983 sshd-session[12221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:54:05.903167 systemd-logind[2761]: New session 15 of user core.
Jul 6 23:54:05.913984 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 6 23:54:06.285141 sshd[12223]: Connection closed by 139.178.89.65 port 58126
Jul 6 23:54:06.285501 sshd-session[12221]: pam_unix(sshd:session): session closed for user core
Jul 6 23:54:06.288323 systemd[1]: sshd@13-147.28.150.251:22-139.178.89.65:58126.service: Deactivated successfully.
Jul 6 23:54:06.290061 systemd[1]: session-15.scope: Deactivated successfully.
Jul 6 23:54:06.290582 systemd-logind[2761]: Session 15 logged out. Waiting for processes to exit.
Jul 6 23:54:06.291129 systemd-logind[2761]: Removed session 15.
Jul 6 23:54:06.358714 systemd[1]: Started sshd@14-147.28.150.251:22-139.178.89.65:58134.service - OpenSSH per-connection server daemon (139.178.89.65:58134).
Jul 6 23:54:06.771250 sshd[12255]: Accepted publickey for core from 139.178.89.65 port 58134 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY
Jul 6 23:54:06.772299 sshd-session[12255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:54:06.775277 systemd-logind[2761]: New session 16 of user core.
Jul 6 23:54:06.794970 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 6 23:54:08.271066 sshd[12257]: Connection closed by 139.178.89.65 port 58134
Jul 6 23:54:08.271469 sshd-session[12255]: pam_unix(sshd:session): session closed for user core
Jul 6 23:54:08.274441 systemd[1]: sshd@14-147.28.150.251:22-139.178.89.65:58134.service: Deactivated successfully.
Jul 6 23:54:08.276202 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:54:08.276392 systemd[1]: session-16.scope: Consumed 4.290s CPU time, 123.6M memory peak.
Jul 6 23:54:08.276748 systemd-logind[2761]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:54:08.277331 systemd-logind[2761]: Removed session 16.
Jul 6 23:54:08.343566 systemd[1]: Started sshd@15-147.28.150.251:22-139.178.89.65:58142.service - OpenSSH per-connection server daemon (139.178.89.65:58142).
Jul 6 23:54:08.752980 sshd[12374]: Accepted publickey for core from 139.178.89.65 port 58142 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY
Jul 6 23:54:08.754053 sshd-session[12374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:54:08.757374 systemd-logind[2761]: New session 17 of user core.
Jul 6 23:54:08.766041 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:54:09.193978 sshd[12376]: Connection closed by 139.178.89.65 port 58142
Jul 6 23:54:09.194362 sshd-session[12374]: pam_unix(sshd:session): session closed for user core
Jul 6 23:54:09.197292 systemd[1]: sshd@15-147.28.150.251:22-139.178.89.65:58142.service: Deactivated successfully.
Jul 6 23:54:09.200384 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:54:09.200965 systemd-logind[2761]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:54:09.201495 systemd-logind[2761]: Removed session 17.
Jul 6 23:54:09.272789 systemd[1]: Started sshd@16-147.28.150.251:22-139.178.89.65:58154.service - OpenSSH per-connection server daemon (139.178.89.65:58154).
Jul 6 23:54:09.682359 sshd[12443]: Accepted publickey for core from 139.178.89.65 port 58154 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY
Jul 6 23:54:09.683521 sshd-session[12443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:54:09.686929 systemd-logind[2761]: New session 18 of user core.
Jul 6 23:54:09.697980 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:54:10.034438 sshd[12478]: Connection closed by 139.178.89.65 port 58154
Jul 6 23:54:10.034760 sshd-session[12443]: pam_unix(sshd:session): session closed for user core
Jul 6 23:54:10.037618 systemd[1]: sshd@16-147.28.150.251:22-139.178.89.65:58154.service: Deactivated successfully.
Jul 6 23:54:10.039337 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:54:10.039910 systemd-logind[2761]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:54:10.040457 systemd-logind[2761]: Removed session 18.
Jul 6 23:54:15.104881 systemd[1]: Started sshd@17-147.28.150.251:22-139.178.89.65:59696.service - OpenSSH per-connection server daemon (139.178.89.65:59696).
Jul 6 23:54:15.504495 sshd[12526]: Accepted publickey for core from 139.178.89.65 port 59696 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY
Jul 6 23:54:15.505572 sshd-session[12526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:54:15.508797 systemd-logind[2761]: New session 19 of user core.
Jul 6 23:54:15.521972 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:54:15.848279 sshd[12528]: Connection closed by 139.178.89.65 port 59696
Jul 6 23:54:15.848637 sshd-session[12526]: pam_unix(sshd:session): session closed for user core
Jul 6 23:54:15.851545 systemd[1]: sshd@17-147.28.150.251:22-139.178.89.65:59696.service: Deactivated successfully.
Jul 6 23:54:15.853682 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:54:15.854257 systemd-logind[2761]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:54:15.854777 systemd-logind[2761]: Removed session 19.
Jul 6 23:54:20.921838 systemd[1]: Started sshd@18-147.28.150.251:22-139.178.89.65:40124.service - OpenSSH per-connection server daemon (139.178.89.65:40124).
Jul 6 23:54:21.324636 sshd[12554]: Accepted publickey for core from 139.178.89.65 port 40124 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY
Jul 6 23:54:21.325876 sshd-session[12554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:54:21.329250 systemd-logind[2761]: New session 20 of user core.
Jul 6 23:54:21.344974 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:54:21.671926 sshd[12556]: Connection closed by 139.178.89.65 port 40124
Jul 6 23:54:21.672236 sshd-session[12554]: pam_unix(sshd:session): session closed for user core
Jul 6 23:54:21.675138 systemd[1]: sshd@18-147.28.150.251:22-139.178.89.65:40124.service: Deactivated successfully.
Jul 6 23:54:21.677454 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:54:21.678009 systemd-logind[2761]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:54:21.678653 systemd-logind[2761]: Removed session 20.
Jul 6 23:54:26.744803 systemd[1]: Started sshd@19-147.28.150.251:22-139.178.89.65:40128.service - OpenSSH per-connection server daemon (139.178.89.65:40128).
Jul 6 23:54:27.146569 sshd[12645]: Accepted publickey for core from 139.178.89.65 port 40128 ssh2: RSA SHA256:jFXM65JZ7mqyKzAzCXI2L3LJhK0caTsT4P3ZL5RTqgY
Jul 6 23:54:27.155854 sshd-session[12645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:54:27.159053 systemd-logind[2761]: New session 21 of user core.
Jul 6 23:54:27.170990 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:54:27.494217 sshd[12647]: Connection closed by 139.178.89.65 port 40128
Jul 6 23:54:27.494604 sshd-session[12645]: pam_unix(sshd:session): session closed for user core
Jul 6 23:54:27.497515 systemd[1]: sshd@19-147.28.150.251:22-139.178.89.65:40128.service: Deactivated successfully.
Jul 6 23:54:27.499202 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:54:27.499901 systemd-logind[2761]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:54:27.500468 systemd-logind[2761]: Removed session 21.