Jul 7 00:13:48.200495 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1] Jul 7 00:13:48.200519 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:52:18 -00 2025 Jul 7 00:13:48.200527 kernel: KASLR enabled Jul 7 00:13:48.200533 kernel: efi: EFI v2.7 by American Megatrends Jul 7 00:13:48.200538 kernel: efi: ACPI 2.0=0xeea40000 SMBIOS 3.0=0xf1b2ff98 ESRT=0xecd1f018 RNG=0xee8d0018 MEMRESERVE=0xe6f2cf98 Jul 7 00:13:48.200543 kernel: random: crng init done Jul 7 00:13:48.200550 kernel: secureboot: Secure boot disabled Jul 7 00:13:48.200556 kernel: esrt: Reserving ESRT space from 0x00000000ecd1f018 to 0x00000000ecd1f078. Jul 7 00:13:48.200563 kernel: ACPI: Early table checksum verification disabled Jul 7 00:13:48.200569 kernel: ACPI: RSDP 0x00000000EEA40000 000024 (v02 Ampere) Jul 7 00:13:48.200574 kernel: ACPI: XSDT 0x00000000EEA30000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013) Jul 7 00:13:48.200580 kernel: ACPI: FACP 0x00000000EEA10000 000114 (v06 Ampere Altra 00000000 INTL 20190509) Jul 7 00:13:48.200586 kernel: ACPI: DSDT 0x00000000EE9B0000 019B3A (v02 Ampere Jade 00000001 INTL 20200717) Jul 7 00:13:48.200592 kernel: ACPI: DBG2 0x00000000EEA20000 00005C (v00 Ampere Altra 00000000 INTL 20190509) Jul 7 00:13:48.200600 kernel: ACPI: GTDT 0x00000000EEA00000 000110 (v03 Ampere Altra 00000000 INTL 20190509) Jul 7 00:13:48.200606 kernel: ACPI: SSDT 0x00000000EE9F0000 00002D (v02 Ampere Altra 00000001 INTL 20190509) Jul 7 00:13:48.200612 kernel: ACPI: FIDT 0x00000000EE9A0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013) Jul 7 00:13:48.200618 kernel: ACPI: SPCR 0x00000000EE990000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F) Jul 7 00:13:48.200624 kernel: ACPI: BGRT 0x00000000EE980000 000038 (v01 ALASKA A M I 01072009 AMI 00010013) Jul 7 00:13:48.200630 kernel: ACPI: MCFG 0x00000000EE970000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013) Jul 7 00:13:48.200636 kernel: ACPI: IORT 0x00000000EE960000 000610 (v00 Ampere Altra 00000000 AMP. 01000013) Jul 7 00:13:48.200642 kernel: ACPI: PPTT 0x00000000EE940000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013) Jul 7 00:13:48.200648 kernel: ACPI: SLIT 0x00000000EE930000 00002D (v01 Ampere Altra 00000000 AMP. 01000013) Jul 7 00:13:48.200654 kernel: ACPI: SRAT 0x00000000EE920000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013) Jul 7 00:13:48.200662 kernel: ACPI: APIC 0x00000000EE950000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013) Jul 7 00:13:48.200668 kernel: ACPI: PCCT 0x00000000EE900000 000576 (v02 Ampere Altra 00000003 AMP. 
01000013) Jul 7 00:13:48.200674 kernel: ACPI: WSMT 0x00000000EE8F0000 000028 (v01 ALASKA A M I 01072009 AMI 00010013) Jul 7 00:13:48.200680 kernel: ACPI: FPDT 0x00000000EE8E0000 000044 (v01 ALASKA A M I 01072009 AMI 01000013) Jul 7 00:13:48.200686 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200 Jul 7 00:13:48.200692 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 7 00:13:48.200698 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff] Jul 7 00:13:48.200704 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff] Jul 7 00:13:48.200710 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff] Jul 7 00:13:48.200716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff] Jul 7 00:13:48.200722 kernel: NUMA: Initialized distance table, cnt=1 Jul 7 00:13:48.200729 kernel: NUMA: Node 0 [mem 0x88300000-0x883fffff] + [mem 0x90000000-0xffffffff] -> [mem 0x88300000-0xffffffff] Jul 7 00:13:48.200736 kernel: NUMA: Node 0 [mem 0x88300000-0xffffffff] + [mem 0x80000000000-0x8007fffffff] -> [mem 0x88300000-0x8007fffffff] Jul 7 00:13:48.200742 kernel: NUMA: Node 0 [mem 0x88300000-0x8007fffffff] + [mem 0x80100000000-0x83fffffffff] -> [mem 0x88300000-0x83fffffffff] Jul 7 00:13:48.200748 kernel: NODE_DATA(0) allocated [mem 0x83fdffc7dc0-0x83fdffcefff] Jul 7 00:13:48.200754 kernel: Zone ranges: Jul 7 00:13:48.200763 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff] Jul 7 00:13:48.200770 kernel: DMA32 empty Jul 7 00:13:48.200776 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff] Jul 7 00:13:48.200783 kernel: Device empty Jul 7 00:13:48.200789 kernel: Movable zone start for each node Jul 7 00:13:48.200795 kernel: Early memory node ranges Jul 7 00:13:48.200802 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff] Jul 7 00:13:48.200808 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff] Jul 7 00:13:48.200815 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff] Jul 7 00:13:48.200821 kernel: node 0: [mem 0x0000000094000000-0x00000000ee70efff] Jul 7 00:13:48.200827 kernel: node 0: [mem 0x00000000ee70f000-0x00000000ee88cfff] Jul 7 00:13:48.200835 kernel: node 0: [mem 0x00000000ee88d000-0x00000000ee88dfff] Jul 7 00:13:48.200841 kernel: node 0: [mem 0x00000000ee88e000-0x00000000ee88ffff] Jul 7 00:13:48.200847 kernel: node 0: [mem 0x00000000ee890000-0x00000000eeaaffff] Jul 7 00:13:48.200854 kernel: node 0: [mem 0x00000000eeab0000-0x00000000eeabffff] Jul 7 00:13:48.200860 kernel: node 0: [mem 0x00000000eeac0000-0x00000000eff2ffff] Jul 7 00:13:48.200866 kernel: node 0: [mem 0x00000000eff30000-0x00000000f0280fff] Jul 7 00:13:48.200872 kernel: node 0: [mem 0x00000000f0281000-0x00000000f06affff] Jul 7 00:13:48.200879 kernel: node 0: [mem 0x00000000f06b0000-0x00000000f766ffff] Jul 7 00:13:48.200885 kernel: node 0: [mem 0x00000000f7670000-0x00000000f784ffff] Jul 7 00:13:48.200891 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff] Jul 7 00:13:48.200898 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff] Jul 7 00:13:48.200904 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff] Jul 7 00:13:48.200912 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff] Jul 7 00:13:48.200918 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff] Jul 7 00:13:48.200924 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff] Jul 7 00:13:48.200931 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff] Jul 7 00:13:48.200941 kernel: On node 0, zone DMA: 768 pages in unavailable ranges Jul 7 00:13:48.200948 
kernel: On node 0, zone DMA: 31744 pages in unavailable ranges Jul 7 00:13:48.200954 kernel: psci: probing for conduit method from ACPI. Jul 7 00:13:48.200960 kernel: psci: PSCIv1.1 detected in firmware. Jul 7 00:13:48.200967 kernel: psci: Using standard PSCI v0.2 function IDs Jul 7 00:13:48.200973 kernel: psci: MIGRATE_INFO_TYPE not supported. Jul 7 00:13:48.200979 kernel: psci: SMC Calling Convention v1.2 Jul 7 00:13:48.200986 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 7 00:13:48.200993 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 Jul 7 00:13:48.201000 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 Jul 7 00:13:48.201006 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 Jul 7 00:13:48.201012 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 Jul 7 00:13:48.201019 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 Jul 7 00:13:48.201025 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 Jul 7 00:13:48.201031 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 Jul 7 00:13:48.201038 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 Jul 7 00:13:48.201044 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 Jul 7 00:13:48.201050 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 Jul 7 00:13:48.201057 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 Jul 7 00:13:48.201064 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 Jul 7 00:13:48.201071 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 Jul 7 00:13:48.201077 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 Jul 7 00:13:48.201083 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 Jul 7 00:13:48.201089 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 Jul 7 00:13:48.201096 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 Jul 7 00:13:48.201102 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 Jul 7 00:13:48.201108 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 Jul 7 00:13:48.201115 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 Jul 7 00:13:48.201121 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0 Jul 7 00:13:48.201127 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 Jul 7 00:13:48.201134 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0 Jul 7 00:13:48.201141 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 Jul 7 00:13:48.201148 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 Jul 7 00:13:48.201154 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 Jul 7 00:13:48.201160 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 Jul 7 00:13:48.201167 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 Jul 7 00:13:48.201173 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 Jul 7 00:13:48.201179 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 Jul 7 00:13:48.201186 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 Jul 7 00:13:48.201192 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 Jul 7 00:13:48.201198 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 Jul 7 00:13:48.201205 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 Jul 7 00:13:48.201213 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 Jul 7 00:13:48.201219 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 Jul 7 00:13:48.201225 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 Jul 7 00:13:48.201232 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 
0x130000 -> Node 0 Jul 7 00:13:48.201238 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0 Jul 7 00:13:48.201245 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 Jul 7 00:13:48.201251 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 Jul 7 00:13:48.201257 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 Jul 7 00:13:48.201263 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 Jul 7 00:13:48.201275 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 Jul 7 00:13:48.201283 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 Jul 7 00:13:48.201290 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 Jul 7 00:13:48.201297 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0 Jul 7 00:13:48.201304 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 Jul 7 00:13:48.201311 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 Jul 7 00:13:48.201317 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 Jul 7 00:13:48.201325 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 Jul 7 00:13:48.201332 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 Jul 7 00:13:48.201339 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 Jul 7 00:13:48.201345 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 Jul 7 00:13:48.201352 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 Jul 7 00:13:48.201359 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 Jul 7 00:13:48.201366 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 Jul 7 00:13:48.201372 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 Jul 7 00:13:48.201379 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 Jul 7 00:13:48.201386 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 Jul 7 00:13:48.201392 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 Jul 7 00:13:48.201399 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 Jul 7 00:13:48.201407 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 Jul 7 00:13:48.201414 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 Jul 7 00:13:48.201420 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 Jul 7 00:13:48.201427 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 Jul 7 00:13:48.201434 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 Jul 7 00:13:48.201440 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 Jul 7 00:13:48.201447 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 Jul 7 00:13:48.201454 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 Jul 7 00:13:48.201460 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0 Jul 7 00:13:48.201467 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 Jul 7 00:13:48.201474 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 Jul 7 00:13:48.201480 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 Jul 7 00:13:48.201488 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 Jul 7 00:13:48.201495 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 Jul 7 00:13:48.201502 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 Jul 7 00:13:48.201509 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 Jul 7 00:13:48.201515 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 Jul 7 00:13:48.201522 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 7 00:13:48.201529 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 
Jul 7 00:13:48.201535 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 Jul 7 00:13:48.201542 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 Jul 7 00:13:48.201549 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23 Jul 7 00:13:48.201556 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31 Jul 7 00:13:48.201564 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39 Jul 7 00:13:48.201571 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47 Jul 7 00:13:48.201577 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55 Jul 7 00:13:48.201584 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63 Jul 7 00:13:48.201591 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71 Jul 7 00:13:48.201598 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79 Jul 7 00:13:48.201604 kernel: Detected PIPT I-cache on CPU0 Jul 7 00:13:48.201611 kernel: CPU features: detected: GIC system register CPU interface Jul 7 00:13:48.201618 kernel: CPU features: detected: Virtualization Host Extensions Jul 7 00:13:48.201624 kernel: CPU features: detected: Spectre-v4 Jul 7 00:13:48.201631 kernel: CPU features: detected: Spectre-BHB Jul 7 00:13:48.201639 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 7 00:13:48.201646 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 7 00:13:48.201653 kernel: CPU features: detected: ARM erratum 1418040 Jul 7 00:13:48.201659 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 7 00:13:48.201666 kernel: alternatives: applying boot alternatives Jul 7 00:13:48.201674 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=dd2d39de40482a23e9bb75390ff5ca85cd9bd34d902b8049121a8373f8cb2ef2 Jul 7 00:13:48.201681 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 00:13:48.201688 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jul 7 00:13:48.201695 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes Jul 7 00:13:48.201701 kernel: printk: log_buf_len min size: 262144 bytes Jul 7 00:13:48.201708 kernel: printk: log_buf_len: 1048576 bytes Jul 7 00:13:48.201716 kernel: printk: early log buf free: 249440(95%) Jul 7 00:13:48.201723 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear) Jul 7 00:13:48.201730 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) Jul 7 00:13:48.201736 kernel: Fallback order for Node 0: 0 Jul 7 00:13:48.201743 kernel: Built 1 zonelists, mobility grouping on. Total pages: 67043584 Jul 7 00:13:48.201750 kernel: Policy zone: Normal Jul 7 00:13:48.201757 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 00:13:48.201763 kernel: software IO TLB: area num 128. Jul 7 00:13:48.201770 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB) Jul 7 00:13:48.201777 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1 Jul 7 00:13:48.201784 kernel: rcu: Preemptible hierarchical RCU implementation. 
Jul 7 00:13:48.201792 kernel: rcu: RCU event tracing is enabled. Jul 7 00:13:48.201799 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80. Jul 7 00:13:48.201806 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 00:13:48.201813 kernel: Tracing variant of Tasks RCU enabled. Jul 7 00:13:48.201820 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 00:13:48.201827 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80 Jul 7 00:13:48.201834 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. Jul 7 00:13:48.201841 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. Jul 7 00:13:48.201847 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 7 00:13:48.201854 kernel: GICv3: GIC: Using split EOI/Deactivate mode Jul 7 00:13:48.201861 kernel: GICv3: 672 SPIs implemented Jul 7 00:13:48.201868 kernel: GICv3: 0 Extended SPIs implemented Jul 7 00:13:48.201876 kernel: Root IRQ handler: gic_handle_irq Jul 7 00:13:48.201882 kernel: GICv3: GICv3 features: 16 PPIs Jul 7 00:13:48.201889 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=1 Jul 7 00:13:48.201896 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000 Jul 7 00:13:48.201903 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0 Jul 7 00:13:48.201909 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0 Jul 7 00:13:48.201916 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0 Jul 7 00:13:48.201923 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0 Jul 7 00:13:48.201930 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0 Jul 7 00:13:48.201939 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0 Jul 7 00:13:48.201946 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0 Jul 7 00:13:48.201953 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0 Jul 7 00:13:48.201961 kernel: ITS [mem 0x100100040000-0x10010005ffff] Jul 7 00:13:48.201968 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000340000 (indirect, esz 8, psz 64K, shr 1) Jul 7 00:13:48.201975 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000350000 (flat, esz 2, psz 64K, shr 1) Jul 7 00:13:48.201981 kernel: ITS [mem 0x100100060000-0x10010007ffff] Jul 7 00:13:48.201988 kernel: ITS@0x0000100100060000: allocated 8192 Devices @80000370000 (indirect, esz 8, psz 64K, shr 1) Jul 7 00:13:48.201995 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @80000380000 (flat, esz 2, psz 64K, shr 1) Jul 7 00:13:48.202002 kernel: ITS [mem 0x100100080000-0x10010009ffff] Jul 7 00:13:48.202009 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800003a0000 (indirect, esz 8, psz 64K, shr 1) Jul 7 00:13:48.202016 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800003b0000 (flat, esz 2, psz 64K, shr 1) Jul 7 00:13:48.202022 kernel: ITS [mem 0x1001000a0000-0x1001000bffff] Jul 7 00:13:48.202029 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @800003d0000 (indirect, esz 8, psz 64K, shr 1) Jul 7 00:13:48.202037 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @800003e0000 (flat, esz 2, psz 64K, shr 1) Jul 7 00:13:48.202044 kernel: ITS [mem 0x1001000c0000-0x1001000dffff] Jul 7 00:13:48.202051 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000800000 (indirect, esz 8, psz 64K, shr 1) Jul 7 00:13:48.202058 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000810000 (flat, esz 2, psz 64K, shr 1) Jul 7 00:13:48.202065 kernel: ITS [mem 0x1001000e0000-0x1001000fffff] Jul 7 00:13:48.202071 kernel: ITS@0x00001001000e0000: allocated 
8192 Devices @80000830000 (indirect, esz 8, psz 64K, shr 1) Jul 7 00:13:48.202078 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000840000 (flat, esz 2, psz 64K, shr 1) Jul 7 00:13:48.202085 kernel: ITS [mem 0x100100100000-0x10010011ffff] Jul 7 00:13:48.202092 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000860000 (indirect, esz 8, psz 64K, shr 1) Jul 7 00:13:48.202099 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @80000870000 (flat, esz 2, psz 64K, shr 1) Jul 7 00:13:48.202106 kernel: ITS [mem 0x100100120000-0x10010013ffff] Jul 7 00:13:48.202114 kernel: ITS@0x0000100100120000: allocated 8192 Devices @80000890000 (indirect, esz 8, psz 64K, shr 1) Jul 7 00:13:48.202121 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800008a0000 (flat, esz 2, psz 64K, shr 1) Jul 7 00:13:48.202128 kernel: GICv3: using LPI property table @0x00000800008b0000 Jul 7 00:13:48.202134 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800008c0000 Jul 7 00:13:48.202141 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 00:13:48.202148 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202155 kernel: ACPI GTDT: found 1 memory-mapped timer block(s). Jul 7 00:13:48.202162 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys). Jul 7 00:13:48.202169 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 7 00:13:48.202175 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 7 00:13:48.202182 kernel: Console: colour dummy device 80x25 Jul 7 00:13:48.202191 kernel: printk: legacy console [tty0] enabled Jul 7 00:13:48.202198 kernel: ACPI: Core revision 20240827 Jul 7 00:13:48.202205 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 7 00:13:48.202212 kernel: pid_max: default: 81920 minimum: 640 Jul 7 00:13:48.202219 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 7 00:13:48.202226 kernel: landlock: Up and running. Jul 7 00:13:48.202233 kernel: SELinux: Initializing. Jul 7 00:13:48.202240 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 00:13:48.202247 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 00:13:48.202255 kernel: rcu: Hierarchical SRCU implementation. Jul 7 00:13:48.202262 kernel: rcu: Max phase no-delay instances is 400. Jul 7 00:13:48.202269 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Jul 7 00:13:48.202276 kernel: Remapping and enabling EFI services. Jul 7 00:13:48.202283 kernel: smp: Bringing up secondary CPUs ... 
Jul 7 00:13:48.202290 kernel: Detected PIPT I-cache on CPU1 Jul 7 00:13:48.202297 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000 Jul 7 00:13:48.202304 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000800008d0000 Jul 7 00:13:48.202311 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202318 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1] Jul 7 00:13:48.202326 kernel: Detected PIPT I-cache on CPU2 Jul 7 00:13:48.202333 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000 Jul 7 00:13:48.202340 kernel: GICv3: CPU2: using allocated LPI pending table @0x00000800008e0000 Jul 7 00:13:48.202347 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202353 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1] Jul 7 00:13:48.202361 kernel: Detected PIPT I-cache on CPU3 Jul 7 00:13:48.202368 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000 Jul 7 00:13:48.202374 kernel: GICv3: CPU3: using allocated LPI pending table @0x00000800008f0000 Jul 7 00:13:48.202382 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202389 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1] Jul 7 00:13:48.202396 kernel: Detected PIPT I-cache on CPU4 Jul 7 00:13:48.202403 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000 Jul 7 00:13:48.202410 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000900000 Jul 7 00:13:48.202417 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202424 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1] Jul 7 00:13:48.202430 kernel: Detected PIPT I-cache on CPU5 Jul 7 00:13:48.202437 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000 Jul 7 00:13:48.202444 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000910000 Jul 7 00:13:48.202452 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202460 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1] Jul 7 00:13:48.202466 kernel: Detected PIPT I-cache on CPU6 Jul 7 00:13:48.202473 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000 Jul 7 00:13:48.202480 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000920000 Jul 7 00:13:48.202487 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202494 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1] Jul 7 00:13:48.202501 kernel: Detected PIPT I-cache on CPU7 Jul 7 00:13:48.202508 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000 Jul 7 00:13:48.202515 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000930000 Jul 7 00:13:48.202523 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202530 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1] Jul 7 00:13:48.202537 kernel: Detected PIPT I-cache on CPU8 Jul 7 00:13:48.202544 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000 Jul 7 00:13:48.202551 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000940000 Jul 7 00:13:48.202558 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202564 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1] Jul 7 00:13:48.202571 kernel: Detected PIPT I-cache on CPU9 Jul 7 
00:13:48.202578 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000 Jul 7 00:13:48.202585 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000950000 Jul 7 00:13:48.202593 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202600 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1] Jul 7 00:13:48.202607 kernel: Detected PIPT I-cache on CPU10 Jul 7 00:13:48.202614 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000 Jul 7 00:13:48.202621 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000960000 Jul 7 00:13:48.202628 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202634 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1] Jul 7 00:13:48.202641 kernel: Detected PIPT I-cache on CPU11 Jul 7 00:13:48.202648 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000 Jul 7 00:13:48.202657 kernel: GICv3: CPU11: using allocated LPI pending table @0x0000080000970000 Jul 7 00:13:48.202664 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202670 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1] Jul 7 00:13:48.202677 kernel: Detected PIPT I-cache on CPU12 Jul 7 00:13:48.202684 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000 Jul 7 00:13:48.202691 kernel: GICv3: CPU12: using allocated LPI pending table @0x0000080000980000 Jul 7 00:13:48.202698 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202705 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1] Jul 7 00:13:48.202711 kernel: Detected PIPT I-cache on CPU13 Jul 7 00:13:48.202718 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000 Jul 7 00:13:48.202727 kernel: GICv3: CPU13: using allocated LPI pending table @0x0000080000990000 Jul 7 00:13:48.202734 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202741 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1] Jul 7 00:13:48.202748 kernel: Detected PIPT I-cache on CPU14 Jul 7 00:13:48.202754 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000 Jul 7 00:13:48.202761 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800009a0000 Jul 7 00:13:48.202768 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202775 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1] Jul 7 00:13:48.202782 kernel: Detected PIPT I-cache on CPU15 Jul 7 00:13:48.202790 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000 Jul 7 00:13:48.202797 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800009b0000 Jul 7 00:13:48.202804 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202811 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1] Jul 7 00:13:48.202818 kernel: Detected PIPT I-cache on CPU16 Jul 7 00:13:48.202825 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000 Jul 7 00:13:48.202832 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800009c0000 Jul 7 00:13:48.202839 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202846 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1] Jul 7 00:13:48.202853 kernel: Detected PIPT I-cache on CPU17 Jul 7 00:13:48.202861 kernel: GICv3: CPU17: 
found redistributor 40000 region 0:0x0000100100240000 Jul 7 00:13:48.202868 kernel: GICv3: CPU17: using allocated LPI pending table @0x00000800009d0000 Jul 7 00:13:48.202875 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202881 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1] Jul 7 00:13:48.202888 kernel: Detected PIPT I-cache on CPU18 Jul 7 00:13:48.202895 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000 Jul 7 00:13:48.202902 kernel: GICv3: CPU18: using allocated LPI pending table @0x00000800009e0000 Jul 7 00:13:48.202918 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202927 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1] Jul 7 00:13:48.202938 kernel: Detected PIPT I-cache on CPU19 Jul 7 00:13:48.202946 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 Jul 7 00:13:48.202953 kernel: GICv3: CPU19: using allocated LPI pending table @0x00000800009f0000 Jul 7 00:13:48.202960 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.202968 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] Jul 7 00:13:48.202975 kernel: Detected PIPT I-cache on CPU20 Jul 7 00:13:48.202982 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 Jul 7 00:13:48.202989 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000a00000 Jul 7 00:13:48.202997 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203004 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] Jul 7 00:13:48.203013 kernel: Detected PIPT I-cache on CPU21 Jul 7 00:13:48.203020 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 Jul 7 00:13:48.203027 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000a10000 Jul 7 00:13:48.203036 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203043 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] Jul 7 00:13:48.203052 kernel: Detected PIPT I-cache on CPU22 Jul 7 00:13:48.203060 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 Jul 7 00:13:48.203067 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000a20000 Jul 7 00:13:48.203074 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203081 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] Jul 7 00:13:48.203089 kernel: Detected PIPT I-cache on CPU23 Jul 7 00:13:48.203096 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 Jul 7 00:13:48.203103 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000a30000 Jul 7 00:13:48.203111 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203118 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] Jul 7 00:13:48.203127 kernel: Detected PIPT I-cache on CPU24 Jul 7 00:13:48.203134 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 Jul 7 00:13:48.203142 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000a40000 Jul 7 00:13:48.203149 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203156 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] Jul 7 00:13:48.203163 kernel: Detected PIPT I-cache on CPU25 Jul 7 00:13:48.203171 kernel: GICv3: CPU25: found redistributor 190000 region 
0:0x0000100100780000 Jul 7 00:13:48.203178 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000a50000 Jul 7 00:13:48.203186 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203194 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] Jul 7 00:13:48.203201 kernel: Detected PIPT I-cache on CPU26 Jul 7 00:13:48.203209 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 Jul 7 00:13:48.203216 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000a60000 Jul 7 00:13:48.203223 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203230 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] Jul 7 00:13:48.203238 kernel: Detected PIPT I-cache on CPU27 Jul 7 00:13:48.203245 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 Jul 7 00:13:48.203252 kernel: GICv3: CPU27: using allocated LPI pending table @0x0000080000a70000 Jul 7 00:13:48.203260 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203268 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] Jul 7 00:13:48.203275 kernel: Detected PIPT I-cache on CPU28 Jul 7 00:13:48.203283 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 Jul 7 00:13:48.203290 kernel: GICv3: CPU28: using allocated LPI pending table @0x0000080000a80000 Jul 7 00:13:48.203297 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203305 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] Jul 7 00:13:48.203312 kernel: Detected PIPT I-cache on CPU29 Jul 7 00:13:48.203319 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 Jul 7 00:13:48.203326 kernel: GICv3: CPU29: using allocated LPI pending table @0x0000080000a90000 Jul 7 00:13:48.203335 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203342 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] Jul 7 00:13:48.203350 kernel: Detected PIPT I-cache on CPU30 Jul 7 00:13:48.203357 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 Jul 7 00:13:48.203364 kernel: GICv3: CPU30: using allocated LPI pending table @0x0000080000aa0000 Jul 7 00:13:48.203371 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203379 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] Jul 7 00:13:48.203386 kernel: Detected PIPT I-cache on CPU31 Jul 7 00:13:48.203393 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 Jul 7 00:13:48.203400 kernel: GICv3: CPU31: using allocated LPI pending table @0x0000080000ab0000 Jul 7 00:13:48.203409 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203416 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] Jul 7 00:13:48.203423 kernel: Detected PIPT I-cache on CPU32 Jul 7 00:13:48.203430 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 Jul 7 00:13:48.203438 kernel: GICv3: CPU32: using allocated LPI pending table @0x0000080000ac0000 Jul 7 00:13:48.203445 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203452 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] Jul 7 00:13:48.203459 kernel: Detected PIPT I-cache on CPU33 Jul 7 00:13:48.203467 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 Jul 7 
00:13:48.203475 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000ad0000 Jul 7 00:13:48.203483 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203490 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] Jul 7 00:13:48.203497 kernel: Detected PIPT I-cache on CPU34 Jul 7 00:13:48.203504 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 Jul 7 00:13:48.203512 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000ae0000 Jul 7 00:13:48.203519 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203526 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] Jul 7 00:13:48.203533 kernel: Detected PIPT I-cache on CPU35 Jul 7 00:13:48.203541 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 Jul 7 00:13:48.203549 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000af0000 Jul 7 00:13:48.203557 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203564 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] Jul 7 00:13:48.203571 kernel: Detected PIPT I-cache on CPU36 Jul 7 00:13:48.203578 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 Jul 7 00:13:48.203586 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000b00000 Jul 7 00:13:48.203593 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203600 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] Jul 7 00:13:48.203607 kernel: Detected PIPT I-cache on CPU37 Jul 7 00:13:48.203616 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 Jul 7 00:13:48.203623 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000b10000 Jul 7 00:13:48.203630 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203638 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] Jul 7 00:13:48.203645 kernel: Detected PIPT I-cache on CPU38 Jul 7 00:13:48.203652 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 Jul 7 00:13:48.203659 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000b20000 Jul 7 00:13:48.203667 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203674 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] Jul 7 00:13:48.203682 kernel: Detected PIPT I-cache on CPU39 Jul 7 00:13:48.203690 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 Jul 7 00:13:48.203697 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000b30000 Jul 7 00:13:48.203704 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203712 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] Jul 7 00:13:48.203719 kernel: Detected PIPT I-cache on CPU40 Jul 7 00:13:48.203726 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 Jul 7 00:13:48.203733 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000b40000 Jul 7 00:13:48.203742 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203749 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] Jul 7 00:13:48.203757 kernel: Detected PIPT I-cache on CPU41 Jul 7 00:13:48.203764 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 Jul 7 00:13:48.203771 kernel: GICv3: CPU41: 
using allocated LPI pending table @0x0000080000b50000 Jul 7 00:13:48.203778 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203786 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] Jul 7 00:13:48.203793 kernel: Detected PIPT I-cache on CPU42 Jul 7 00:13:48.203800 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 Jul 7 00:13:48.203807 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000b60000 Jul 7 00:13:48.203816 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203823 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] Jul 7 00:13:48.203831 kernel: Detected PIPT I-cache on CPU43 Jul 7 00:13:48.203838 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 Jul 7 00:13:48.203845 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000b70000 Jul 7 00:13:48.203852 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203860 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] Jul 7 00:13:48.203867 kernel: Detected PIPT I-cache on CPU44 Jul 7 00:13:48.203874 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 Jul 7 00:13:48.203883 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000b80000 Jul 7 00:13:48.203890 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203898 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] Jul 7 00:13:48.203905 kernel: Detected PIPT I-cache on CPU45 Jul 7 00:13:48.203912 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 Jul 7 00:13:48.203919 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000b90000 Jul 7 00:13:48.203927 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203954 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] Jul 7 00:13:48.203962 kernel: Detected PIPT I-cache on CPU46 Jul 7 00:13:48.203969 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 Jul 7 00:13:48.203979 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ba0000 Jul 7 00:13:48.203987 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.203996 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] Jul 7 00:13:48.204003 kernel: Detected PIPT I-cache on CPU47 Jul 7 00:13:48.204010 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 Jul 7 00:13:48.204018 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000bb0000 Jul 7 00:13:48.204025 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204032 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] Jul 7 00:13:48.204040 kernel: Detected PIPT I-cache on CPU48 Jul 7 00:13:48.204048 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 Jul 7 00:13:48.204056 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000bc0000 Jul 7 00:13:48.204063 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204070 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] Jul 7 00:13:48.204077 kernel: Detected PIPT I-cache on CPU49 Jul 7 00:13:48.204085 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 Jul 7 00:13:48.204092 kernel: GICv3: CPU49: using allocated LPI pending table 
@0x0000080000bd0000 Jul 7 00:13:48.204099 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204106 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] Jul 7 00:13:48.204114 kernel: Detected PIPT I-cache on CPU50 Jul 7 00:13:48.204122 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 Jul 7 00:13:48.204130 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000be0000 Jul 7 00:13:48.204137 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204144 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] Jul 7 00:13:48.204152 kernel: Detected PIPT I-cache on CPU51 Jul 7 00:13:48.204159 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 Jul 7 00:13:48.204166 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000bf0000 Jul 7 00:13:48.204174 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204181 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] Jul 7 00:13:48.204189 kernel: Detected PIPT I-cache on CPU52 Jul 7 00:13:48.204197 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 Jul 7 00:13:48.204204 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000c00000 Jul 7 00:13:48.204211 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204218 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] Jul 7 00:13:48.204226 kernel: Detected PIPT I-cache on CPU53 Jul 7 00:13:48.204233 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 Jul 7 00:13:48.204240 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000c10000 Jul 7 00:13:48.204248 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204255 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] Jul 7 00:13:48.204263 kernel: Detected PIPT I-cache on CPU54 Jul 7 00:13:48.204271 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 Jul 7 00:13:48.204278 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000c20000 Jul 7 00:13:48.204285 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204293 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] Jul 7 00:13:48.204300 kernel: Detected PIPT I-cache on CPU55 Jul 7 00:13:48.204307 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 Jul 7 00:13:48.204314 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000c30000 Jul 7 00:13:48.204322 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204330 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] Jul 7 00:13:48.204337 kernel: Detected PIPT I-cache on CPU56 Jul 7 00:13:48.204344 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 Jul 7 00:13:48.204352 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000c40000 Jul 7 00:13:48.204359 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204366 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] Jul 7 00:13:48.204374 kernel: Detected PIPT I-cache on CPU57 Jul 7 00:13:48.204381 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 Jul 7 00:13:48.204388 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000c50000 Jul 7 
00:13:48.204397 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204404 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] Jul 7 00:13:48.204411 kernel: Detected PIPT I-cache on CPU58 Jul 7 00:13:48.204418 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 Jul 7 00:13:48.204426 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000c60000 Jul 7 00:13:48.204433 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204440 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] Jul 7 00:13:48.204447 kernel: Detected PIPT I-cache on CPU59 Jul 7 00:13:48.204455 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 Jul 7 00:13:48.204462 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000c70000 Jul 7 00:13:48.204471 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204478 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] Jul 7 00:13:48.204485 kernel: Detected PIPT I-cache on CPU60 Jul 7 00:13:48.204492 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 Jul 7 00:13:48.204499 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000c80000 Jul 7 00:13:48.204507 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204514 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] Jul 7 00:13:48.204521 kernel: Detected PIPT I-cache on CPU61 Jul 7 00:13:48.204529 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 Jul 7 00:13:48.204537 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000c90000 Jul 7 00:13:48.204545 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204552 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] Jul 7 00:13:48.204559 kernel: Detected PIPT I-cache on CPU62 Jul 7 00:13:48.204566 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 Jul 7 00:13:48.204574 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000ca0000 Jul 7 00:13:48.204581 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204588 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] Jul 7 00:13:48.204595 kernel: Detected PIPT I-cache on CPU63 Jul 7 00:13:48.204603 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 Jul 7 00:13:48.204611 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000cb0000 Jul 7 00:13:48.204619 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204626 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] Jul 7 00:13:48.204633 kernel: Detected PIPT I-cache on CPU64 Jul 7 00:13:48.204640 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 Jul 7 00:13:48.204648 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000cc0000 Jul 7 00:13:48.204655 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204662 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] Jul 7 00:13:48.204670 kernel: Detected PIPT I-cache on CPU65 Jul 7 00:13:48.204679 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 Jul 7 00:13:48.204686 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000cd0000 Jul 7 00:13:48.204694 kernel: arch_timer: 
Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204701 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] Jul 7 00:13:48.204708 kernel: Detected PIPT I-cache on CPU66 Jul 7 00:13:48.204716 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 Jul 7 00:13:48.204723 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000ce0000 Jul 7 00:13:48.204730 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204737 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] Jul 7 00:13:48.204745 kernel: Detected PIPT I-cache on CPU67 Jul 7 00:13:48.204753 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 Jul 7 00:13:48.204760 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000cf0000 Jul 7 00:13:48.204768 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204775 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] Jul 7 00:13:48.204782 kernel: Detected PIPT I-cache on CPU68 Jul 7 00:13:48.204789 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 Jul 7 00:13:48.204797 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000d00000 Jul 7 00:13:48.204804 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204811 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] Jul 7 00:13:48.204820 kernel: Detected PIPT I-cache on CPU69 Jul 7 00:13:48.204827 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 Jul 7 00:13:48.204835 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000d10000 Jul 7 00:13:48.204842 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204849 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] Jul 7 00:13:48.204856 kernel: Detected PIPT I-cache on CPU70 Jul 7 00:13:48.204864 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 Jul 7 00:13:48.204871 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000d20000 Jul 7 00:13:48.204878 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204885 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] Jul 7 00:13:48.204894 kernel: Detected PIPT I-cache on CPU71 Jul 7 00:13:48.204901 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 Jul 7 00:13:48.204908 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000d30000 Jul 7 00:13:48.204916 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204923 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] Jul 7 00:13:48.204930 kernel: Detected PIPT I-cache on CPU72 Jul 7 00:13:48.204965 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 Jul 7 00:13:48.204973 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000d40000 Jul 7 00:13:48.204981 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.204989 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1] Jul 7 00:13:48.204997 kernel: Detected PIPT I-cache on CPU73 Jul 7 00:13:48.205004 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 Jul 7 00:13:48.205012 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000d50000 Jul 7 00:13:48.205019 kernel: arch_timer: Enabling local workaround for ARM 
erratum 1418040 Jul 7 00:13:48.205026 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] Jul 7 00:13:48.205033 kernel: Detected PIPT I-cache on CPU74 Jul 7 00:13:48.205041 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 Jul 7 00:13:48.205048 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000d60000 Jul 7 00:13:48.205057 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.205064 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] Jul 7 00:13:48.205071 kernel: Detected PIPT I-cache on CPU75 Jul 7 00:13:48.205078 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 Jul 7 00:13:48.205086 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000d70000 Jul 7 00:13:48.205093 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.205100 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] Jul 7 00:13:48.205107 kernel: Detected PIPT I-cache on CPU76 Jul 7 00:13:48.205115 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 Jul 7 00:13:48.205122 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000d80000 Jul 7 00:13:48.205131 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.205138 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] Jul 7 00:13:48.205145 kernel: Detected PIPT I-cache on CPU77 Jul 7 00:13:48.205152 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 Jul 7 00:13:48.205160 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000d90000 Jul 7 00:13:48.205167 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.205174 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] Jul 7 00:13:48.205181 kernel: Detected PIPT I-cache on CPU78 Jul 7 00:13:48.205189 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 Jul 7 00:13:48.205197 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000da0000 Jul 7 00:13:48.205205 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.205212 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] Jul 7 00:13:48.205219 kernel: Detected PIPT I-cache on CPU79 Jul 7 00:13:48.205226 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 Jul 7 00:13:48.205234 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000db0000 Jul 7 00:13:48.205241 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 00:13:48.205248 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] Jul 7 00:13:48.205255 kernel: smp: Brought up 1 node, 80 CPUs Jul 7 00:13:48.205263 kernel: SMP: Total of 80 processors activated. 
Jul 7 00:13:48.205271 kernel: CPU: All CPU(s) started at EL2 Jul 7 00:13:48.205278 kernel: CPU features: detected: 32-bit EL0 Support Jul 7 00:13:48.205286 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 7 00:13:48.205293 kernel: CPU features: detected: Common not Private translations Jul 7 00:13:48.205300 kernel: CPU features: detected: CRC32 instructions Jul 7 00:13:48.205308 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 7 00:13:48.205315 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 7 00:13:48.205322 kernel: CPU features: detected: LSE atomic instructions Jul 7 00:13:48.205330 kernel: CPU features: detected: Privileged Access Never Jul 7 00:13:48.205338 kernel: CPU features: detected: RAS Extension Support Jul 7 00:13:48.205346 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 7 00:13:48.205353 kernel: alternatives: applying system-wide alternatives Jul 7 00:13:48.205361 kernel: CPU features: detected: Hardware dirty bit management on CPU0-79 Jul 7 00:13:48.205368 kernel: Memory: 262893720K/268174336K available (11072K kernel code, 2428K rwdata, 9032K rodata, 39424K init, 1035K bss, 5220784K reserved, 0K cma-reserved) Jul 7 00:13:48.205376 kernel: devtmpfs: initialized Jul 7 00:13:48.205383 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 00:13:48.205390 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 7 00:13:48.205398 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 7 00:13:48.205406 kernel: 0 pages in range for non-PLT usage Jul 7 00:13:48.205413 kernel: 508480 pages in range for PLT usage Jul 7 00:13:48.205421 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 00:13:48.205428 kernel: SMBIOS 3.4.0 present. Jul 7 00:13:48.205435 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F16f (SCP: 1.07.20210713) 07/01/2021 Jul 7 00:13:48.205443 kernel: DMI: Memory slots populated: 8/16 Jul 7 00:13:48.205450 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 00:13:48.205457 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations Jul 7 00:13:48.205465 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 7 00:13:48.205473 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 7 00:13:48.205481 kernel: audit: initializing netlink subsys (disabled) Jul 7 00:13:48.205488 kernel: audit: type=2000 audit(0.065:1): state=initialized audit_enabled=0 res=1 Jul 7 00:13:48.205495 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 00:13:48.205502 kernel: cpuidle: using governor menu Jul 7 00:13:48.205510 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 7 00:13:48.205517 kernel: ASID allocator initialised with 32768 entries Jul 7 00:13:48.205524 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 00:13:48.205532 kernel: Serial: AMBA PL011 UART driver Jul 7 00:13:48.205540 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 00:13:48.205548 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 00:13:48.205555 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 7 00:13:48.205562 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 7 00:13:48.205569 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 00:13:48.205577 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 00:13:48.205584 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 7 00:13:48.205591 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 7 00:13:48.205599 kernel: ACPI: Added _OSI(Module Device) Jul 7 00:13:48.205608 kernel: ACPI: Added _OSI(Processor Device) Jul 7 00:13:48.205615 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 00:13:48.205622 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded Jul 7 00:13:48.205630 kernel: ACPI: Interpreter enabled Jul 7 00:13:48.205637 kernel: ACPI: Using GIC for interrupt routing Jul 7 00:13:48.205644 kernel: ACPI: MCFG table detected, 8 entries Jul 7 00:13:48.205651 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 Jul 7 00:13:48.205659 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 Jul 7 00:13:48.205666 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 Jul 7 00:13:48.205674 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 Jul 7 00:13:48.205682 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 Jul 7 00:13:48.205689 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 Jul 7 00:13:48.205696 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 Jul 7 00:13:48.205704 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 Jul 7 00:13:48.205711 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 21, base_baud = 0) is a SBSA Jul 7 00:13:48.205718 kernel: printk: legacy console [ttyAMA0] enabled Jul 7 00:13:48.205726 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 22, base_baud = 0) is a SBSA Jul 7 00:13:48.205735 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) Jul 7 00:13:48.205862 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 00:13:48.205925 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 00:13:48.205988 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] Jul 7 00:13:48.206044 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 00:13:48.206099 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 Jul 7 00:13:48.206153 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] Jul 7 00:13:48.206165 kernel: PCI host bridge to bus 000d:00 Jul 7 00:13:48.206230 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] Jul 7 00:13:48.206283 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] Jul 7 00:13:48.206334 kernel: pci_bus 000d:00: root bus resource [bus 
00-ff] Jul 7 00:13:48.206407 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jul 7 00:13:48.206478 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.206539 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Jul 7 00:13:48.206599 kernel: pci 000d:00:01.0: enabling Extended Tags Jul 7 00:13:48.206658 kernel: pci 000d:00:01.0: supports D1 D2 Jul 7 00:13:48.206718 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.206784 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.206842 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Jul 7 00:13:48.206900 kernel: pci 000d:00:02.0: supports D1 D2 Jul 7 00:13:48.206964 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.207030 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.207088 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Jul 7 00:13:48.207145 kernel: pci 000d:00:03.0: supports D1 D2 Jul 7 00:13:48.207202 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.207266 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.207324 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Jul 7 00:13:48.207383 kernel: pci 000d:00:04.0: supports D1 D2 Jul 7 00:13:48.207440 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.207450 kernel: acpiphp: Slot [1] registered Jul 7 00:13:48.207457 kernel: acpiphp: Slot [2] registered Jul 7 00:13:48.207464 kernel: acpiphp: Slot [3] registered Jul 7 00:13:48.207471 kernel: acpiphp: Slot [4] registered Jul 7 00:13:48.207522 kernel: pci_bus 000d:00: on NUMA node 0 Jul 7 00:13:48.207580 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 00:13:48.207640 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 00:13:48.207697 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 7 00:13:48.207756 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 00:13:48.207813 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 00:13:48.207871 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 00:13:48.207928 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 00:13:48.207989 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 00:13:48.208051 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 7 00:13:48.208109 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 00:13:48.208166 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 00:13:48.208224 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 00:13:48.208281 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff]: assigned Jul 7 00:13:48.208339 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref]: assigned Jul 7 00:13:48.208396 kernel: pci 000d:00:02.0: 
bridge window [mem 0x50200000-0x503fffff]: assigned Jul 7 00:13:48.208455 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref]: assigned Jul 7 00:13:48.208512 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff]: assigned Jul 7 00:13:48.208569 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref]: assigned Jul 7 00:13:48.208626 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff]: assigned Jul 7 00:13:48.208683 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref]: assigned Jul 7 00:13:48.208740 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.208797 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.208854 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.208913 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.208974 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.209031 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.209088 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.209145 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.209201 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.209259 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.209318 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.209375 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.209432 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.209489 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.209546 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.209603 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.209660 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Jul 7 00:13:48.209717 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] Jul 7 00:13:48.209776 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] Jul 7 00:13:48.209833 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Jul 7 00:13:48.209890 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] Jul 7 00:13:48.209951 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] Jul 7 00:13:48.210008 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Jul 7 00:13:48.210066 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] Jul 7 00:13:48.210125 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] Jul 7 00:13:48.210182 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Jul 7 00:13:48.210239 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] Jul 7 00:13:48.210296 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] Jul 7 00:13:48.210348 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] Jul 7 00:13:48.210399 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] Jul 7 00:13:48.210463 kernel: pci_bus 000d:01: resource 1 [mem 
0x50000000-0x501fffff] Jul 7 00:13:48.210519 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] Jul 7 00:13:48.210580 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff] Jul 7 00:13:48.210633 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] Jul 7 00:13:48.210701 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] Jul 7 00:13:48.210754 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] Jul 7 00:13:48.210815 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] Jul 7 00:13:48.210870 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] Jul 7 00:13:48.210880 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) Jul 7 00:13:48.210946 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 00:13:48.211003 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 00:13:48.211058 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] Jul 7 00:13:48.211113 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 00:13:48.211167 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 Jul 7 00:13:48.211224 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] Jul 7 00:13:48.211234 kernel: PCI host bridge to bus 0000:00 Jul 7 00:13:48.211292 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] Jul 7 00:13:48.211346 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] Jul 7 00:13:48.211399 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 00:13:48.211468 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jul 7 00:13:48.211536 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.211596 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 7 00:13:48.211654 kernel: pci 0000:00:01.0: enabling Extended Tags Jul 7 00:13:48.211711 kernel: pci 0000:00:01.0: supports D1 D2 Jul 7 00:13:48.211769 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.211833 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.211893 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Jul 7 00:13:48.211968 kernel: pci 0000:00:02.0: supports D1 D2 Jul 7 00:13:48.212030 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.212096 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.212158 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Jul 7 00:13:48.212217 kernel: pci 0000:00:03.0: supports D1 D2 Jul 7 00:13:48.212273 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.212337 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.212394 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Jul 7 00:13:48.212454 kernel: pci 0000:00:04.0: supports D1 D2 Jul 7 00:13:48.212510 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.212520 kernel: acpiphp: Slot [1-1] registered Jul 7 00:13:48.212527 kernel: acpiphp: Slot [2-1] registered Jul 7 00:13:48.212534 kernel: acpiphp: Slot [3-1] registered Jul 7 00:13:48.212541 kernel: acpiphp: Slot [4-1] registered Jul 7 00:13:48.212591 kernel: pci_bus 0000:00: on NUMA node 0 Jul 7 
00:13:48.212649 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 00:13:48.212708 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 00:13:48.212781 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 7 00:13:48.212840 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 00:13:48.212898 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 00:13:48.212961 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 00:13:48.213019 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 00:13:48.213077 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 00:13:48.213137 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 7 00:13:48.213195 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 00:13:48.213252 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 00:13:48.213309 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 00:13:48.213366 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff]: assigned Jul 7 00:13:48.213423 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref]: assigned Jul 7 00:13:48.213482 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff]: assigned Jul 7 00:13:48.213539 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref]: assigned Jul 7 00:13:48.213596 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff]: assigned Jul 7 00:13:48.213652 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref]: assigned Jul 7 00:13:48.213709 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff]: assigned Jul 7 00:13:48.213766 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref]: assigned Jul 7 00:13:48.213822 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.213879 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.213941 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.213998 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.214055 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.214114 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.214172 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.214229 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.214286 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.214343 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.214402 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.214459 kernel: 
pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.214517 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.214575 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.214632 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.214691 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.214748 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 7 00:13:48.214806 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] Jul 7 00:13:48.214863 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Jul 7 00:13:48.214920 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Jul 7 00:13:48.214980 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] Jul 7 00:13:48.215038 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Jul 7 00:13:48.215097 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Jul 7 00:13:48.215154 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] Jul 7 00:13:48.215211 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Jul 7 00:13:48.215268 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Jul 7 00:13:48.215326 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] Jul 7 00:13:48.215383 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Jul 7 00:13:48.215437 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] Jul 7 00:13:48.215488 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] Jul 7 00:13:48.215550 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] Jul 7 00:13:48.215603 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Jul 7 00:13:48.215663 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] Jul 7 00:13:48.215717 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Jul 7 00:13:48.215784 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] Jul 7 00:13:48.215839 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Jul 7 00:13:48.215899 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] Jul 7 00:13:48.215956 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Jul 7 00:13:48.215966 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) Jul 7 00:13:48.216028 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 00:13:48.216084 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 00:13:48.216141 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] Jul 7 00:13:48.216195 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 00:13:48.216250 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 Jul 7 00:13:48.216304 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] Jul 7 00:13:48.216313 kernel: PCI host bridge to bus 0005:00 Jul 7 00:13:48.216373 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] Jul 7 00:13:48.216424 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] Jul 7 00:13:48.216476 
kernel: pci_bus 0005:00: root bus resource [bus 00-ff] Jul 7 00:13:48.216540 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 7 00:13:48.216605 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.216662 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Jul 7 00:13:48.216719 kernel: pci 0005:00:01.0: supports D1 D2 Jul 7 00:13:48.216775 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.216838 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.216898 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Jul 7 00:13:48.216960 kernel: pci 0005:00:03.0: supports D1 D2 Jul 7 00:13:48.217017 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.217080 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.217138 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Jul 7 00:13:48.217195 kernel: pci 0005:00:05.0: bridge window [mem 0x30100000-0x301fffff] Jul 7 00:13:48.217252 kernel: pci 0005:00:05.0: supports D1 D2 Jul 7 00:13:48.217311 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.217374 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.217431 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Jul 7 00:13:48.217488 kernel: pci 0005:00:07.0: bridge window [mem 0x30000000-0x300fffff] Jul 7 00:13:48.217544 kernel: pci 0005:00:07.0: supports D1 D2 Jul 7 00:13:48.217600 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.217610 kernel: acpiphp: Slot [1-2] registered Jul 7 00:13:48.217617 kernel: acpiphp: Slot [2-2] registered Jul 7 00:13:48.217682 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 PCIe Endpoint Jul 7 00:13:48.217745 kernel: pci 0005:03:00.0: BAR 0 [mem 0x30110000-0x30113fff 64bit] Jul 7 00:13:48.217804 kernel: pci 0005:03:00.0: ROM [mem 0x30100000-0x3010ffff pref] Jul 7 00:13:48.217869 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 PCIe Endpoint Jul 7 00:13:48.217928 kernel: pci 0005:04:00.0: BAR 0 [mem 0x30010000-0x30013fff 64bit] Jul 7 00:13:48.217991 kernel: pci 0005:04:00.0: ROM [mem 0x30000000-0x3000ffff pref] Jul 7 00:13:48.218043 kernel: pci_bus 0005:00: on NUMA node 0 Jul 7 00:13:48.218103 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 00:13:48.218160 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 00:13:48.218217 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 7 00:13:48.218275 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 00:13:48.218332 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 00:13:48.218389 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 00:13:48.218450 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 00:13:48.218509 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 00:13:48.218566 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jul 7 00:13:48.218624 kernel: pci 0005:00:07.0: bridge window [io 
0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 00:13:48.218682 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 00:13:48.218740 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 Jul 7 00:13:48.218796 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff]: assigned Jul 7 00:13:48.218855 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref]: assigned Jul 7 00:13:48.218912 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff]: assigned Jul 7 00:13:48.218973 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref]: assigned Jul 7 00:13:48.219032 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff]: assigned Jul 7 00:13:48.219089 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref]: assigned Jul 7 00:13:48.219146 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff]: assigned Jul 7 00:13:48.219203 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref]: assigned Jul 7 00:13:48.219261 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.219320 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.219377 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.219433 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.219490 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.219547 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.219604 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.219661 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.219718 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.219775 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.219833 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.219890 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.219950 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.220007 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.220064 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.220121 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.220180 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Jul 7 00:13:48.220237 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] Jul 7 00:13:48.220294 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Jul 7 00:13:48.220351 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Jul 7 00:13:48.220408 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] Jul 7 00:13:48.220465 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Jul 7 00:13:48.220525 kernel: pci 0005:03:00.0: ROM [mem 0x30400000-0x3040ffff pref]: assigned Jul 7 00:13:48.220586 kernel: pci 0005:03:00.0: BAR 0 [mem 0x30410000-0x30413fff 64bit]: assigned Jul 7 00:13:48.220642 
kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Jul 7 00:13:48.220699 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] Jul 7 00:13:48.220756 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Jul 7 00:13:48.220815 kernel: pci 0005:04:00.0: ROM [mem 0x30600000-0x3060ffff pref]: assigned Jul 7 00:13:48.220874 kernel: pci 0005:04:00.0: BAR 0 [mem 0x30610000-0x30613fff 64bit]: assigned Jul 7 00:13:48.220932 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Jul 7 00:13:48.220993 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] Jul 7 00:13:48.221052 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Jul 7 00:13:48.221104 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] Jul 7 00:13:48.221154 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] Jul 7 00:13:48.221216 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] Jul 7 00:13:48.221269 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Jul 7 00:13:48.221336 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] Jul 7 00:13:48.221389 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Jul 7 00:13:48.221451 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] Jul 7 00:13:48.221504 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Jul 7 00:13:48.221564 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] Jul 7 00:13:48.221617 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Jul 7 00:13:48.221626 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) Jul 7 00:13:48.221691 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 00:13:48.221747 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 00:13:48.221802 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] Jul 7 00:13:48.221856 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 00:13:48.221910 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 Jul 7 00:13:48.221970 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] Jul 7 00:13:48.221979 kernel: PCI host bridge to bus 0003:00 Jul 7 00:13:48.222039 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] Jul 7 00:13:48.222090 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] Jul 7 00:13:48.222140 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] Jul 7 00:13:48.222203 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 7 00:13:48.222268 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.222325 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Jul 7 00:13:48.222382 kernel: pci 0003:00:01.0: supports D1 D2 Jul 7 00:13:48.222441 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.222506 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.222563 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Jul 7 00:13:48.222620 kernel: pci 0003:00:03.0: supports D1 D2 Jul 7 00:13:48.222678 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.222741 kernel: pci 0003:00:05.0: 
[1def:e115] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.222798 kernel: pci 0003:00:05.0: PCI bridge to [bus 03] Jul 7 00:13:48.222857 kernel: pci 0003:00:05.0: bridge window [io 0x8000-0x8fff] Jul 7 00:13:48.222914 kernel: pci 0003:00:05.0: bridge window [mem 0x10000000-0x100fffff] Jul 7 00:13:48.222976 kernel: pci 0003:00:05.0: supports D1 D2 Jul 7 00:13:48.223033 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.223043 kernel: acpiphp: Slot [1-3] registered Jul 7 00:13:48.223050 kernel: acpiphp: Slot [2-3] registered Jul 7 00:13:48.223116 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 PCIe Endpoint Jul 7 00:13:48.223177 kernel: pci 0003:03:00.0: BAR 0 [mem 0x10020000-0x1003ffff] Jul 7 00:13:48.223236 kernel: pci 0003:03:00.0: BAR 2 [io 0x1fff8020-0x1fff803f] Jul 7 00:13:48.223294 kernel: pci 0003:03:00.0: BAR 3 [mem 0x10044000-0x10047fff] Jul 7 00:13:48.223353 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold Jul 7 00:13:48.223411 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x00000000-0x00003fff 64bit pref] Jul 7 00:13:48.223470 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x00000000-0x0001ffff 64bit pref]: contains BAR 0 for 8 VFs Jul 7 00:13:48.223529 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x00000000-0x00003fff 64bit pref] Jul 7 00:13:48.223587 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x00000000-0x0001ffff 64bit pref]: contains BAR 3 for 8 VFs Jul 7 00:13:48.223647 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) Jul 7 00:13:48.223713 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 PCIe Endpoint Jul 7 00:13:48.223772 kernel: pci 0003:03:00.1: BAR 0 [mem 0x10000000-0x1001ffff] Jul 7 00:13:48.223831 kernel: pci 0003:03:00.1: BAR 2 [io 0x1fff8000-0x1fff801f] Jul 7 00:13:48.223889 kernel: pci 0003:03:00.1: BAR 3 [mem 0x10040000-0x10043fff] Jul 7 00:13:48.223951 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold Jul 7 00:13:48.224011 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x00000000-0x00003fff 64bit pref] Jul 7 00:13:48.224071 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x00000000-0x0001ffff 64bit pref]: contains BAR 0 for 8 VFs Jul 7 00:13:48.224130 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x00000000-0x00003fff 64bit pref] Jul 7 00:13:48.224198 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x00000000-0x0001ffff 64bit pref]: contains BAR 3 for 8 VFs Jul 7 00:13:48.224252 kernel: pci_bus 0003:00: on NUMA node 0 Jul 7 00:13:48.224303 kernel: pci_bus 0003:00: max bus depth: 1 pci_try_num: 2 Jul 7 00:13:48.224361 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned Jul 7 00:13:48.224419 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref]: assigned Jul 7 00:13:48.224480 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff]: assigned Jul 7 00:13:48.224540 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref]: assigned Jul 7 00:13:48.224598 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff]: assigned Jul 7 00:13:48.224656 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400005fffff 64bit pref]: assigned Jul 7 00:13:48.224713 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.224771 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.224829 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: can't 
assign; no space Jul 7 00:13:48.224887 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.224949 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.225007 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.225065 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Jul 7 00:13:48.225122 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] Jul 7 00:13:48.225180 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] Jul 7 00:13:48.225237 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Jul 7 00:13:48.225295 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] Jul 7 00:13:48.225352 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] Jul 7 00:13:48.225414 kernel: pci 0003:03:00.0: BAR 0 [mem 0x10400000-0x1041ffff]: assigned Jul 7 00:13:48.225475 kernel: pci 0003:03:00.1: BAR 0 [mem 0x10420000-0x1043ffff]: assigned Jul 7 00:13:48.225534 kernel: pci 0003:03:00.0: BAR 3 [mem 0x10440000-0x10443fff]: assigned Jul 7 00:13:48.225593 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x240000400000-0x24000041ffff 64bit pref]: assigned Jul 7 00:13:48.225652 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x240000420000-0x24000043ffff 64bit pref]: assigned Jul 7 00:13:48.225711 kernel: pci 0003:03:00.1: BAR 3 [mem 0x10444000-0x10447fff]: assigned Jul 7 00:13:48.225772 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x240000440000-0x24000045ffff 64bit pref]: assigned Jul 7 00:13:48.225832 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x240000460000-0x24000047ffff 64bit pref]: assigned Jul 7 00:13:48.225890 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: can't assign; no space Jul 7 00:13:48.225952 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: failed to assign Jul 7 00:13:48.226011 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: can't assign; no space Jul 7 00:13:48.226070 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: failed to assign Jul 7 00:13:48.226128 kernel: pci 0003:00:05.0: PCI bridge to [bus 03] Jul 7 00:13:48.226187 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] Jul 7 00:13:48.226244 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400005fffff 64bit pref] Jul 7 00:13:48.226296 kernel: pci_bus 0003:00: No. 
2 try to assign unassigned res Jul 7 00:13:48.226355 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.226413 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.226470 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.226528 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.226585 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.226645 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.226702 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Jul 7 00:13:48.226760 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] Jul 7 00:13:48.226816 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] Jul 7 00:13:48.226873 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Jul 7 00:13:48.226931 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] Jul 7 00:13:48.226992 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] Jul 7 00:13:48.227053 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: can't assign; no space Jul 7 00:13:48.227112 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: failed to assign Jul 7 00:13:48.227171 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: can't assign; no space Jul 7 00:13:48.227230 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: failed to assign Jul 7 00:13:48.227287 kernel: pci 0003:00:05.0: PCI bridge to [bus 03] Jul 7 00:13:48.227345 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] Jul 7 00:13:48.227402 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400005fffff 64bit pref] Jul 7 00:13:48.227453 kernel: pci_bus 0003:00: Automatically enabled pci realloc, if you have problem, try booting with pci=realloc=off Jul 7 00:13:48.227505 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] Jul 7 00:13:48.227557 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] Jul 7 00:13:48.227620 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] Jul 7 00:13:48.227674 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] Jul 7 00:13:48.227742 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] Jul 7 00:13:48.227796 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] Jul 7 00:13:48.227856 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] Jul 7 00:13:48.227909 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400005fffff 64bit pref] Jul 7 00:13:48.227920 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) Jul 7 00:13:48.227987 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 00:13:48.228043 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 00:13:48.228099 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] Jul 7 00:13:48.228154 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 00:13:48.228209 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 Jul 7 00:13:48.228263 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] Jul 7 00:13:48.228275 kernel: PCI host bridge to bus 000c:00 Jul 7 00:13:48.228334 
kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] Jul 7 00:13:48.228386 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] Jul 7 00:13:48.228436 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] Jul 7 00:13:48.228501 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jul 7 00:13:48.228566 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.228626 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Jul 7 00:13:48.228684 kernel: pci 000c:00:01.0: enabling Extended Tags Jul 7 00:13:48.228742 kernel: pci 000c:00:01.0: supports D1 D2 Jul 7 00:13:48.228798 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.228863 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.228921 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Jul 7 00:13:48.228984 kernel: pci 000c:00:02.0: supports D1 D2 Jul 7 00:13:48.229042 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.229107 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.229165 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Jul 7 00:13:48.229223 kernel: pci 000c:00:03.0: supports D1 D2 Jul 7 00:13:48.229280 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.229343 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.229401 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Jul 7 00:13:48.229458 kernel: pci 000c:00:04.0: supports D1 D2 Jul 7 00:13:48.229517 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.229527 kernel: acpiphp: Slot [1-4] registered Jul 7 00:13:48.229534 kernel: acpiphp: Slot [2-4] registered Jul 7 00:13:48.229542 kernel: acpiphp: Slot [3-2] registered Jul 7 00:13:48.229550 kernel: acpiphp: Slot [4-2] registered Jul 7 00:13:48.229599 kernel: pci_bus 000c:00: on NUMA node 0 Jul 7 00:13:48.229658 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 00:13:48.229716 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 00:13:48.229775 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 7 00:13:48.229833 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 00:13:48.229890 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 00:13:48.229953 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 00:13:48.230012 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 00:13:48.230070 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 00:13:48.230128 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 7 00:13:48.230188 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 00:13:48.230246 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 00:13:48.230303 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 
00:13:48.230361 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff]: assigned Jul 7 00:13:48.230419 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref]: assigned Jul 7 00:13:48.230477 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff]: assigned Jul 7 00:13:48.230535 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref]: assigned Jul 7 00:13:48.230595 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff]: assigned Jul 7 00:13:48.230653 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref]: assigned Jul 7 00:13:48.230710 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff]: assigned Jul 7 00:13:48.230768 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref]: assigned Jul 7 00:13:48.230825 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.230882 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.230943 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.231001 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.231060 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.231117 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.231175 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.231232 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.231291 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.231349 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.231406 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.231463 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.231522 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.231579 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.231636 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.231694 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.231751 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Jul 7 00:13:48.231808 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] Jul 7 00:13:48.231866 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] Jul 7 00:13:48.231924 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Jul 7 00:13:48.231986 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] Jul 7 00:13:48.232044 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] Jul 7 00:13:48.232101 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Jul 7 00:13:48.232159 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] Jul 7 00:13:48.232218 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] Jul 7 00:13:48.232276 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Jul 7 00:13:48.232333 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] Jul 7 00:13:48.232391 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] Jul 
7 00:13:48.232444 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] Jul 7 00:13:48.232495 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] Jul 7 00:13:48.232557 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] Jul 7 00:13:48.232612 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] Jul 7 00:13:48.232673 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] Jul 7 00:13:48.232726 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] Jul 7 00:13:48.232794 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] Jul 7 00:13:48.232848 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] Jul 7 00:13:48.232907 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] Jul 7 00:13:48.232967 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] Jul 7 00:13:48.232977 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) Jul 7 00:13:48.233040 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 00:13:48.233097 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 00:13:48.233151 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] Jul 7 00:13:48.233206 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 00:13:48.233261 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 Jul 7 00:13:48.233318 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] Jul 7 00:13:48.233328 kernel: PCI host bridge to bus 0002:00 Jul 7 00:13:48.233387 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] Jul 7 00:13:48.233439 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] Jul 7 00:13:48.233489 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] Jul 7 00:13:48.233553 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 7 00:13:48.233621 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.233679 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Jul 7 00:13:48.233737 kernel: pci 0002:00:01.0: supports D1 D2 Jul 7 00:13:48.233795 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.233859 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.233918 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Jul 7 00:13:48.233979 kernel: pci 0002:00:03.0: supports D1 D2 Jul 7 00:13:48.234039 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.234103 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.234161 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Jul 7 00:13:48.234218 kernel: pci 0002:00:05.0: supports D1 D2 Jul 7 00:13:48.234275 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.234339 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.234398 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Jul 7 00:13:48.234457 kernel: pci 0002:00:07.0: supports D1 D2 Jul 7 00:13:48.234515 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.234524 kernel: acpiphp: Slot [1-5] registered Jul 7 00:13:48.234532 kernel: acpiphp: Slot [2-5] registered Jul 7 
00:13:48.234540 kernel: acpiphp: Slot [3-3] registered Jul 7 00:13:48.234549 kernel: acpiphp: Slot [4-3] registered Jul 7 00:13:48.234599 kernel: pci_bus 0002:00: on NUMA node 0 Jul 7 00:13:48.234657 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 00:13:48.234714 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 00:13:48.234774 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 7 00:13:48.234832 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 00:13:48.234889 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 00:13:48.234950 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 00:13:48.235008 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 00:13:48.235066 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 00:13:48.235126 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 7 00:13:48.235183 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 00:13:48.235241 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 00:13:48.235299 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 00:13:48.235357 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff]: assigned Jul 7 00:13:48.235415 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref]: assigned Jul 7 00:13:48.235473 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff]: assigned Jul 7 00:13:48.235530 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref]: assigned Jul 7 00:13:48.235591 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff]: assigned Jul 7 00:13:48.235650 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref]: assigned Jul 7 00:13:48.235708 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff]: assigned Jul 7 00:13:48.235766 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref]: assigned Jul 7 00:13:48.235823 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.235880 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.235941 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.236001 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.236059 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.236116 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.236174 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.236232 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.236290 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.236348 kernel: pci 
0002:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.236405 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.236462 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.236521 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.236578 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.236635 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.236692 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.236750 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Jul 7 00:13:48.236807 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] Jul 7 00:13:48.236866 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] Jul 7 00:13:48.236923 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Jul 7 00:13:48.236992 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] Jul 7 00:13:48.237050 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] Jul 7 00:13:48.237107 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Jul 7 00:13:48.237165 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] Jul 7 00:13:48.237222 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] Jul 7 00:13:48.237279 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Jul 7 00:13:48.237339 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] Jul 7 00:13:48.237397 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] Jul 7 00:13:48.237449 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] Jul 7 00:13:48.237500 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] Jul 7 00:13:48.237560 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] Jul 7 00:13:48.237614 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] Jul 7 00:13:48.237674 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] Jul 7 00:13:48.237729 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] Jul 7 00:13:48.237789 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] Jul 7 00:13:48.237842 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] Jul 7 00:13:48.237908 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] Jul 7 00:13:48.237976 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] Jul 7 00:13:48.238001 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) Jul 7 00:13:48.238072 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 00:13:48.238129 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 00:13:48.238184 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] Jul 7 00:13:48.238239 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 00:13:48.238294 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 Jul 7 00:13:48.238348 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] Jul 7 00:13:48.238360 kernel: PCI host bridge to bus 0001:00 Jul 7 00:13:48.238418 kernel: 
pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] Jul 7 00:13:48.238469 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] Jul 7 00:13:48.238520 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] Jul 7 00:13:48.238584 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jul 7 00:13:48.238649 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.238710 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Jul 7 00:13:48.238768 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Jul 7 00:13:48.238826 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] Jul 7 00:13:48.238884 kernel: pci 0001:00:01.0: enabling Extended Tags Jul 7 00:13:48.238946 kernel: pci 0001:00:01.0: supports D1 D2 Jul 7 00:13:48.239005 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.239071 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.239132 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Jul 7 00:13:48.239190 kernel: pci 0001:00:02.0: supports D1 D2 Jul 7 00:13:48.239246 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.239312 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.239370 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Jul 7 00:13:48.239428 kernel: pci 0001:00:03.0: supports D1 D2 Jul 7 00:13:48.239485 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.239548 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.239608 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Jul 7 00:13:48.239665 kernel: pci 0001:00:04.0: supports D1 D2 Jul 7 00:13:48.239722 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.239732 kernel: acpiphp: Slot [1-6] registered Jul 7 00:13:48.239795 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 7 00:13:48.239854 kernel: pci 0001:01:00.0: BAR 0 [mem 0x380002000000-0x380003ffffff 64bit pref] Jul 7 00:13:48.239914 kernel: pci 0001:01:00.0: ROM [mem 0x60100000-0x601fffff pref] Jul 7 00:13:48.239979 kernel: pci 0001:01:00.0: PME# supported from D3cold Jul 7 00:13:48.240038 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 7 00:13:48.240103 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 7 00:13:48.240163 kernel: pci 0001:01:00.1: BAR 0 [mem 0x380000000000-0x380001ffffff 64bit pref] Jul 7 00:13:48.240222 kernel: pci 0001:01:00.1: ROM [mem 0x60000000-0x600fffff pref] Jul 7 00:13:48.240280 kernel: pci 0001:01:00.1: PME# supported from D3cold Jul 7 00:13:48.240290 kernel: acpiphp: Slot [2-6] registered Jul 7 00:13:48.240299 kernel: acpiphp: Slot [3-4] registered Jul 7 00:13:48.240307 kernel: acpiphp: Slot [4-4] registered Jul 7 00:13:48.240358 kernel: pci_bus 0001:00: on NUMA node 0 Jul 7 00:13:48.240415 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 00:13:48.240473 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 00:13:48.240531 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 00:13:48.240588 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] 
add_size 200000 add_align 100000 Jul 7 00:13:48.240646 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 00:13:48.240705 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 00:13:48.240762 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 7 00:13:48.240820 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 00:13:48.240878 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 00:13:48.240940 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 00:13:48.240999 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref]: assigned Jul 7 00:13:48.241056 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff]: assigned Jul 7 00:13:48.241116 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff]: assigned Jul 7 00:13:48.241173 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref]: assigned Jul 7 00:13:48.241231 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff]: assigned Jul 7 00:13:48.241288 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref]: assigned Jul 7 00:13:48.241347 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff]: assigned Jul 7 00:13:48.241404 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref]: assigned Jul 7 00:13:48.241462 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.241520 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.241579 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.241637 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.241694 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.241751 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.241809 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.241866 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.241924 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.241984 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.242044 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.242101 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.242158 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.242215 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.242273 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.242329 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.242389 kernel: pci 0001:01:00.0: BAR 0 [mem 0x380000000000-0x380001ffffff 64bit pref]: assigned Jul 7 00:13:48.242449 kernel: pci 0001:01:00.1: BAR 0 [mem 0x380002000000-0x380003ffffff 64bit pref]: assigned Jul 7 00:13:48.242510 kernel: pci 0001:01:00.0: 
ROM [mem 0x60000000-0x600fffff pref]: assigned Jul 7 00:13:48.242569 kernel: pci 0001:01:00.1: ROM [mem 0x60100000-0x601fffff pref]: assigned Jul 7 00:13:48.242627 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Jul 7 00:13:48.242684 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Jul 7 00:13:48.242741 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] Jul 7 00:13:48.242799 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Jul 7 00:13:48.242856 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] Jul 7 00:13:48.242915 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] Jul 7 00:13:48.242976 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Jul 7 00:13:48.243034 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] Jul 7 00:13:48.243091 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] Jul 7 00:13:48.243148 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Jul 7 00:13:48.243205 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] Jul 7 00:13:48.243264 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] Jul 7 00:13:48.243317 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window] Jul 7 00:13:48.243368 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] Jul 7 00:13:48.243430 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] Jul 7 00:13:48.243484 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] Jul 7 00:13:48.243549 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] Jul 7 00:13:48.243603 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] Jul 7 00:13:48.243664 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] Jul 7 00:13:48.243717 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] Jul 7 00:13:48.243777 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] Jul 7 00:13:48.243830 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] Jul 7 00:13:48.243840 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) Jul 7 00:13:48.243902 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 00:13:48.243963 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 00:13:48.244019 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] Jul 7 00:13:48.244074 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 00:13:48.244129 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 Jul 7 00:13:48.244183 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] Jul 7 00:13:48.244193 kernel: PCI host bridge to bus 0004:00 Jul 7 00:13:48.244251 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] Jul 7 00:13:48.244305 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] Jul 7 00:13:48.244355 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] Jul 7 00:13:48.244418 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 7 00:13:48.244483 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.244542 kernel: pci 0004:00:01.0: PCI bridge to 
[bus 01-02] Jul 7 00:13:48.244600 kernel: pci 0004:00:01.0: bridge window [io 0x8000-0x8fff] Jul 7 00:13:48.244657 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x220fffff] Jul 7 00:13:48.244715 kernel: pci 0004:00:01.0: supports D1 D2 Jul 7 00:13:48.244772 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.244837 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.244895 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Jul 7 00:13:48.244956 kernel: pci 0004:00:03.0: bridge window [mem 0x22200000-0x222fffff] Jul 7 00:13:48.245016 kernel: pci 0004:00:03.0: supports D1 D2 Jul 7 00:13:48.245073 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.245138 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jul 7 00:13:48.245196 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Jul 7 00:13:48.245253 kernel: pci 0004:00:05.0: supports D1 D2 Jul 7 00:13:48.245311 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot Jul 7 00:13:48.245377 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jul 7 00:13:48.245437 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Jul 7 00:13:48.245496 kernel: pci 0004:01:00.0: bridge window [io 0x2fff8000-0x2fff8fff] Jul 7 00:13:48.245556 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x220fffff] Jul 7 00:13:48.245615 kernel: pci 0004:01:00.0: enabling Extended Tags Jul 7 00:13:48.245673 kernel: pci 0004:01:00.0: supports D1 D2 Jul 7 00:13:48.245733 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 00:13:48.245796 kernel: pci_bus 0004:02: extended config space not accessible Jul 7 00:13:48.245866 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 conventional PCI endpoint Jul 7 00:13:48.245928 kernel: pci 0004:02:00.0: BAR 0 [mem 0x20000000-0x21ffffff] Jul 7 00:13:48.245995 kernel: pci 0004:02:00.0: BAR 1 [mem 0x22000000-0x2201ffff] Jul 7 00:13:48.246057 kernel: pci 0004:02:00.0: BAR 2 [io 0x2fff8000-0x2fff807f] Jul 7 00:13:48.246117 kernel: pci 0004:02:00.0: supports D1 D2 Jul 7 00:13:48.246178 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 00:13:48.246252 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 PCIe Endpoint Jul 7 00:13:48.246316 kernel: pci 0004:03:00.0: BAR 0 [mem 0x22200000-0x22201fff 64bit] Jul 7 00:13:48.246375 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold Jul 7 00:13:48.246429 kernel: pci_bus 0004:00: on NUMA node 0 Jul 7 00:13:48.246488 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 Jul 7 00:13:48.246546 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 00:13:48.246605 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 00:13:48.246663 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jul 7 00:13:48.246721 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 00:13:48.246779 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 00:13:48.246839 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 00:13:48.246896 kernel: pci 0004:00:01.0: bridge window [mem 
0x20000000-0x22ffffff]: assigned Jul 7 00:13:48.246958 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref]: assigned Jul 7 00:13:48.247016 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff]: assigned Jul 7 00:13:48.247073 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref]: assigned Jul 7 00:13:48.247133 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff]: assigned Jul 7 00:13:48.247191 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref]: assigned Jul 7 00:13:48.247249 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.247309 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.247366 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.247424 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.247481 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.247538 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.247596 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.247653 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.247710 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.247769 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.247826 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.247883 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.247946 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff]: assigned Jul 7 00:13:48.248006 kernel: pci 0004:01:00.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 00:13:48.248065 kernel: pci 0004:01:00.0: bridge window [io size 0x1000]: failed to assign Jul 7 00:13:48.248126 kernel: pci 0004:02:00.0: BAR 0 [mem 0x20000000-0x21ffffff]: assigned Jul 7 00:13:48.248188 kernel: pci 0004:02:00.0: BAR 1 [mem 0x22000000-0x2201ffff]: assigned Jul 7 00:13:48.248251 kernel: pci 0004:02:00.0: BAR 2 [io size 0x0080]: can't assign; no space Jul 7 00:13:48.248312 kernel: pci 0004:02:00.0: BAR 2 [io size 0x0080]: failed to assign Jul 7 00:13:48.248371 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Jul 7 00:13:48.248430 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] Jul 7 00:13:48.248487 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Jul 7 00:13:48.248545 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] Jul 7 00:13:48.248603 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] Jul 7 00:13:48.248664 kernel: pci 0004:03:00.0: BAR 0 [mem 0x23000000-0x23001fff 64bit]: assigned Jul 7 00:13:48.248722 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Jul 7 00:13:48.248780 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] Jul 7 00:13:48.248838 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] Jul 7 00:13:48.248896 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Jul 7 00:13:48.248958 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] Jul 7 00:13:48.249016 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] Jul 7 00:13:48.249070 
kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 7 00:13:48.249121 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] Jul 7 00:13:48.249173 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] Jul 7 00:13:48.249234 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] Jul 7 00:13:48.249289 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] Jul 7 00:13:48.249346 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] Jul 7 00:13:48.249406 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] Jul 7 00:13:48.249462 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] Jul 7 00:13:48.249522 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] Jul 7 00:13:48.249575 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] Jul 7 00:13:48.249585 kernel: ACPI: CPU18 has been hot-added Jul 7 00:13:48.249592 kernel: ACPI: CPU58 has been hot-added Jul 7 00:13:48.249600 kernel: ACPI: CPU38 has been hot-added Jul 7 00:13:48.249608 kernel: ACPI: CPU78 has been hot-added Jul 7 00:13:48.249617 kernel: ACPI: CPU16 has been hot-added Jul 7 00:13:48.249625 kernel: ACPI: CPU56 has been hot-added Jul 7 00:13:48.249633 kernel: ACPI: CPU36 has been hot-added Jul 7 00:13:48.249641 kernel: ACPI: CPU76 has been hot-added Jul 7 00:13:48.249648 kernel: ACPI: CPU17 has been hot-added Jul 7 00:13:48.249656 kernel: ACPI: CPU57 has been hot-added Jul 7 00:13:48.249663 kernel: ACPI: CPU37 has been hot-added Jul 7 00:13:48.249671 kernel: ACPI: CPU77 has been hot-added Jul 7 00:13:48.249678 kernel: ACPI: CPU19 has been hot-added Jul 7 00:13:48.249686 kernel: ACPI: CPU59 has been hot-added Jul 7 00:13:48.249695 kernel: ACPI: CPU39 has been hot-added Jul 7 00:13:48.249702 kernel: ACPI: CPU79 has been hot-added Jul 7 00:13:48.249710 kernel: ACPI: CPU12 has been hot-added Jul 7 00:13:48.249717 kernel: ACPI: CPU52 has been hot-added Jul 7 00:13:48.249725 kernel: ACPI: CPU32 has been hot-added Jul 7 00:13:48.249732 kernel: ACPI: CPU72 has been hot-added Jul 7 00:13:48.249740 kernel: ACPI: CPU8 has been hot-added Jul 7 00:13:48.249750 kernel: ACPI: CPU48 has been hot-added Jul 7 00:13:48.249758 kernel: ACPI: CPU28 has been hot-added Jul 7 00:13:48.249765 kernel: ACPI: CPU68 has been hot-added Jul 7 00:13:48.249775 kernel: ACPI: CPU10 has been hot-added Jul 7 00:13:48.249782 kernel: ACPI: CPU50 has been hot-added Jul 7 00:13:48.249791 kernel: ACPI: CPU30 has been hot-added Jul 7 00:13:48.249799 kernel: ACPI: CPU70 has been hot-added Jul 7 00:13:48.249806 kernel: ACPI: CPU14 has been hot-added Jul 7 00:13:48.249814 kernel: ACPI: CPU54 has been hot-added Jul 7 00:13:48.249822 kernel: ACPI: CPU34 has been hot-added Jul 7 00:13:48.249829 kernel: ACPI: CPU74 has been hot-added Jul 7 00:13:48.249837 kernel: ACPI: CPU4 has been hot-added Jul 7 00:13:48.249846 kernel: ACPI: CPU44 has been hot-added Jul 7 00:13:48.249854 kernel: ACPI: CPU24 has been hot-added Jul 7 00:13:48.249861 kernel: ACPI: CPU64 has been hot-added Jul 7 00:13:48.249869 kernel: ACPI: CPU0 has been hot-added Jul 7 00:13:48.249877 kernel: ACPI: CPU40 has been hot-added Jul 7 00:13:48.249884 kernel: ACPI: CPU20 has been hot-added Jul 7 00:13:48.249892 kernel: ACPI: CPU60 has been hot-added Jul 7 00:13:48.249900 kernel: ACPI: CPU2 has been hot-added Jul 7 00:13:48.249907 kernel: ACPI: CPU42 has been hot-added Jul 7 00:13:48.249916 kernel: ACPI: CPU22 has 
been hot-added Jul 7 00:13:48.249923 kernel: ACPI: CPU62 has been hot-added Jul 7 00:13:48.249931 kernel: ACPI: CPU6 has been hot-added Jul 7 00:13:48.249942 kernel: ACPI: CPU46 has been hot-added Jul 7 00:13:48.249950 kernel: ACPI: CPU26 has been hot-added Jul 7 00:13:48.249958 kernel: ACPI: CPU66 has been hot-added Jul 7 00:13:48.249965 kernel: ACPI: CPU5 has been hot-added Jul 7 00:13:48.249973 kernel: ACPI: CPU45 has been hot-added Jul 7 00:13:48.249980 kernel: ACPI: CPU25 has been hot-added Jul 7 00:13:48.249988 kernel: ACPI: CPU65 has been hot-added Jul 7 00:13:48.249998 kernel: ACPI: CPU1 has been hot-added Jul 7 00:13:48.250006 kernel: ACPI: CPU41 has been hot-added Jul 7 00:13:48.250013 kernel: ACPI: CPU21 has been hot-added Jul 7 00:13:48.250021 kernel: ACPI: CPU61 has been hot-added Jul 7 00:13:48.250029 kernel: ACPI: CPU3 has been hot-added Jul 7 00:13:48.250036 kernel: ACPI: CPU43 has been hot-added Jul 7 00:13:48.250043 kernel: ACPI: CPU23 has been hot-added Jul 7 00:13:48.250051 kernel: ACPI: CPU63 has been hot-added Jul 7 00:13:48.250059 kernel: ACPI: CPU7 has been hot-added Jul 7 00:13:48.250067 kernel: ACPI: CPU47 has been hot-added Jul 7 00:13:48.250075 kernel: ACPI: CPU27 has been hot-added Jul 7 00:13:48.250083 kernel: ACPI: CPU67 has been hot-added Jul 7 00:13:48.250090 kernel: ACPI: CPU13 has been hot-added Jul 7 00:13:48.250099 kernel: ACPI: CPU53 has been hot-added Jul 7 00:13:48.250107 kernel: ACPI: CPU33 has been hot-added Jul 7 00:13:48.250115 kernel: ACPI: CPU73 has been hot-added Jul 7 00:13:48.250122 kernel: ACPI: CPU9 has been hot-added Jul 7 00:13:48.250130 kernel: ACPI: CPU49 has been hot-added Jul 7 00:13:48.250137 kernel: ACPI: CPU29 has been hot-added Jul 7 00:13:48.250146 kernel: ACPI: CPU69 has been hot-added Jul 7 00:13:48.250154 kernel: ACPI: CPU11 has been hot-added Jul 7 00:13:48.250162 kernel: ACPI: CPU51 has been hot-added Jul 7 00:13:48.250169 kernel: ACPI: CPU31 has been hot-added Jul 7 00:13:48.250177 kernel: ACPI: CPU71 has been hot-added Jul 7 00:13:48.250184 kernel: ACPI: CPU15 has been hot-added Jul 7 00:13:48.250192 kernel: ACPI: CPU55 has been hot-added Jul 7 00:13:48.250199 kernel: ACPI: CPU35 has been hot-added Jul 7 00:13:48.250207 kernel: ACPI: CPU75 has been hot-added Jul 7 00:13:48.250216 kernel: iommu: Default domain type: Translated Jul 7 00:13:48.250223 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 7 00:13:48.250231 kernel: efivars: Registered efivars operations Jul 7 00:13:48.250299 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device Jul 7 00:13:48.250362 kernel: pci 0004:02:00.0: vgaarb: bridge control possible Jul 7 00:13:48.250423 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none Jul 7 00:13:48.250433 kernel: vgaarb: loaded Jul 7 00:13:48.250440 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 7 00:13:48.250448 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 00:13:48.250457 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 00:13:48.250465 kernel: pnp: PnP ACPI init Jul 7 00:13:48.250528 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved Jul 7 00:13:48.250582 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved Jul 7 00:13:48.250635 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved Jul 7 00:13:48.250687 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved Jul 7 00:13:48.250739 
kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved Jul 7 00:13:48.250793 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved Jul 7 00:13:48.250845 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved Jul 7 00:13:48.250898 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved Jul 7 00:13:48.250907 kernel: pnp: PnP ACPI: found 1 devices Jul 7 00:13:48.250915 kernel: NET: Registered PF_INET protocol family Jul 7 00:13:48.250923 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 00:13:48.250931 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) Jul 7 00:13:48.250942 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 00:13:48.250952 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 00:13:48.250960 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 7 00:13:48.250967 kernel: TCP: Hash tables configured (established 524288 bind 65536) Jul 7 00:13:48.250975 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 7 00:13:48.250983 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 7 00:13:48.250991 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 00:13:48.251051 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes Jul 7 00:13:48.251062 kernel: kvm [1]: nv: 554 coarse grained trap handlers Jul 7 00:13:48.251070 kernel: kvm [1]: IPA Size Limit: 48 bits Jul 7 00:13:48.251079 kernel: kvm [1]: GICv3: no GICV resource entry Jul 7 00:13:48.251086 kernel: kvm [1]: disabling GICv2 emulation Jul 7 00:13:48.251094 kernel: kvm [1]: GIC system register CPU interface enabled Jul 7 00:13:48.251102 kernel: kvm [1]: vgic interrupt IRQ9 Jul 7 00:13:48.251109 kernel: kvm [1]: VHE mode initialized successfully Jul 7 00:13:48.251117 kernel: Initialise system trusted keyrings Jul 7 00:13:48.251124 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 Jul 7 00:13:48.251132 kernel: Key type asymmetric registered Jul 7 00:13:48.251139 kernel: Asymmetric key parser 'x509' registered Jul 7 00:13:48.251148 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 7 00:13:48.251156 kernel: io scheduler mq-deadline registered Jul 7 00:13:48.251163 kernel: io scheduler kyber registered Jul 7 00:13:48.251171 kernel: io scheduler bfq registered Jul 7 00:13:48.251180 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 7 00:13:48.251188 kernel: ACPI: button: Power Button [PWRB] Jul 7 00:13:48.251196 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). 
Jul 7 00:13:48.251203 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 00:13:48.251270 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 Jul 7 00:13:48.251328 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 00:13:48.251383 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 00:13:48.251438 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq Jul 7 00:13:48.251491 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq Jul 7 00:13:48.251545 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq Jul 7 00:13:48.251607 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 Jul 7 00:13:48.251662 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 00:13:48.251716 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 00:13:48.251770 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq Jul 7 00:13:48.251823 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq Jul 7 00:13:48.251877 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq Jul 7 00:13:48.251943 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 Jul 7 00:13:48.251998 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 00:13:48.252053 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 00:13:48.252108 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq Jul 7 00:13:48.252161 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq Jul 7 00:13:48.252214 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq Jul 7 00:13:48.252274 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 Jul 7 00:13:48.252328 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 00:13:48.252384 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 00:13:48.252437 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq Jul 7 00:13:48.252505 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq Jul 7 00:13:48.252559 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq Jul 7 00:13:48.252620 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 Jul 7 00:13:48.252674 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 00:13:48.252728 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 00:13:48.252786 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq Jul 7 00:13:48.252840 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq Jul 7 00:13:48.252893 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq Jul 7 00:13:48.252959 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 Jul 7 00:13:48.253015 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 00:13:48.253068 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 00:13:48.253124 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq Jul 7 00:13:48.253180 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 131072 entries for evtq Jul 7 00:13:48.253234 kernel: arm-smmu-v3 
arm-smmu-v3.5.auto: allocated 262144 entries for priq Jul 7 00:13:48.253302 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 Jul 7 00:13:48.253356 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 00:13:48.253410 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 00:13:48.253464 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq Jul 7 00:13:48.253519 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq Jul 7 00:13:48.253572 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq Jul 7 00:13:48.253632 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 Jul 7 00:13:48.253686 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 00:13:48.253739 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 00:13:48.253792 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq Jul 7 00:13:48.253847 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq Jul 7 00:13:48.253900 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq Jul 7 00:13:48.253910 kernel: thunder_xcv, ver 1.0 Jul 7 00:13:48.253918 kernel: thunder_bgx, ver 1.0 Jul 7 00:13:48.253925 kernel: nicpf, ver 1.0 Jul 7 00:13:48.253936 kernel: nicvf, ver 1.0 Jul 7 00:13:48.253997 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 7 00:13:48.254052 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T00:13:46 UTC (1751847226) Jul 7 00:13:48.254064 kernel: efifb: probing for efifb Jul 7 00:13:48.254072 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k Jul 7 00:13:48.254080 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jul 7 00:13:48.254089 kernel: efifb: scrolling: redraw Jul 7 00:13:48.254097 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 7 00:13:48.254105 kernel: Console: switching to colour frame buffer device 100x37 Jul 7 00:13:48.254112 kernel: fb0: EFI VGA frame buffer device Jul 7 00:13:48.254120 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 Jul 7 00:13:48.254128 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 00:13:48.254137 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jul 7 00:13:48.254145 kernel: watchdog: NMI not fully supported Jul 7 00:13:48.254153 kernel: NET: Registered PF_INET6 protocol family Jul 7 00:13:48.254160 kernel: watchdog: Hard watchdog permanently disabled Jul 7 00:13:48.254168 kernel: Segment Routing with IPv6 Jul 7 00:13:48.254176 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 00:13:48.254183 kernel: NET: Registered PF_PACKET protocol family Jul 7 00:13:48.254191 kernel: Key type dns_resolver registered Jul 7 00:13:48.254198 kernel: registered taskstats version 1 Jul 7 00:13:48.254207 kernel: Loading compiled-in X.509 certificates Jul 7 00:13:48.254215 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 90fb300ebe1fa0773739bb35dad461c5679d8dfb' Jul 7 00:13:48.254223 kernel: Demotion targets for Node 0: null Jul 7 00:13:48.254231 kernel: Key type .fscrypt registered Jul 7 00:13:48.254238 kernel: Key type fscrypt-provisioning registered Jul 7 00:13:48.254246 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 7 00:13:48.254253 kernel: ima: Allocated hash algorithm: sha1 Jul 7 00:13:48.254261 kernel: ima: No architecture policies found Jul 7 00:13:48.254269 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 7 00:13:48.254330 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 Jul 7 00:13:48.254389 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 Jul 7 00:13:48.254449 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 Jul 7 00:13:48.254508 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 Jul 7 00:13:48.254567 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 Jul 7 00:13:48.254625 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 Jul 7 00:13:48.254684 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 Jul 7 00:13:48.254743 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 Jul 7 00:13:48.254805 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 Jul 7 00:13:48.254863 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 Jul 7 00:13:48.254922 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 Jul 7 00:13:48.254984 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 Jul 7 00:13:48.255044 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 Jul 7 00:13:48.255102 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 Jul 7 00:13:48.255160 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 Jul 7 00:13:48.255218 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 Jul 7 00:13:48.255278 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 Jul 7 00:13:48.255338 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 Jul 7 00:13:48.255397 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 Jul 7 00:13:48.255456 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 Jul 7 00:13:48.255515 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 Jul 7 00:13:48.255573 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 Jul 7 00:13:48.255632 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 Jul 7 00:13:48.255689 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 Jul 7 00:13:48.255749 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 Jul 7 00:13:48.255809 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 Jul 7 00:13:48.255868 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 Jul 7 00:13:48.255926 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 Jul 7 00:13:48.255988 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 Jul 7 00:13:48.256047 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 Jul 7 00:13:48.256106 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 Jul 7 00:13:48.256164 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 Jul 7 00:13:48.256223 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 Jul 7 00:13:48.256283 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 Jul 7 00:13:48.256342 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 Jul 7 00:13:48.256400 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 Jul 7 00:13:48.256458 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 Jul 7 00:13:48.256518 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 Jul 7 00:13:48.256578 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 Jul 7 00:13:48.256636 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 Jul 7 00:13:48.256695 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 Jul 7 00:13:48.256755 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 Jul 7 00:13:48.256813 kernel: pcieport 0002:00:05.0: Adding to iommu group 
21 Jul 7 00:13:48.256871 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 Jul 7 00:13:48.256929 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 Jul 7 00:13:48.256993 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 Jul 7 00:13:48.257052 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 Jul 7 00:13:48.257110 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 Jul 7 00:13:48.257169 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 Jul 7 00:13:48.257228 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 Jul 7 00:13:48.257289 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 Jul 7 00:13:48.257347 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 Jul 7 00:13:48.257406 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 Jul 7 00:13:48.257464 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 Jul 7 00:13:48.257522 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 Jul 7 00:13:48.257580 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 Jul 7 00:13:48.257638 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 Jul 7 00:13:48.257696 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 Jul 7 00:13:48.257756 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 Jul 7 00:13:48.257814 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 Jul 7 00:13:48.257875 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 Jul 7 00:13:48.257885 kernel: clk: Disabling unused clocks Jul 7 00:13:48.257893 kernel: PM: genpd: Disabling unused power domains Jul 7 00:13:48.257900 kernel: Warning: unable to open an initial console. Jul 7 00:13:48.257908 kernel: Freeing unused kernel memory: 39424K Jul 7 00:13:48.257916 kernel: Run /init as init process Jul 7 00:13:48.257924 kernel: with arguments: Jul 7 00:13:48.257936 kernel: /init Jul 7 00:13:48.257944 kernel: with environment: Jul 7 00:13:48.257951 kernel: HOME=/ Jul 7 00:13:48.257958 kernel: TERM=linux Jul 7 00:13:48.257966 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 00:13:48.257975 systemd[1]: Successfully made /usr/ read-only. Jul 7 00:13:48.257985 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:13:48.257995 systemd[1]: Detected architecture arm64. Jul 7 00:13:48.258003 systemd[1]: Running in initrd. Jul 7 00:13:48.258011 systemd[1]: No hostname configured, using default hostname. Jul 7 00:13:48.258019 systemd[1]: Hostname set to . Jul 7 00:13:48.258027 systemd[1]: Initializing machine ID from random generator. Jul 7 00:13:48.258035 systemd[1]: Queued start job for default target initrd.target. Jul 7 00:13:48.258043 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:13:48.258051 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:13:48.258060 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 00:13:48.258070 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:13:48.258079 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Jul 7 00:13:48.258087 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 00:13:48.258097 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 00:13:48.258105 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 00:13:48.258113 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:13:48.258122 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:13:48.258131 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:13:48.258139 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:13:48.258147 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:13:48.258155 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:13:48.258163 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:13:48.258171 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:13:48.258179 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 00:13:48.258187 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 7 00:13:48.258197 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:13:48.258205 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:13:48.258213 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:13:48.258221 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:13:48.258229 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 00:13:48.258237 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:13:48.258245 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 00:13:48.258254 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 7 00:13:48.258263 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 00:13:48.258271 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:13:48.258279 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:13:48.258287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:13:48.258314 systemd-journald[909]: Collecting audit messages is disabled. Jul 7 00:13:48.258334 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 00:13:48.258343 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 00:13:48.258351 kernel: Bridge firewalling registered Jul 7 00:13:48.258360 systemd-journald[909]: Journal started Jul 7 00:13:48.258379 systemd-journald[909]: Runtime Journal (/run/log/journal/aa2b7b2917a34aaea158d8bad8b7a16b) is 8M, max 4G, 3.9G free. Jul 7 00:13:48.194978 systemd-modules-load[911]: Inserted module 'overlay' Jul 7 00:13:48.288627 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:13:48.252638 systemd-modules-load[911]: Inserted module 'br_netfilter' Jul 7 00:13:48.294327 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:13:48.305270 systemd[1]: Finished systemd-fsck-usr.service. 
Jul 7 00:13:48.317963 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:13:48.327168 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:13:48.341963 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 00:13:48.350215 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:13:48.376530 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 00:13:48.383450 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:13:48.396005 systemd-tmpfiles[941]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 7 00:13:48.412714 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:13:48.429171 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:13:48.445917 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:13:48.458962 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:13:48.477582 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 00:13:48.512091 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:13:48.525736 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:13:48.540360 dracut-cmdline[956]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=dd2d39de40482a23e9bb75390ff5ca85cd9bd34d902b8049121a8373f8cb2ef2 Jul 7 00:13:48.548996 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:13:48.550937 systemd-resolved[959]: Positive Trust Anchors: Jul 7 00:13:48.550946 systemd-resolved[959]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:13:48.550978 systemd-resolved[959]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:13:48.566632 systemd-resolved[959]: Defaulting to hostname 'linux'. Jul 7 00:13:48.588298 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:13:48.608999 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:13:48.719949 kernel: SCSI subsystem initialized Jul 7 00:13:48.735943 kernel: Loading iSCSI transport class v2.0-870. 
Jul 7 00:13:48.754943 kernel: iscsi: registered transport (tcp) Jul 7 00:13:48.783563 kernel: iscsi: registered transport (qla4xxx) Jul 7 00:13:48.783584 kernel: QLogic iSCSI HBA Driver Jul 7 00:13:48.802384 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:13:48.834979 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:13:48.843454 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:13:48.901868 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 00:13:48.914178 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 00:13:48.997947 kernel: raid6: neonx8 gen() 15849 MB/s Jul 7 00:13:49.023940 kernel: raid6: neonx4 gen() 15896 MB/s Jul 7 00:13:49.049944 kernel: raid6: neonx2 gen() 13259 MB/s Jul 7 00:13:49.075943 kernel: raid6: neonx1 gen() 10470 MB/s Jul 7 00:13:49.100944 kernel: raid6: int64x8 gen() 6934 MB/s Jul 7 00:13:49.125944 kernel: raid6: int64x4 gen() 7400 MB/s Jul 7 00:13:49.151940 kernel: raid6: int64x2 gen() 6130 MB/s Jul 7 00:13:49.180414 kernel: raid6: int64x1 gen() 5077 MB/s Jul 7 00:13:49.180439 kernel: raid6: using algorithm neonx4 gen() 15896 MB/s Jul 7 00:13:49.215650 kernel: raid6: .... xor() 12377 MB/s, rmw enabled Jul 7 00:13:49.215671 kernel: raid6: using neon recovery algorithm Jul 7 00:13:49.240534 kernel: xor: measuring software checksum speed Jul 7 00:13:49.240555 kernel: 8regs : 21618 MB/sec Jul 7 00:13:49.257681 kernel: 32regs : 21399 MB/sec Jul 7 00:13:49.257703 kernel: arm64_neon : 28264 MB/sec Jul 7 00:13:49.265734 kernel: xor: using function: arm64_neon (28264 MB/sec) Jul 7 00:13:49.331943 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 00:13:49.338968 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:13:49.346871 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:13:49.387773 systemd-udevd[1177]: Using default interface naming scheme 'v255'. Jul 7 00:13:49.391762 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:13:49.398001 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 00:13:49.440624 dracut-pre-trigger[1188]: rd.md=0: removing MD RAID activation Jul 7 00:13:49.462994 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:13:49.472480 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:13:49.769956 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:13:49.889369 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 7 00:13:49.889399 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 7 00:13:49.889417 kernel: nvme 0005:03:00.0: Adding to iommu group 31 Jul 7 00:13:49.889611 kernel: ACPI: bus type USB registered Jul 7 00:13:49.889621 kernel: usbcore: registered new interface driver usbfs Jul 7 00:13:49.889631 kernel: usbcore: registered new interface driver hub Jul 7 00:13:49.889640 kernel: usbcore: registered new device driver usb Jul 7 00:13:49.889649 kernel: nvme 0005:04:00.0: Adding to iommu group 32 Jul 7 00:13:49.889734 kernel: nvme nvme0: pci function 0005:03:00.0 Jul 7 00:13:49.889821 kernel: nvme nvme1: pci function 0005:04:00.0 Jul 7 00:13:49.889896 kernel: PTP clock support registered Jul 7 00:13:49.889906 kernel: nvme nvme0: D3 entry latency set to 8 seconds Jul 7 00:13:49.889979 kernel: nvme nvme1: D3 entry latency set to 8 seconds Jul 7 00:13:49.907908 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 00:13:49.913940 kernel: nvme nvme1: 32/0/0 default/read/poll queues Jul 7 00:13:49.917946 kernel: nvme nvme0: 32/0/0 default/read/poll queues Jul 7 00:13:49.955649 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 00:13:49.955671 kernel: GPT:9289727 != 1875385007 Jul 7 00:13:49.955689 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 00:13:49.955706 kernel: GPT:9289727 != 1875385007 Jul 7 00:13:49.956188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:13:50.003498 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 00:13:50.003511 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 00:13:49.956327 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:13:50.092829 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 33 Jul 7 00:13:50.092958 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Jul 7 00:13:50.093039 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 Jul 7 00:13:50.093114 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault Jul 7 00:13:50.093185 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jul 7 00:13:50.093195 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jul 7 00:13:50.093205 kernel: igb 0003:03:00.0: Adding to iommu group 34 Jul 7 00:13:50.087312 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:13:50.113827 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 35 Jul 7 00:13:50.109454 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:13:50.119338 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:13:50.136027 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 00:13:50.156587 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:13:50.181041 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. Jul 7 00:13:50.210159 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. Jul 7 00:13:50.222071 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Jul 7 00:13:50.233380 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. 
Jul 7 00:13:50.382195 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000000100000010 Jul 7 00:13:50.382394 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Jul 7 00:13:50.382472 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 Jul 7 00:13:50.382546 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed Jul 7 00:13:50.382618 kernel: hub 1-0:1.0: USB hub found Jul 7 00:13:50.382706 kernel: mlx5_core 0001:01:00.0: PTM is not supported by PCIe Jul 7 00:13:50.382784 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 Jul 7 00:13:50.382854 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 7 00:13:50.382924 kernel: hub 1-0:1.0: 4 ports detected Jul 7 00:13:50.258400 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Jul 7 00:13:50.496598 kernel: igb 0003:03:00.0: added PHC on eth0 Jul 7 00:13:50.496715 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 7 00:13:50.496790 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0d:9c:c2 Jul 7 00:13:50.496860 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 Jul 7 00:13:50.496930 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Jul 7 00:13:50.497009 kernel: igb 0003:03:00.1: Adding to iommu group 36 Jul 7 00:13:50.497089 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 7 00:13:50.497215 kernel: hub 2-0:1.0: USB hub found Jul 7 00:13:50.497302 kernel: hub 2-0:1.0: 4 ports detected Jul 7 00:13:50.387738 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:13:50.501658 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:13:50.516517 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:13:50.606375 kernel: igb 0003:03:00.1: added PHC on eth1 Jul 7 00:13:50.606502 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection Jul 7 00:13:50.606580 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0d:9c:c3 Jul 7 00:13:50.606651 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 Jul 7 00:13:50.606723 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Jul 7 00:13:50.606792 kernel: igb 0003:03:00.0 eno1: renamed from eth0 Jul 7 00:13:50.606865 kernel: igb 0003:03:00.1 eno2: renamed from eth1 Jul 7 00:13:50.527429 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 00:13:50.660331 kernel: mlx5_core 0001:01:00.0: E-Switch: Total vports 2, per vport: max uc(128) max mc(2048) Jul 7 00:13:50.660452 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged Jul 7 00:13:50.660532 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 00:13:50.615487 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 00:13:50.670905 disk-uuid[1343]: Primary Header is updated. Jul 7 00:13:50.670905 disk-uuid[1343]: Secondary Entries is updated. Jul 7 00:13:50.670905 disk-uuid[1343]: Secondary Header is updated. Jul 7 00:13:50.676446 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jul 7 00:13:50.710938 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd Jul 7 00:13:50.857475 kernel: hub 1-3:1.0: USB hub found Jul 7 00:13:50.857716 kernel: hub 1-3:1.0: 4 ports detected Jul 7 00:13:50.962946 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd Jul 7 00:13:50.962994 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 7 00:13:50.988945 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 Jul 7 00:13:50.989126 kernel: hub 2-3:1.0: USB hub found Jul 7 00:13:50.998066 kernel: mlx5_core 0001:01:00.1: PTM is not supported by PCIe Jul 7 00:13:50.998186 kernel: hub 2-3:1.0: 4 ports detected Jul 7 00:13:50.998260 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014 Jul 7 00:13:51.036349 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 7 00:13:51.378942 kernel: mlx5_core 0001:01:00.1: E-Switch: Total vports 2, per vport: max uc(128) max mc(2048) Jul 7 00:13:51.396230 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged Jul 7 00:13:51.641859 disk-uuid[1345]: The operation has completed successfully. Jul 7 00:13:51.646977 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 00:13:51.731948 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 7 00:13:51.746945 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 Jul 7 00:13:51.747036 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 Jul 7 00:13:51.789221 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 00:13:51.790394 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 00:13:51.800227 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 00:13:51.825473 sh[1534]: Success Jul 7 00:13:51.864270 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 00:13:51.864303 kernel: device-mapper: uevent: version 1.0.3 Jul 7 00:13:51.882942 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 00:13:51.900946 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 7 00:13:51.932195 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 00:13:51.943705 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 00:13:51.960930 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 00:13:51.967939 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 00:13:51.967963 kernel: BTRFS: device fsid aa7ffdf7-f152-4ceb-bd0e-b3b3f8f8b296 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (1550) Jul 7 00:13:51.969939 kernel: BTRFS info (device dm-0): first mount of filesystem aa7ffdf7-f152-4ceb-bd0e-b3b3f8f8b296 Jul 7 00:13:51.969950 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 7 00:13:51.969959 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 00:13:52.056999 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 00:13:52.063209 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 00:13:52.073760 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
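verity-setup above brings up /dev/mapper/usr, whose blocks are verified against a SHA-256 hash tree ("sha256-ce" is the ARMv8 Crypto Extensions implementation of that hash). As a loose sketch of the leaf level of such a tree — assuming dm-verity format version 1 conventions (salt prepended to each 4 KiB data block) and ignoring the superblock and upper tree levels entirely; verity_leaf_hashes is a made-up name for illustration — the per-block digests could be computed like this:

```python
import hashlib

BLOCK = 4096  # data block size assumed here, matching the common dm-verity default

def verity_leaf_hashes(image_path, salt=b""):
    # Illustrative only: one salted SHA-256 digest per 4 KiB data block,
    # i.e. the bottom level of a dm-verity style hash tree.
    digests = []
    with open(image_path, "rb") as f:
        while True:
            block = f.read(BLOCK)
            if not block:
                break
            block = block.ljust(BLOCK, b"\0")   # pad a short final block for this sketch
            digests.append(hashlib.sha256(salt + block).digest())
    return digests
```

The real device-mapper target recomputes digests like these on every read and refuses to return data that does not verify against the tree rooted in the root hash handed to verity-setup.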
Jul 7 00:13:52.074839 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 00:13:52.095415 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 00:13:52.210664 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:6) scanned by mount (1574) Jul 7 00:13:52.210680 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 7 00:13:52.210690 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 7 00:13:52.210699 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 00:13:52.210708 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 7 00:13:52.206286 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 00:13:52.216990 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:13:52.229877 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 00:13:52.259228 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:13:52.292510 systemd-networkd[1722]: lo: Link UP Jul 7 00:13:52.292516 systemd-networkd[1722]: lo: Gained carrier Jul 7 00:13:52.295876 systemd-networkd[1722]: Enumeration completed Jul 7 00:13:52.296148 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:13:52.297084 systemd-networkd[1722]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:13:52.304171 systemd[1]: Reached target network.target - Network. Jul 7 00:13:52.351518 systemd-networkd[1722]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:13:52.370180 ignition[1720]: Ignition 2.21.0 Jul 7 00:13:52.370188 ignition[1720]: Stage: fetch-offline Jul 7 00:13:52.370216 ignition[1720]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:13:52.377427 unknown[1720]: fetched base config from "system" Jul 7 00:13:52.370224 ignition[1720]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:13:52.377433 unknown[1720]: fetched user config from "system" Jul 7 00:13:52.370396 ignition[1720]: parsed url from cmdline: "" Jul 7 00:13:52.380998 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:13:52.370399 ignition[1720]: no config URL provided Jul 7 00:13:52.386882 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 7 00:13:52.370403 ignition[1720]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 00:13:52.388017 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 00:13:52.370456 ignition[1720]: parsing config with SHA512: 8b6d378804711a04f9c38a2598ae09397c36c0beebbbbcd56d026f5faa0de8c4e8aec33e664d69bdf31836ee86953fc06be4659d4323ec1b5de99047ef30970e Jul 7 00:13:52.403582 systemd-networkd[1722]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 7 00:13:52.377814 ignition[1720]: fetch-offline: fetch-offline passed Jul 7 00:13:52.377819 ignition[1720]: POST message to Packet Timeline Jul 7 00:13:52.377824 ignition[1720]: POST Status error: resource requires networking Jul 7 00:13:52.377901 ignition[1720]: Ignition finished successfully Jul 7 00:13:52.451685 ignition[1788]: Ignition 2.21.0 Jul 7 00:13:52.451691 ignition[1788]: Stage: kargs Jul 7 00:13:52.451905 ignition[1788]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:13:52.451913 ignition[1788]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:13:52.457181 ignition[1788]: kargs: kargs passed Jul 7 00:13:52.457188 ignition[1788]: POST message to Packet Timeline Jul 7 00:13:52.457489 ignition[1788]: GET https://metadata.packet.net/metadata: attempt #1 Jul 7 00:13:52.461901 ignition[1788]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60413->[::1]:53: read: connection refused Jul 7 00:13:52.662026 ignition[1788]: GET https://metadata.packet.net/metadata: attempt #2 Jul 7 00:13:52.662672 ignition[1788]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60276->[::1]:53: read: connection refused Jul 7 00:13:52.987950 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Jul 7 00:13:52.990798 systemd-networkd[1722]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:13:53.063108 ignition[1788]: GET https://metadata.packet.net/metadata: attempt #3 Jul 7 00:13:53.063534 ignition[1788]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40098->[::1]:53: read: connection refused Jul 7 00:13:53.595949 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Jul 7 00:13:53.599242 systemd-networkd[1722]: eno1: Link UP Jul 7 00:13:53.599367 systemd-networkd[1722]: eno2: Link UP Jul 7 00:13:53.599475 systemd-networkd[1722]: enP1p1s0f0np0: Link UP Jul 7 00:13:53.599610 systemd-networkd[1722]: enP1p1s0f0np0: Gained carrier Jul 7 00:13:53.611145 systemd-networkd[1722]: enP1p1s0f1np1: Link UP Jul 7 00:13:53.612317 systemd-networkd[1722]: enP1p1s0f1np1: Gained carrier Jul 7 00:13:53.645967 systemd-networkd[1722]: enP1p1s0f0np0: DHCPv4 address 147.28.143.210/30, gateway 147.28.143.209 acquired from 145.40.76.140 Jul 7 00:13:53.863740 ignition[1788]: GET https://metadata.packet.net/metadata: attempt #4 Jul 7 00:13:53.864227 ignition[1788]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42747->[::1]:53: read: connection refused Jul 7 00:13:54.777010 systemd-networkd[1722]: enP1p1s0f0np0: Gained IPv6LL Jul 7 00:13:55.464965 ignition[1788]: GET https://metadata.packet.net/metadata: attempt #5 Jul 7 00:13:55.465378 ignition[1788]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48938->[::1]:53: read: connection refused Jul 7 00:13:55.608995 systemd-networkd[1722]: enP1p1s0f1np1: Gained IPv6LL Jul 7 00:13:58.667701 ignition[1788]: GET https://metadata.packet.net/metadata: attempt #6 Jul 7 00:13:59.455778 ignition[1788]: GET result: OK Jul 7 00:13:59.894307 ignition[1788]: Ignition finished successfully Jul 7 00:13:59.899016 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 00:13:59.901413 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
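The six ignition[1788] GET attempts above fail with loopback DNS errors until the mlx5/igb links come up and DHCP completes, and the gaps between attempts roughly double (about 0.2 s, 0.4 s, 0.8 s, 1.6 s, 3.2 s). A minimal sketch of that retry-with-backoff pattern — illustrative only; fetch_with_backoff is not Ignition's code, which is a Go program — looks like:

```python
import time
import urllib.request

def fetch_with_backoff(url, attempts=6, first_delay=0.2):
    # Hypothetical helper mirroring the attempt spacing visible in the log:
    # retry an HTTP GET, doubling the wait after each failure.
    delay = first_delay
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except OSError as err:              # DNS lookup failures, connection refused, ...
            print(f"GET {url}: attempt #{attempt} failed: {err}")
            if attempt == attempts:
                raise
            time.sleep(delay)
            delay *= 2

# fetch_with_backoff("https://metadata.packet.net/metadata")
```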
Jul 7 00:13:59.941940 ignition[1823]: Ignition 2.21.0 Jul 7 00:13:59.941949 ignition[1823]: Stage: disks Jul 7 00:13:59.942086 ignition[1823]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:13:59.942095 ignition[1823]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:13:59.944923 ignition[1823]: disks: disks passed Jul 7 00:13:59.944929 ignition[1823]: POST message to Packet Timeline Jul 7 00:13:59.944955 ignition[1823]: GET https://metadata.packet.net/metadata: attempt #1 Jul 7 00:14:00.796382 ignition[1823]: GET result: OK Jul 7 00:14:01.132007 ignition[1823]: Ignition finished successfully Jul 7 00:14:01.134818 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 00:14:01.142005 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 00:14:01.149678 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 00:14:01.157880 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:14:01.166672 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:14:01.175789 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:14:01.186307 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 00:14:01.220701 systemd-fsck[1841]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 7 00:14:01.223918 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 00:14:01.232571 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 00:14:01.320950 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a6b10247-fbe6-4a25-95d9-ddd4b58604ec r/w with ordered data mode. Quota mode: none. Jul 7 00:14:01.321472 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 00:14:01.331882 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 00:14:01.343046 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 00:14:01.370439 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 00:14:01.378939 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 (259:6) scanned by mount (1854) Jul 7 00:14:01.378962 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 7 00:14:01.378972 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 7 00:14:01.378981 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 00:14:01.447383 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 7 00:14:01.453896 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jul 7 00:14:01.470279 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 00:14:01.470317 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:14:01.483862 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 00:14:01.499078 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 00:14:01.513050 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 7 00:14:01.533294 coreos-metadata[1872]: Jul 07 00:14:01.519 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 00:14:01.544490 coreos-metadata[1873]: Jul 07 00:14:01.519 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 00:14:01.571090 initrd-setup-root[1892]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 00:14:01.577574 initrd-setup-root[1900]: cut: /sysroot/etc/group: No such file or directory Jul 7 00:14:01.584043 initrd-setup-root[1907]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 00:14:01.590721 initrd-setup-root[1914]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 00:14:01.658971 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 00:14:01.671148 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 00:14:01.698527 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 00:14:01.707939 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 7 00:14:01.731826 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 00:14:01.747055 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 00:14:01.754484 ignition[1986]: INFO : Ignition 2.21.0 Jul 7 00:14:01.754484 ignition[1986]: INFO : Stage: mount Jul 7 00:14:01.770267 ignition[1986]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:14:01.770267 ignition[1986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:14:01.770267 ignition[1986]: INFO : mount: mount passed Jul 7 00:14:01.770267 ignition[1986]: INFO : POST message to Packet Timeline Jul 7 00:14:01.770267 ignition[1986]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 7 00:14:02.298075 coreos-metadata[1872]: Jul 07 00:14:02.298 INFO Fetch successful Jul 7 00:14:02.339654 coreos-metadata[1872]: Jul 07 00:14:02.339 INFO wrote hostname ci-4344.1.1-a-1996c8fb49 to /sysroot/etc/hostname Jul 7 00:14:02.343020 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 00:14:02.363674 coreos-metadata[1873]: Jul 07 00:14:02.353 INFO Fetch successful Jul 7 00:14:02.398839 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jul 7 00:14:02.398956 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jul 7 00:14:02.915067 ignition[1986]: INFO : GET result: OK Jul 7 00:14:03.242909 ignition[1986]: INFO : Ignition finished successfully Jul 7 00:14:03.246543 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 00:14:03.258500 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 00:14:03.290961 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 00:14:03.333338 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 (259:6) scanned by mount (2013) Jul 7 00:14:03.333373 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 7 00:14:03.347956 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 7 00:14:03.361250 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 00:14:03.370158 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 00:14:03.412962 ignition[2030]: INFO : Ignition 2.21.0 Jul 7 00:14:03.412962 ignition[2030]: INFO : Stage: files Jul 7 00:14:03.423744 ignition[2030]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:14:03.423744 ignition[2030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:14:03.423744 ignition[2030]: DEBUG : files: compiled without relabeling support, skipping Jul 7 00:14:03.423744 ignition[2030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 00:14:03.423744 ignition[2030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 00:14:03.423744 ignition[2030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 00:14:03.423744 ignition[2030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 00:14:03.423744 ignition[2030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 00:14:03.423744 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 7 00:14:03.423744 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jul 7 00:14:03.419894 unknown[2030]: wrote ssh authorized keys file for user: core Jul 7 00:14:03.523237 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 00:14:03.681688 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 7 00:14:03.692799 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 00:14:03.692799 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 00:14:03.692799 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:14:03.692799 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:14:03.692799 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:14:03.692799 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:14:03.692799 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:14:03.692799 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:14:03.692799 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:14:03.692799 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:14:03.692799 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 7 00:14:03.692799 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 7 00:14:03.692799 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 7 00:14:03.692799 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 7 00:14:04.293183 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 00:14:05.358828 ignition[2030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 7 00:14:05.371472 ignition[2030]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 00:14:05.371472 ignition[2030]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:14:05.371472 ignition[2030]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:14:05.371472 ignition[2030]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 00:14:05.371472 ignition[2030]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 7 00:14:05.371472 ignition[2030]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 00:14:05.371472 ignition[2030]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:14:05.371472 ignition[2030]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:14:05.371472 ignition[2030]: INFO : files: files passed Jul 7 00:14:05.371472 ignition[2030]: INFO : POST message to Packet Timeline Jul 7 00:14:05.371472 ignition[2030]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 7 00:14:06.244705 ignition[2030]: INFO : GET result: OK Jul 7 00:14:06.560756 ignition[2030]: INFO : Ignition finished successfully Jul 7 00:14:06.564991 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 00:14:06.568969 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 00:14:06.597573 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 00:14:06.616393 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 00:14:06.616581 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 00:14:06.634339 initrd-setup-root-after-ignition[2076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:14:06.634339 initrd-setup-root-after-ignition[2076]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:14:06.629959 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:14:06.685784 initrd-setup-root-after-ignition[2080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:14:06.641702 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 00:14:06.658477 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 00:14:06.717475 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Jul 7 00:14:06.717653 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 00:14:06.729383 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 00:14:06.745719 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 00:14:06.756930 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 00:14:06.757857 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 00:14:06.790454 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:14:06.803149 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 00:14:06.835915 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:14:06.847746 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:14:06.859564 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 00:14:06.865465 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 00:14:06.865564 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:14:06.877180 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 00:14:06.888452 systemd[1]: Stopped target basic.target - Basic System. Jul 7 00:14:06.899906 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 00:14:06.911366 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:14:06.922627 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 00:14:06.933912 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 7 00:14:06.945205 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 00:14:06.956577 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:14:06.967835 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 00:14:06.979176 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 00:14:06.996080 systemd[1]: Stopped target swap.target - Swaps. Jul 7 00:14:07.007381 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 00:14:07.007497 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:14:07.018939 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:14:07.030143 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:14:07.041242 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 00:14:07.044963 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:14:07.052454 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 00:14:07.052545 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 00:14:07.063833 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 00:14:07.063920 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:14:07.075162 systemd[1]: Stopped target paths.target - Path Units. Jul 7 00:14:07.086462 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 00:14:07.086568 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:14:07.103629 systemd[1]: Stopped target slices.target - Slice Units. 
Jul 7 00:14:07.115172 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 00:14:07.126974 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 00:14:07.127058 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:14:07.138734 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 00:14:07.138825 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:14:07.245563 ignition[2102]: INFO : Ignition 2.21.0 Jul 7 00:14:07.245563 ignition[2102]: INFO : Stage: umount Jul 7 00:14:07.245563 ignition[2102]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:14:07.245563 ignition[2102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:14:07.245563 ignition[2102]: INFO : umount: umount passed Jul 7 00:14:07.245563 ignition[2102]: INFO : POST message to Packet Timeline Jul 7 00:14:07.245563 ignition[2102]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 7 00:14:07.150510 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 00:14:07.150598 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:14:07.162191 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 00:14:07.162274 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 00:14:07.174010 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 7 00:14:07.174092 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 00:14:07.192290 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 00:14:07.217487 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 00:14:07.226850 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 00:14:07.226955 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:14:07.239477 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 00:14:07.239562 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:14:07.253719 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 00:14:07.254521 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 00:14:07.255961 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 00:14:07.264350 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 00:14:07.264438 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 00:14:08.047594 ignition[2102]: INFO : GET result: OK Jul 7 00:14:08.347889 ignition[2102]: INFO : Ignition finished successfully Jul 7 00:14:08.350463 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 00:14:08.350714 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 00:14:08.362836 systemd[1]: Stopped target network.target - Network. Jul 7 00:14:08.372079 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 00:14:08.372153 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 00:14:08.381503 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 00:14:08.381561 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 00:14:08.390973 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 00:14:08.391030 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 00:14:08.400617 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Jul 7 00:14:08.400649 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 00:14:08.410297 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 00:14:08.410343 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 00:14:08.420238 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 00:14:08.429947 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 00:14:08.440130 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 00:14:08.440265 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 00:14:08.453896 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 7 00:14:08.454890 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 00:14:08.454963 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:14:08.467177 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:14:08.467450 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 00:14:08.467567 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 00:14:08.475957 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 00:14:08.476810 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 00:14:08.485473 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 00:14:08.485620 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:14:08.497543 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 00:14:08.506123 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 00:14:08.506169 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:14:08.516897 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:14:08.516932 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:14:08.533016 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 00:14:08.533079 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 00:14:08.543781 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:14:08.561653 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 00:14:08.566506 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 00:14:08.567964 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:14:08.577944 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 00:14:08.578002 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 00:14:08.588828 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 00:14:08.588873 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:14:08.600071 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 00:14:08.600106 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:14:08.617142 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 00:14:08.617181 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 00:14:08.628461 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jul 7 00:14:08.628507 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:14:08.640966 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 00:14:08.651802 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 00:14:08.651847 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:14:08.663719 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 00:14:08.663755 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:14:08.675822 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 00:14:08.675858 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:14:08.693758 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 00:14:08.693789 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:14:08.705563 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:14:08.705595 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:14:08.719389 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 7 00:14:08.719435 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 7 00:14:08.719463 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 7 00:14:08.719494 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:14:08.719802 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 00:14:08.719872 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 00:14:09.234711 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 00:14:09.234872 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 00:14:09.246331 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 00:14:09.257653 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 00:14:09.284014 systemd[1]: Switching root. Jul 7 00:14:09.350931 systemd-journald[909]: Journal stopped Jul 7 00:14:11.527011 systemd-journald[909]: Received SIGTERM from PID 1 (systemd). Jul 7 00:14:11.527038 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 00:14:11.527048 kernel: SELinux: policy capability open_perms=1 Jul 7 00:14:11.527055 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 00:14:11.527062 kernel: SELinux: policy capability always_check_network=0 Jul 7 00:14:11.527069 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 00:14:11.527077 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 00:14:11.527086 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 00:14:11.527093 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 00:14:11.527101 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 00:14:11.527108 kernel: audit: type=1403 audit(1751847249.555:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 00:14:11.527117 systemd[1]: Successfully loaded SELinux policy in 140.802ms. Jul 7 00:14:11.527126 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.707ms. 
Jul 7 00:14:11.527135 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:14:11.527146 systemd[1]: Detected architecture arm64. Jul 7 00:14:11.527154 systemd[1]: Detected first boot. Jul 7 00:14:11.527162 systemd[1]: Hostname set to . Jul 7 00:14:11.527171 systemd[1]: Initializing machine ID from random generator. Jul 7 00:14:11.527179 zram_generator::config[2159]: No configuration found. Jul 7 00:14:11.527189 systemd[1]: Populated /etc with preset unit settings. Jul 7 00:14:11.527198 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 00:14:11.527206 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 00:14:11.527215 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 00:14:11.527223 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 00:14:11.527232 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 00:14:11.527240 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 00:14:11.527250 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 00:14:11.527259 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 00:14:11.527268 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 00:14:11.527276 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 00:14:11.527285 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 00:14:11.527293 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 00:14:11.527302 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:14:11.527311 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:14:11.527321 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 00:14:11.527329 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 00:14:11.527338 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 00:14:11.527347 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:14:11.527355 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 7 00:14:11.527364 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:14:11.527376 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:14:11.527385 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 00:14:11.527395 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 00:14:11.527404 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 00:14:11.527413 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 00:14:11.527421 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 7 00:14:11.527430 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:14:11.527439 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:14:11.527448 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:14:11.527458 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 00:14:11.527467 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 00:14:11.527476 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 00:14:11.527486 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:14:11.527495 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:14:11.527505 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:14:11.527514 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 00:14:11.527523 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 00:14:11.527532 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 00:14:11.527541 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 00:14:11.527550 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 00:14:11.527558 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 00:14:11.527567 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 00:14:11.527578 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 00:14:11.527587 systemd[1]: Reached target machines.target - Containers. Jul 7 00:14:11.527596 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 00:14:11.527605 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:14:11.527614 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:14:11.527622 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 00:14:11.527631 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:14:11.527640 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:14:11.527649 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:14:11.527659 kernel: ACPI: bus type drm_connector registered Jul 7 00:14:11.527667 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 00:14:11.527676 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:14:11.527684 kernel: fuse: init (API version 7.41) Jul 7 00:14:11.527692 kernel: loop: module loaded Jul 7 00:14:11.527700 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 00:14:11.527709 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 00:14:11.527718 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 00:14:11.527728 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 00:14:11.527737 systemd[1]: Stopped systemd-fsck-usr.service. 
Jul 7 00:14:11.527746 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:14:11.527755 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:14:11.527764 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:14:11.527773 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:14:11.527782 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 00:14:11.527808 systemd-journald[2268]: Collecting audit messages is disabled. Jul 7 00:14:11.527829 systemd-journald[2268]: Journal started Jul 7 00:14:11.527847 systemd-journald[2268]: Runtime Journal (/run/log/journal/0e4c12f4007249f4a2b1a305111a21e2) is 8M, max 4G, 3.9G free. Jul 7 00:14:10.104838 systemd[1]: Queued start job for default target multi-user.target. Jul 7 00:14:10.126620 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 7 00:14:10.126980 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 00:14:10.127293 systemd[1]: systemd-journald.service: Consumed 3.594s CPU time. Jul 7 00:14:11.550951 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 00:14:11.572948 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:14:11.595743 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 00:14:11.595766 systemd[1]: Stopped verity-setup.service. Jul 7 00:14:11.620948 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:14:11.627088 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 00:14:11.632673 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 00:14:11.638228 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 00:14:11.643711 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 00:14:11.649247 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 00:14:11.654687 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 00:14:11.660250 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 00:14:11.665879 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:14:11.671411 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 00:14:11.671594 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 00:14:11.677042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:14:11.678048 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:14:11.683744 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:14:11.683942 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:14:11.689239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:14:11.689423 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:14:11.694848 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 00:14:11.695028 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 00:14:11.700414 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 7 00:14:11.700587 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:14:11.706955 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:14:11.712196 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:14:11.717576 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 00:14:11.723958 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 00:14:11.738369 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:14:11.744737 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 00:14:11.763752 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 00:14:11.768816 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 00:14:11.768845 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:14:11.774388 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 7 00:14:11.780136 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 00:14:11.785039 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:14:11.786367 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 00:14:11.792169 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 00:14:11.797094 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:14:11.798153 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 00:14:11.803034 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:14:11.803354 systemd-journald[2268]: Time spent on flushing to /var/log/journal/0e4c12f4007249f4a2b1a305111a21e2 is 24.807ms for 2481 entries. Jul 7 00:14:11.803354 systemd-journald[2268]: System Journal (/var/log/journal/0e4c12f4007249f4a2b1a305111a21e2) is 8M, max 195.6M, 187.6M free. Jul 7 00:14:11.846101 systemd-journald[2268]: Received client request to flush runtime journal. Jul 7 00:14:11.846142 kernel: loop0: detected capacity change from 0 to 107312 Jul 7 00:14:11.804186 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:14:11.821686 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 00:14:11.827485 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 00:14:11.833631 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 00:14:11.849686 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 00:14:11.849947 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 00:14:11.852837 systemd-tmpfiles[2310]: ACLs are not supported, ignoring. Jul 7 00:14:11.852849 systemd-tmpfiles[2310]: ACLs are not supported, ignoring. Jul 7 00:14:11.864784 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 7 00:14:11.871177 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 00:14:11.875961 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 00:14:11.880791 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:14:11.885669 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:14:11.894124 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 00:14:11.900037 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 7 00:14:11.918943 kernel: loop1: detected capacity change from 0 to 8 Jul 7 00:14:11.921780 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 00:14:11.929611 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 00:14:11.930196 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 00:14:11.945976 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 00:14:11.952474 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:14:11.971941 kernel: loop2: detected capacity change from 0 to 211168 Jul 7 00:14:11.984526 systemd-tmpfiles[2342]: ACLs are not supported, ignoring. Jul 7 00:14:11.984539 systemd-tmpfiles[2342]: ACLs are not supported, ignoring. Jul 7 00:14:11.988002 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:14:12.035951 kernel: loop3: detected capacity change from 0 to 138376 Jul 7 00:14:12.037163 ldconfig[2299]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 00:14:12.038348 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 00:14:12.101950 kernel: loop4: detected capacity change from 0 to 107312 Jul 7 00:14:12.117948 kernel: loop5: detected capacity change from 0 to 8 Jul 7 00:14:12.129948 kernel: loop6: detected capacity change from 0 to 211168 Jul 7 00:14:12.147948 kernel: loop7: detected capacity change from 0 to 138376 Jul 7 00:14:12.161592 (sd-merge)[2362]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jul 7 00:14:12.162019 (sd-merge)[2362]: Merged extensions into '/usr'. Jul 7 00:14:12.165219 systemd[1]: Reload requested from client PID 2307 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 00:14:12.165231 systemd[1]: Reloading... Jul 7 00:14:12.209949 zram_generator::config[2390]: No configuration found. Jul 7 00:14:12.286983 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:14:12.361244 systemd[1]: Reloading finished in 195 ms. Jul 7 00:14:12.392349 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 00:14:12.397525 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 00:14:12.425245 systemd[1]: Starting ensure-sysext.service... Jul 7 00:14:12.431368 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:14:12.438226 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 7 00:14:12.449321 systemd[1]: Reload requested from client PID 2443 ('systemctl') (unit ensure-sysext.service)... Jul 7 00:14:12.449332 systemd[1]: Reloading... Jul 7 00:14:12.449983 systemd-tmpfiles[2444]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 00:14:12.450009 systemd-tmpfiles[2444]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 00:14:12.450231 systemd-tmpfiles[2444]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 00:14:12.450413 systemd-tmpfiles[2444]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 00:14:12.450985 systemd-tmpfiles[2444]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 00:14:12.451177 systemd-tmpfiles[2444]: ACLs are not supported, ignoring. Jul 7 00:14:12.451219 systemd-tmpfiles[2444]: ACLs are not supported, ignoring. Jul 7 00:14:12.454105 systemd-tmpfiles[2444]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:14:12.454113 systemd-tmpfiles[2444]: Skipping /boot Jul 7 00:14:12.462575 systemd-tmpfiles[2444]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:14:12.462583 systemd-tmpfiles[2444]: Skipping /boot Jul 7 00:14:12.466441 systemd-udevd[2445]: Using default interface naming scheme 'v255'. Jul 7 00:14:12.488942 zram_generator::config[2473]: No configuration found. Jul 7 00:14:12.533948 kernel: IPMI message handler: version 39.2 Jul 7 00:14:12.543940 kernel: ipmi device interface Jul 7 00:14:12.555941 kernel: ipmi_ssif: IPMI SSIF Interface driver Jul 7 00:14:12.561945 kernel: MACsec IEEE 802.1AE Jul 7 00:14:12.561958 kernel: ipmi_si: IPMI System Interface driver Jul 7 00:14:12.578024 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:14:12.578693 kernel: ipmi_si: Unable to find any System Interface(s) Jul 7 00:14:12.669301 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Jul 7 00:14:12.674280 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 7 00:14:12.674482 systemd[1]: Reloading finished in 224 ms. Jul 7 00:14:12.698181 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:14:12.720444 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:14:12.742393 systemd[1]: Finished ensure-sysext.service. Jul 7 00:14:12.763785 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:14:12.786690 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 00:14:12.791900 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:14:12.792778 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:14:12.798749 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:14:12.804661 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:14:12.810605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 7 00:14:12.815517 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:14:12.816352 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 00:14:12.821188 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:14:12.822257 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 00:14:12.827009 augenrules[2692]: No rules Jul 7 00:14:12.828915 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:14:12.835601 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:14:12.841872 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 00:14:12.847335 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 00:14:12.852885 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:14:12.858146 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:14:12.858372 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:14:12.863736 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 00:14:12.869485 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:14:12.869665 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:14:12.874162 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:14:12.874323 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:14:12.878764 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:14:12.879502 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:14:12.884163 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:14:12.884332 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:14:12.889033 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 00:14:12.893827 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 00:14:12.900907 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:14:12.912101 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 00:14:12.917006 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:14:12.917106 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:14:12.918374 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 00:14:12.946247 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 00:14:12.950907 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:14:12.953194 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jul 7 00:14:12.983748 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 00:14:13.051273 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 00:14:13.056033 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 00:14:13.056457 systemd-resolved[2699]: Positive Trust Anchors: Jul 7 00:14:13.056470 systemd-resolved[2699]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:14:13.056501 systemd-resolved[2699]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:14:13.060478 systemd-resolved[2699]: Using system hostname 'ci-4344.1.1-a-1996c8fb49'. Jul 7 00:14:13.061773 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:14:13.065572 systemd-networkd[2698]: lo: Link UP Jul 7 00:14:13.065577 systemd-networkd[2698]: lo: Gained carrier Jul 7 00:14:13.067222 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:14:13.068874 systemd-networkd[2698]: bond0: netdev ready Jul 7 00:14:13.071587 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:14:13.076002 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 00:14:13.077866 systemd-networkd[2698]: Enumeration completed Jul 7 00:14:13.080365 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 00:14:13.084861 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 00:14:13.089307 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 00:14:13.090176 systemd-networkd[2698]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:49:b4:d4.network. Jul 7 00:14:13.093625 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 00:14:13.097940 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 00:14:13.097960 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:14:13.102224 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:14:13.107165 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 00:14:13.112828 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 00:14:13.119002 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 00:14:13.129771 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 00:14:13.134718 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 00:14:13.139565 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:14:13.144126 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 00:14:13.148668 systemd[1]: Reached target network.target - Network. 
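The bond is assembled from the /etc/systemd/network units named in the entries above (05-bond0.network and the MAC-keyed 10-0c:42:a1:49:b4:d4.network). Their contents are not reproduced in this log; the following is a minimal sketch of what such systemd-networkd units typically contain, assuming 802.3ad bonding (inferred from the kernel's "No 802.3ad response" warnings further down) and DHCP on the bond:

    # bond0.netdev (sketch, contents assumed)
    [NetDev]
    Name=bond0
    Kind=bond
    [Bond]
    Mode=802.3ad

    # 10-0c:42:a1:49:b4:d4.network (sketch, contents assumed)
    [Match]
    MACAddress=0c:42:a1:49:b4:d4
    [Network]
    Bond=bond0

    # 05-bond0.network (sketch, contents assumed)
    [Match]
    Name=bond0
    [Network]
    DHCP=yes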
Jul 7 00:14:13.153089 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:14:13.157398 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:14:13.161722 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:14:13.161742 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:14:13.162739 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 00:14:13.191713 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 00:14:13.197236 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 00:14:13.202780 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 00:14:13.208283 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 00:14:13.213770 coreos-metadata[2741]: Jul 07 00:14:13.213 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 00:14:13.213786 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 00:14:13.217054 coreos-metadata[2741]: Jul 07 00:14:13.217 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 7 00:14:13.218262 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 00:14:13.218659 jq[2746]: false Jul 7 00:14:13.219302 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 00:14:13.224829 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 00:14:13.229825 extend-filesystems[2747]: Found /dev/nvme0n1p6 Jul 7 00:14:13.234737 extend-filesystems[2747]: Found /dev/nvme0n1p9 Jul 7 00:14:13.230394 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 00:14:13.244290 extend-filesystems[2747]: Checking size of /dev/nvme0n1p9 Jul 7 00:14:13.240665 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 00:14:13.253553 extend-filesystems[2747]: Resized partition /dev/nvme0n1p9 Jul 7 00:14:13.275542 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks Jul 7 00:14:13.252930 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 00:14:13.275729 extend-filesystems[2772]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 00:14:13.271837 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 00:14:13.281385 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 00:14:13.290226 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 00:14:13.290741 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 00:14:13.291283 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 00:14:13.297174 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 00:14:13.303533 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 00:14:13.304824 jq[2786]: true Jul 7 00:14:13.308889 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jul 7 00:14:13.309106 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 00:14:13.309353 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 00:14:13.309539 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 00:14:13.314747 systemd-logind[2773]: Watching system buttons on /dev/input/event0 (Power Button) Jul 7 00:14:13.314955 systemd-logind[2773]: New seat seat0. Jul 7 00:14:13.315259 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 00:14:13.315462 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 00:14:13.320953 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 00:14:13.330503 update_engine[2785]: I20250707 00:14:13.330375 2785 main.cc:92] Flatcar Update Engine starting Jul 7 00:14:13.330995 (ntainerd)[2793]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:14:13.333015 jq[2792]: true Jul 7 00:14:13.339724 tar[2791]: linux-arm64/LICENSE Jul 7 00:14:13.339903 tar[2791]: linux-arm64/helm Jul 7 00:14:13.346868 dbus-daemon[2742]: [system] SELinux support is enabled Jul 7 00:14:13.348278 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 00:14:13.350040 update_engine[2785]: I20250707 00:14:13.350010 2785 update_check_scheduler.cc:74] Next update check in 3m49s Jul 7 00:14:13.357229 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 00:14:13.357253 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 00:14:13.357777 dbus-daemon[2742]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 00:14:13.362117 bash[2819]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:14:13.362242 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 00:14:13.362256 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 00:14:13.367317 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 00:14:13.373054 systemd[1]: Started update-engine.service - Update Engine. Jul 7 00:14:13.380129 systemd[1]: Starting sshkeys.service... Jul 7 00:14:13.399337 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 00:14:13.408889 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 00:14:13.414786 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jul 7 00:14:13.430924 locksmithd[2822]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:14:13.434445 coreos-metadata[2830]: Jul 07 00:14:13.434 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 00:14:13.435558 coreos-metadata[2830]: Jul 07 00:14:13.435 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 7 00:14:13.485967 containerd[2793]: time="2025-07-07T00:14:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 00:14:13.486548 containerd[2793]: time="2025-07-07T00:14:13.486517720Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 00:14:13.494506 containerd[2793]: time="2025-07-07T00:14:13.494476400Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.92µs" Jul 7 00:14:13.494524 containerd[2793]: time="2025-07-07T00:14:13.494507360Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 00:14:13.494540 containerd[2793]: time="2025-07-07T00:14:13.494524320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 00:14:13.494689 containerd[2793]: time="2025-07-07T00:14:13.494677480Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 00:14:13.494708 containerd[2793]: time="2025-07-07T00:14:13.494693920Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 00:14:13.494730 containerd[2793]: time="2025-07-07T00:14:13.494717120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:14:13.494776 containerd[2793]: time="2025-07-07T00:14:13.494763680Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:14:13.494795 containerd[2793]: time="2025-07-07T00:14:13.494776120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:14:13.494992 containerd[2793]: time="2025-07-07T00:14:13.494977280Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:14:13.495011 containerd[2793]: time="2025-07-07T00:14:13.494991800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:14:13.495011 containerd[2793]: time="2025-07-07T00:14:13.495003600Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:14:13.495041 containerd[2793]: time="2025-07-07T00:14:13.495011520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 00:14:13.495099 containerd[2793]: time="2025-07-07T00:14:13.495088560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 00:14:13.495280 containerd[2793]: 
time="2025-07-07T00:14:13.495267000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:14:13.495306 containerd[2793]: time="2025-07-07T00:14:13.495296040Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:14:13.495324 containerd[2793]: time="2025-07-07T00:14:13.495306800Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 00:14:13.495829 containerd[2793]: time="2025-07-07T00:14:13.495810640Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 00:14:13.496032 containerd[2793]: time="2025-07-07T00:14:13.496020600Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 00:14:13.496114 containerd[2793]: time="2025-07-07T00:14:13.496103480Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:14:13.502866 containerd[2793]: time="2025-07-07T00:14:13.502845840Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 00:14:13.502901 containerd[2793]: time="2025-07-07T00:14:13.502888360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 00:14:13.502999 containerd[2793]: time="2025-07-07T00:14:13.502901200Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 00:14:13.502999 containerd[2793]: time="2025-07-07T00:14:13.502916840Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 00:14:13.502999 containerd[2793]: time="2025-07-07T00:14:13.502928320Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 00:14:13.502999 containerd[2793]: time="2025-07-07T00:14:13.502943760Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 00:14:13.502999 containerd[2793]: time="2025-07-07T00:14:13.502956040Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 00:14:13.502999 containerd[2793]: time="2025-07-07T00:14:13.502967240Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 00:14:13.502999 containerd[2793]: time="2025-07-07T00:14:13.502977560Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 00:14:13.502999 containerd[2793]: time="2025-07-07T00:14:13.502988640Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 00:14:13.502999 containerd[2793]: time="2025-07-07T00:14:13.502998520Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 00:14:13.503225 containerd[2793]: time="2025-07-07T00:14:13.503010520Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 00:14:13.503225 containerd[2793]: time="2025-07-07T00:14:13.503122320Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 00:14:13.503225 containerd[2793]: time="2025-07-07T00:14:13.503140320Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 00:14:13.503225 containerd[2793]: time="2025-07-07T00:14:13.503157920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 00:14:13.503225 containerd[2793]: time="2025-07-07T00:14:13.503168000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 00:14:13.503225 containerd[2793]: time="2025-07-07T00:14:13.503177640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 00:14:13.503225 containerd[2793]: time="2025-07-07T00:14:13.503204000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 00:14:13.503225 containerd[2793]: time="2025-07-07T00:14:13.503215560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 00:14:13.503225 containerd[2793]: time="2025-07-07T00:14:13.503225200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 00:14:13.503365 containerd[2793]: time="2025-07-07T00:14:13.503239360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 00:14:13.503365 containerd[2793]: time="2025-07-07T00:14:13.503249640Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 00:14:13.503365 containerd[2793]: time="2025-07-07T00:14:13.503260040Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 00:14:13.503449 containerd[2793]: time="2025-07-07T00:14:13.503437360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 00:14:13.503468 containerd[2793]: time="2025-07-07T00:14:13.503452560Z" level=info msg="Start snapshots syncer" Jul 7 00:14:13.503489 containerd[2793]: time="2025-07-07T00:14:13.503474320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 00:14:13.503698 containerd[2793]: time="2025-07-07T00:14:13.503668880Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 00:14:13.503780 containerd[2793]: time="2025-07-07T00:14:13.503713400Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 00:14:13.503799 containerd[2793]: time="2025-07-07T00:14:13.503777640Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 00:14:13.503894 containerd[2793]: time="2025-07-07T00:14:13.503882160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 00:14:13.503920 containerd[2793]: time="2025-07-07T00:14:13.503911600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 00:14:13.503945 containerd[2793]: time="2025-07-07T00:14:13.503923720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 00:14:13.503962 containerd[2793]: time="2025-07-07T00:14:13.503942720Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 00:14:13.503962 containerd[2793]: time="2025-07-07T00:14:13.503954360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 00:14:13.504000 containerd[2793]: time="2025-07-07T00:14:13.503964280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 00:14:13.504000 containerd[2793]: time="2025-07-07T00:14:13.503974520Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 00:14:13.504000 containerd[2793]: time="2025-07-07T00:14:13.503997640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 00:14:13.504048 containerd[2793]: 
time="2025-07-07T00:14:13.504008720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 00:14:13.504048 containerd[2793]: time="2025-07-07T00:14:13.504018960Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 00:14:13.504080 containerd[2793]: time="2025-07-07T00:14:13.504054600Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:14:13.504080 containerd[2793]: time="2025-07-07T00:14:13.504067480Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:14:13.504080 containerd[2793]: time="2025-07-07T00:14:13.504075480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:14:13.504126 containerd[2793]: time="2025-07-07T00:14:13.504084280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:14:13.504126 containerd[2793]: time="2025-07-07T00:14:13.504091920Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 00:14:13.504126 containerd[2793]: time="2025-07-07T00:14:13.504101240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 00:14:13.504126 containerd[2793]: time="2025-07-07T00:14:13.504111520Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 00:14:13.504193 containerd[2793]: time="2025-07-07T00:14:13.504187320Z" level=info msg="runtime interface created" Jul 7 00:14:13.504211 containerd[2793]: time="2025-07-07T00:14:13.504192480Z" level=info msg="created NRI interface" Jul 7 00:14:13.504211 containerd[2793]: time="2025-07-07T00:14:13.504200480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 00:14:13.504241 containerd[2793]: time="2025-07-07T00:14:13.504212160Z" level=info msg="Connect containerd service" Jul 7 00:14:13.504241 containerd[2793]: time="2025-07-07T00:14:13.504237080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:14:13.504875 containerd[2793]: time="2025-07-07T00:14:13.504854200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:14:13.585508 containerd[2793]: time="2025-07-07T00:14:13.585460760Z" level=info msg="Start subscribing containerd event" Jul 7 00:14:13.585536 containerd[2793]: time="2025-07-07T00:14:13.585519760Z" level=info msg="Start recovering state" Jul 7 00:14:13.585610 containerd[2793]: time="2025-07-07T00:14:13.585597960Z" level=info msg="Start event monitor" Jul 7 00:14:13.585630 containerd[2793]: time="2025-07-07T00:14:13.585616720Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:14:13.585630 containerd[2793]: time="2025-07-07T00:14:13.585625080Z" level=info msg="Start streaming server" Jul 7 00:14:13.585671 containerd[2793]: time="2025-07-07T00:14:13.585633920Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 00:14:13.585671 containerd[2793]: time="2025-07-07T00:14:13.585640840Z" level=info 
msg="runtime interface starting up..." Jul 7 00:14:13.585671 containerd[2793]: time="2025-07-07T00:14:13.585646760Z" level=info msg="starting plugins..." Jul 7 00:14:13.585671 containerd[2793]: time="2025-07-07T00:14:13.585658960Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 00:14:13.585775 containerd[2793]: time="2025-07-07T00:14:13.585751760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 00:14:13.585830 containerd[2793]: time="2025-07-07T00:14:13.585821800Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 00:14:13.585884 containerd[2793]: time="2025-07-07T00:14:13.585876200Z" level=info msg="containerd successfully booted in 0.100302s" Jul 7 00:14:13.585938 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:14:13.675041 tar[2791]: linux-arm64/README.md Jul 7 00:14:13.702982 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 00:14:13.759511 sshd_keygen[2777]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:14:13.765941 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 Jul 7 00:14:13.777908 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 00:14:13.783226 extend-filesystems[2772]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 7 00:14:13.783226 extend-filesystems[2772]: old_desc_blocks = 1, new_desc_blocks = 112 Jul 7 00:14:13.783226 extend-filesystems[2772]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. Jul 7 00:14:13.814362 extend-filesystems[2747]: Resized filesystem in /dev/nvme0n1p9 Jul 7 00:14:13.786402 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 00:14:13.799406 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 00:14:13.814117 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 00:14:13.820042 systemd[1]: extend-filesystems.service: Consumed 205ms CPU time, 69.2M memory peak. Jul 7 00:14:13.825913 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:14:13.826112 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:14:13.834418 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:14:13.870576 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:14:13.877519 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:14:13.884213 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 7 00:14:13.889827 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 00:14:14.217200 coreos-metadata[2741]: Jul 07 00:14:14.217 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 7 00:14:14.217691 coreos-metadata[2741]: Jul 07 00:14:14.217 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 7 00:14:14.389944 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Jul 7 00:14:14.406949 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link Jul 7 00:14:14.408244 systemd-networkd[2698]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:49:b4:d5.network. 
Jul 7 00:14:14.435645 coreos-metadata[2830]: Jul 07 00:14:14.435 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 7 00:14:14.436042 coreos-metadata[2830]: Jul 07 00:14:14.436 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 7 00:14:15.024950 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Jul 7 00:14:15.041950 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link Jul 7 00:14:15.042413 systemd-networkd[2698]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jul 7 00:14:15.043663 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 00:14:15.044297 systemd-networkd[2698]: enP1p1s0f0np0: Link UP Jul 7 00:14:15.044525 systemd-networkd[2698]: enP1p1s0f0np0: Gained carrier Jul 7 00:14:15.044939 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 7 00:14:15.064344 systemd-networkd[2698]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:49:b4:d4.network. Jul 7 00:14:15.064611 systemd-networkd[2698]: enP1p1s0f1np1: Link UP Jul 7 00:14:15.064798 systemd-networkd[2698]: enP1p1s0f1np1: Gained carrier Jul 7 00:14:15.087111 systemd-networkd[2698]: bond0: Link UP Jul 7 00:14:15.087295 systemd-networkd[2698]: bond0: Gained carrier Jul 7 00:14:15.087445 systemd-timesyncd[2700]: Network configuration changed, trying to establish connection. Jul 7 00:14:15.088077 systemd-timesyncd[2700]: Network configuration changed, trying to establish connection. Jul 7 00:14:15.088313 systemd-timesyncd[2700]: Network configuration changed, trying to establish connection. Jul 7 00:14:15.088440 systemd-timesyncd[2700]: Network configuration changed, trying to establish connection. Jul 7 00:14:15.164949 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 7 00:14:15.183853 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex Jul 7 00:14:15.183879 kernel: bond0: active interface up! Jul 7 00:14:15.303945 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex Jul 7 00:14:16.217269 systemd-timesyncd[2700]: Network configuration changed, trying to establish connection. Jul 7 00:14:16.217786 coreos-metadata[2741]: Jul 07 00:14:16.217 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Jul 7 00:14:16.436145 coreos-metadata[2830]: Jul 07 00:14:16.436 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Jul 7 00:14:16.473997 systemd-networkd[2698]: bond0: Gained IPv6LL Jul 7 00:14:16.474333 systemd-timesyncd[2700]: Network configuration changed, trying to establish connection. Jul 7 00:14:16.476111 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:14:16.482144 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 00:14:16.489234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:14:16.516384 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:14:16.538213 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 00:14:17.136560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 00:14:17.142874 (kubelet)[2910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:14:17.508898 kubelet[2910]: E0707 00:14:17.508859 2910 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:14:17.511336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:14:17.511459 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:14:17.511751 systemd[1]: kubelet.service: Consumed 729ms CPU time, 265.4M memory peak. Jul 7 00:14:18.559671 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 Jul 7 00:14:18.559998 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity Jul 7 00:14:18.932699 login[2889]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 7 00:14:18.934259 login[2890]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:18.944003 systemd-logind[2773]: New session 2 of user core. Jul 7 00:14:18.945237 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:14:18.946471 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 00:14:18.967995 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 00:14:18.970801 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:14:18.975788 (systemd)[2942]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:14:18.977805 systemd-logind[2773]: New session c1 of user core. Jul 7 00:14:19.054612 coreos-metadata[2741]: Jul 07 00:14:19.054 INFO Fetch successful Jul 7 00:14:19.094411 systemd[2942]: Queued start job for default target default.target. Jul 7 00:14:19.103192 systemd[2942]: Created slice app.slice - User Application Slice. Jul 7 00:14:19.103217 systemd[2942]: Reached target paths.target - Paths. Jul 7 00:14:19.103249 systemd[2942]: Reached target timers.target - Timers. Jul 7 00:14:19.104409 systemd[2942]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 00:14:19.107384 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 00:14:19.108647 systemd[1]: Started sshd@0-147.28.143.210:22-147.75.109.163:51974.service - OpenSSH per-connection server daemon (147.75.109.163:51974). Jul 7 00:14:19.112619 systemd[2942]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 00:14:19.112669 systemd[2942]: Reached target sockets.target - Sockets. Jul 7 00:14:19.112707 systemd[2942]: Reached target basic.target - Basic System. Jul 7 00:14:19.112733 systemd[2942]: Reached target default.target - Main User Target. Jul 7 00:14:19.112755 systemd[2942]: Startup finished in 130ms. Jul 7 00:14:19.113286 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 00:14:19.114875 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:14:19.117950 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 00:14:19.120177 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... 
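The kubelet exit above is the expected first-boot failure: /var/lib/kubelet/config.yaml does not exist until a cluster bootstrap tool (typically kubeadm) writes it. For illustration only, a minimal KubeletConfiguration of the kind kubeadm generates might look like the sketch below; none of these values come from this log:

    # /var/lib/kubelet/config.yaml (hypothetical sketch, normally generated by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests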
Jul 7 00:14:19.220646 coreos-metadata[2830]: Jul 07 00:14:19.220 INFO Fetch successful Jul 7 00:14:19.267479 unknown[2830]: wrote ssh authorized keys file for user: core Jul 7 00:14:19.305925 update-ssh-keys[2974]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:14:19.307722 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 00:14:19.309289 systemd[1]: Finished sshkeys.service. Jul 7 00:14:19.402280 sshd[2953]: Accepted publickey for core from 147.75.109.163 port 51974 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 00:14:19.403619 sshd-session[2953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:19.406764 systemd-logind[2773]: New session 3 of user core. Jul 7 00:14:19.429095 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 00:14:19.483676 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jul 7 00:14:19.484097 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 00:14:19.484851 systemd[1]: Startup finished in 4.907s (kernel) + 22.114s (initrd) + 10.069s (userspace) = 37.091s. Jul 7 00:14:19.664654 systemd[1]: Started sshd@1-147.28.143.210:22-147.75.109.163:43208.service - OpenSSH per-connection server daemon (147.75.109.163:43208). Jul 7 00:14:19.928429 sshd[2983]: Accepted publickey for core from 147.75.109.163 port 43208 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 00:14:19.929588 sshd-session[2983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:19.932522 systemd-logind[2773]: New session 4 of user core. Jul 7 00:14:19.932980 login[2889]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:19.943094 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 00:14:19.945956 systemd-logind[2773]: New session 1 of user core. Jul 7 00:14:19.947076 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 00:14:20.128016 sshd[2985]: Connection closed by 147.75.109.163 port 43208 Jul 7 00:14:20.128351 sshd-session[2983]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:20.131185 systemd[1]: sshd@1-147.28.143.210:22-147.75.109.163:43208.service: Deactivated successfully. Jul 7 00:14:20.132706 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 00:14:20.134428 systemd-logind[2773]: Session 4 logged out. Waiting for processes to exit. Jul 7 00:14:20.135284 systemd-logind[2773]: Removed session 4. Jul 7 00:14:20.190571 systemd[1]: Started sshd@2-147.28.143.210:22-147.75.109.163:43210.service - OpenSSH per-connection server daemon (147.75.109.163:43210). Jul 7 00:14:20.485646 sshd[3003]: Accepted publickey for core from 147.75.109.163 port 43210 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 00:14:20.486760 sshd-session[3003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:20.489802 systemd-logind[2773]: New session 5 of user core. Jul 7 00:14:20.500100 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 00:14:20.700517 sshd[3005]: Connection closed by 147.75.109.163 port 43210 Jul 7 00:14:20.700838 sshd-session[3003]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:20.703431 systemd[1]: sshd@2-147.28.143.210:22-147.75.109.163:43210.service: Deactivated successfully. Jul 7 00:14:20.705499 systemd[1]: session-5.scope: Deactivated successfully. 
Jul 7 00:14:20.706105 systemd-logind[2773]: Session 5 logged out. Waiting for processes to exit. Jul 7 00:14:20.706938 systemd-logind[2773]: Removed session 5. Jul 7 00:14:20.750518 systemd[1]: Started sshd@3-147.28.143.210:22-147.75.109.163:43216.service - OpenSSH per-connection server daemon (147.75.109.163:43216). Jul 7 00:14:21.017450 sshd[3012]: Accepted publickey for core from 147.75.109.163 port 43216 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 00:14:21.018676 sshd-session[3012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:21.021687 systemd-logind[2773]: New session 6 of user core. Jul 7 00:14:21.032109 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 00:14:21.221741 sshd[3014]: Connection closed by 147.75.109.163 port 43216 Jul 7 00:14:21.222063 sshd-session[3012]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:21.224697 systemd[1]: sshd@3-147.28.143.210:22-147.75.109.163:43216.service: Deactivated successfully. Jul 7 00:14:21.226148 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 00:14:21.226690 systemd-logind[2773]: Session 6 logged out. Waiting for processes to exit. Jul 7 00:14:21.227499 systemd-logind[2773]: Removed session 6. Jul 7 00:14:21.279495 systemd[1]: Started sshd@4-147.28.143.210:22-147.75.109.163:43228.service - OpenSSH per-connection server daemon (147.75.109.163:43228). Jul 7 00:14:21.576107 sshd[3021]: Accepted publickey for core from 147.75.109.163 port 43228 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 00:14:21.577206 sshd-session[3021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:21.580095 systemd-logind[2773]: New session 7 of user core. Jul 7 00:14:21.591105 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 00:14:21.772654 sudo[3024]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 00:14:21.772899 sudo[3024]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:14:21.793451 sudo[3024]: pam_unix(sudo:session): session closed for user root Jul 7 00:14:21.837716 sshd[3023]: Connection closed by 147.75.109.163 port 43228 Jul 7 00:14:21.837993 sshd-session[3021]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:21.840992 systemd[1]: sshd@4-147.28.143.210:22-147.75.109.163:43228.service: Deactivated successfully. Jul 7 00:14:21.844330 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:14:21.844952 systemd-logind[2773]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:14:21.845774 systemd-logind[2773]: Removed session 7. Jul 7 00:14:21.888612 systemd[1]: Started sshd@5-147.28.143.210:22-147.75.109.163:43238.service - OpenSSH per-connection server daemon (147.75.109.163:43238). Jul 7 00:14:21.953781 systemd-timesyncd[2700]: Network configuration changed, trying to establish connection. Jul 7 00:14:22.160420 sshd[3031]: Accepted publickey for core from 147.75.109.163 port 43238 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 00:14:22.161566 sshd-session[3031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:22.164522 systemd-logind[2773]: New session 8 of user core. Jul 7 00:14:22.186038 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 7 00:14:22.323127 sudo[3035]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 00:14:22.323372 sudo[3035]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:14:22.326086 sudo[3035]: pam_unix(sudo:session): session closed for user root Jul 7 00:14:22.330435 sudo[3034]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 00:14:22.330677 sudo[3034]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:14:22.337874 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:14:22.369225 augenrules[3057]: No rules Jul 7 00:14:22.370245 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:14:22.371972 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:14:22.372743 sudo[3034]: pam_unix(sudo:session): session closed for user root Jul 7 00:14:22.411779 sshd[3033]: Connection closed by 147.75.109.163 port 43238 Jul 7 00:14:22.412062 sshd-session[3031]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:22.414843 systemd[1]: sshd@5-147.28.143.210:22-147.75.109.163:43238.service: Deactivated successfully. Jul 7 00:14:22.417309 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:14:22.417861 systemd-logind[2773]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:14:22.418625 systemd-logind[2773]: Removed session 8. Jul 7 00:14:22.466382 systemd[1]: Started sshd@6-147.28.143.210:22-147.75.109.163:43254.service - OpenSSH per-connection server daemon (147.75.109.163:43254). Jul 7 00:14:22.763680 sshd[3067]: Accepted publickey for core from 147.75.109.163 port 43254 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 00:14:22.764751 sshd-session[3067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:22.767894 systemd-logind[2773]: New session 9 of user core. Jul 7 00:14:22.789046 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:14:22.938092 sudo[3070]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:14:22.938337 sudo[3070]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:14:23.241593 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 00:14:23.269307 (dockerd)[3100]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:14:23.471842 dockerd[3100]: time="2025-07-07T00:14:23.471795040Z" level=info msg="Starting up" Jul 7 00:14:23.473039 dockerd[3100]: time="2025-07-07T00:14:23.473017200Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 00:14:23.499557 dockerd[3100]: time="2025-07-07T00:14:23.499502040Z" level=info msg="Loading containers: start." Jul 7 00:14:23.511945 kernel: Initializing XFRM netlink socket Jul 7 00:14:23.676196 systemd-timesyncd[2700]: Network configuration changed, trying to establish connection. Jul 7 00:14:23.706497 systemd-networkd[2698]: docker0: Link UP Jul 7 00:14:23.707280 dockerd[3100]: time="2025-07-07T00:14:23.707249240Z" level=info msg="Loading containers: done." 
Jul 7 00:14:23.716096 dockerd[3100]: time="2025-07-07T00:14:23.716071640Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:14:23.716158 dockerd[3100]: time="2025-07-07T00:14:23.716133200Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 00:14:23.716239 dockerd[3100]: time="2025-07-07T00:14:23.716227480Z" level=info msg="Initializing buildkit" Jul 7 00:14:23.730023 dockerd[3100]: time="2025-07-07T00:14:23.729996480Z" level=info msg="Completed buildkit initialization" Jul 7 00:14:23.735560 dockerd[3100]: time="2025-07-07T00:14:23.735534080Z" level=info msg="Daemon has completed initialization" Jul 7 00:14:23.735631 dockerd[3100]: time="2025-07-07T00:14:23.735586960Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:14:23.735738 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 00:14:23.482220 systemd-resolved[2699]: Clock change detected. Flushing caches. Jul 7 00:14:23.490060 systemd-journald[2268]: Time jumped backwards, rotating. Jul 7 00:14:23.482380 systemd-timesyncd[2700]: Contacted time server [2604:2dc0:202:300::140d]:123 (2.flatcar.pool.ntp.org). Jul 7 00:14:23.482426 systemd-timesyncd[2700]: Initial clock synchronization to Mon 2025-07-07 00:14:23.482174 UTC. Jul 7 00:14:23.653244 containerd[2793]: time="2025-07-07T00:14:23.653211112Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 7 00:14:23.927902 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1622532042-merged.mount: Deactivated successfully. Jul 7 00:14:24.273724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount678147502.mount: Deactivated successfully. 
Jul 7 00:14:25.544668 containerd[2793]: time="2025-07-07T00:14:25.544631232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:25.544906 containerd[2793]: time="2025-07-07T00:14:25.544673512Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718" Jul 7 00:14:25.545532 containerd[2793]: time="2025-07-07T00:14:25.545507032Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:25.547877 containerd[2793]: time="2025-07-07T00:14:25.547854232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:25.549842 containerd[2793]: time="2025-07-07T00:14:25.549806112Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.89654588s" Jul 7 00:14:25.549904 containerd[2793]: time="2025-07-07T00:14:25.549881352Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 7 00:14:25.551165 containerd[2793]: time="2025-07-07T00:14:25.551143432Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 7 00:14:26.836848 containerd[2793]: time="2025-07-07T00:14:26.836808672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:26.837072 containerd[2793]: time="2025-07-07T00:14:26.836820032Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625" Jul 7 00:14:26.837747 containerd[2793]: time="2025-07-07T00:14:26.837729392Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:26.840038 containerd[2793]: time="2025-07-07T00:14:26.840011432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:26.841034 containerd[2793]: time="2025-07-07T00:14:26.840994352Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.28981024s" Jul 7 00:14:26.841066 containerd[2793]: time="2025-07-07T00:14:26.841046392Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 7 00:14:26.841422 containerd[2793]: 
time="2025-07-07T00:14:26.841405952Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 7 00:14:27.093794 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 00:14:27.095264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:14:27.240151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:14:27.243504 (kubelet)[3440]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:14:27.276280 kubelet[3440]: E0707 00:14:27.276246 3440 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:14:27.279704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:14:27.279824 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:14:27.280789 systemd[1]: kubelet.service: Consumed 147ms CPU time, 113.4M memory peak. Jul 7 00:14:28.039453 containerd[2793]: time="2025-07-07T00:14:28.039418472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:28.039693 containerd[2793]: time="2025-07-07T00:14:28.039480192Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517" Jul 7 00:14:28.040309 containerd[2793]: time="2025-07-07T00:14:28.040287032Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:28.042599 containerd[2793]: time="2025-07-07T00:14:28.042575352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:28.043625 containerd[2793]: time="2025-07-07T00:14:28.043599912Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.202168s" Jul 7 00:14:28.043688 containerd[2793]: time="2025-07-07T00:14:28.043630072Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 7 00:14:28.044056 containerd[2793]: time="2025-07-07T00:14:28.044034152Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 7 00:14:28.895621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1555678795.mount: Deactivated successfully. 
Jul 7 00:14:29.099665 containerd[2793]: time="2025-07-07T00:14:29.099623592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:29.099835 containerd[2793]: time="2025-07-07T00:14:29.099624072Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474" Jul 7 00:14:29.100294 containerd[2793]: time="2025-07-07T00:14:29.100275912Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:29.101654 containerd[2793]: time="2025-07-07T00:14:29.101631512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:29.102354 containerd[2793]: time="2025-07-07T00:14:29.102328232Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.05826076s" Jul 7 00:14:29.102374 containerd[2793]: time="2025-07-07T00:14:29.102361232Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 7 00:14:29.102671 containerd[2793]: time="2025-07-07T00:14:29.102646792Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 7 00:14:29.685499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3041166923.mount: Deactivated successfully. 
Jul 7 00:14:30.757033 containerd[2793]: time="2025-07-07T00:14:30.756959392Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Jul 7 00:14:30.757033 containerd[2793]: time="2025-07-07T00:14:30.756967272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:30.757927 containerd[2793]: time="2025-07-07T00:14:30.757900112Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:30.760556 containerd[2793]: time="2025-07-07T00:14:30.760507152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:30.761560 containerd[2793]: time="2025-07-07T00:14:30.761506552Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.65882112s" Jul 7 00:14:30.761560 containerd[2793]: time="2025-07-07T00:14:30.761534712Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 7 00:14:30.762142 containerd[2793]: time="2025-07-07T00:14:30.761932672Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:14:31.172757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2136502313.mount: Deactivated successfully. 
Jul 7 00:14:31.173356 containerd[2793]: time="2025-07-07T00:14:31.173205312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:14:31.173356 containerd[2793]: time="2025-07-07T00:14:31.173233352Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 7 00:14:31.173952 containerd[2793]: time="2025-07-07T00:14:31.173928872Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:14:31.175523 containerd[2793]: time="2025-07-07T00:14:31.175498272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:14:31.176186 containerd[2793]: time="2025-07-07T00:14:31.176160432Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 414.19756ms" Jul 7 00:14:31.176211 containerd[2793]: time="2025-07-07T00:14:31.176188392Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 7 00:14:31.176502 containerd[2793]: time="2025-07-07T00:14:31.176486152Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 7 00:14:31.601448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1181167987.mount: Deactivated successfully. 
Jul 7 00:14:33.766801 containerd[2793]: time="2025-07-07T00:14:33.766706432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:33.766801 containerd[2793]: time="2025-07-07T00:14:33.766683432Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601" Jul 7 00:14:33.767696 containerd[2793]: time="2025-07-07T00:14:33.767671152Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:33.770076 containerd[2793]: time="2025-07-07T00:14:33.770052632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:33.771172 containerd[2793]: time="2025-07-07T00:14:33.771133352Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.59461004s" Jul 7 00:14:33.771208 containerd[2793]: time="2025-07-07T00:14:33.771183872Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 7 00:14:37.343943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 00:14:37.345881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:14:37.484842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:14:37.488112 (kubelet)[3657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:14:37.517967 kubelet[3657]: E0707 00:14:37.517937 3657 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:14:37.520483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:14:37.520606 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:14:37.521099 systemd[1]: kubelet.service: Consumed 136ms CPU time, 116.3M memory peak. Jul 7 00:14:39.055255 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:14:39.055462 systemd[1]: kubelet.service: Consumed 136ms CPU time, 116.3M memory peak. Jul 7 00:14:39.058023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:14:39.074835 systemd[1]: Reload requested from client PID 3687 ('systemctl') (unit session-9.scope)... Jul 7 00:14:39.074847 systemd[1]: Reloading... Jul 7 00:14:39.149674 zram_generator::config[3733]: No configuration found. Jul 7 00:14:39.225710 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:14:39.329091 systemd[1]: Reloading finished in 253 ms. 
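After this reload the next kubelet start (kubelet[3796], below) no longer reports KUBELET_KUBEADM_ARGS as unset and launches with --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir flags, which on a kubeadm-managed node are normally read from /var/lib/kubelet/kubeadm-flags.env through the kubeadm systemd drop-in. A hypothetical sketch of such a file, assembled from flags and paths this log mentions rather than copied from the host:

  # /var/lib/kubelet/kubeadm-flags.env (illustrative sketch only)
  KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10 --volume-plugin-dir=/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"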
Jul 7 00:14:39.380702 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 00:14:39.380884 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 00:14:39.381323 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:14:39.384191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:14:39.510539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:14:39.514002 (kubelet)[3796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:14:39.545117 kubelet[3796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:14:39.545117 kubelet[3796]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:14:39.545117 kubelet[3796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:14:39.545323 kubelet[3796]: I0707 00:14:39.545169 3796 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:14:40.091474 kubelet[3796]: I0707 00:14:40.091441 3796 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 00:14:40.091474 kubelet[3796]: I0707 00:14:40.091466 3796 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:14:40.091666 kubelet[3796]: I0707 00:14:40.091648 3796 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 00:14:40.115607 kubelet[3796]: E0707 00:14:40.115581 3796 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://147.28.143.210:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.143.210:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 7 00:14:40.115969 kubelet[3796]: I0707 00:14:40.115956 3796 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:14:40.121466 kubelet[3796]: I0707 00:14:40.121446 3796 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:14:40.141691 kubelet[3796]: I0707 00:14:40.141644 3796 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:14:40.141998 kubelet[3796]: I0707 00:14:40.141969 3796 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:14:40.142129 kubelet[3796]: I0707 00:14:40.141998 3796 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-a-1996c8fb49","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:14:40.142210 kubelet[3796]: I0707 00:14:40.142198 3796 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:14:40.142210 kubelet[3796]: I0707 00:14:40.142207 3796 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 00:14:40.142975 kubelet[3796]: I0707 00:14:40.142957 3796 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:14:40.160906 kubelet[3796]: I0707 00:14:40.160887 3796 kubelet.go:480] "Attempting to sync node with API server" Jul 7 00:14:40.160947 kubelet[3796]: I0707 00:14:40.160908 3796 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:14:40.160947 kubelet[3796]: I0707 00:14:40.160933 3796 kubelet.go:386] "Adding apiserver pod source" Jul 7 00:14:40.162354 kubelet[3796]: I0707 00:14:40.162339 3796 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:14:40.165222 kubelet[3796]: E0707 00:14:40.165193 3796 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://147.28.143.210:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.143.210:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 00:14:40.165368 kubelet[3796]: E0707 00:14:40.165344 3796 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://147.28.143.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-1996c8fb49&limit=500&resourceVersion=0\": dial tcp 147.28.143.210:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Jul 7 00:14:40.165409 kubelet[3796]: I0707 00:14:40.165396 3796 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:14:40.166075 kubelet[3796]: I0707 00:14:40.166063 3796 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 00:14:40.166186 kubelet[3796]: W0707 00:14:40.166178 3796 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 00:14:40.168403 kubelet[3796]: I0707 00:14:40.168392 3796 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:14:40.168437 kubelet[3796]: I0707 00:14:40.168430 3796 server.go:1289] "Started kubelet" Jul 7 00:14:40.168515 kubelet[3796]: I0707 00:14:40.168486 3796 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:14:40.169628 kubelet[3796]: I0707 00:14:40.169564 3796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:14:40.169628 kubelet[3796]: I0707 00:14:40.169588 3796 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:14:40.169729 kubelet[3796]: I0707 00:14:40.169629 3796 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:14:40.169729 kubelet[3796]: E0707 00:14:40.169695 3796 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-1996c8fb49\" not found" Jul 7 00:14:40.169781 kubelet[3796]: I0707 00:14:40.169751 3796 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:14:40.169839 kubelet[3796]: I0707 00:14:40.169819 3796 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:14:40.170063 kubelet[3796]: E0707 00:14:40.170036 3796 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://147.28.143.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.143.210:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 00:14:40.170121 kubelet[3796]: E0707 00:14:40.170081 3796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.143.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-1996c8fb49?timeout=10s\": dial tcp 147.28.143.210:6443: connect: connection refused" interval="200ms" Jul 7 00:14:40.170262 kubelet[3796]: I0707 00:14:40.170175 3796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:14:40.170585 kubelet[3796]: I0707 00:14:40.170569 3796 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:14:40.170618 kubelet[3796]: I0707 00:14:40.170604 3796 server.go:317] "Adding debug handlers to kubelet server" Jul 7 00:14:40.171090 kubelet[3796]: I0707 00:14:40.171075 3796 factory.go:223] Registration of the systemd container factory successfully Jul 7 00:14:40.171183 kubelet[3796]: I0707 00:14:40.171170 3796 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:14:40.171886 kubelet[3796]: I0707 00:14:40.171871 3796 factory.go:223] 
Registration of the containerd container factory successfully Jul 7 00:14:40.175521 kubelet[3796]: E0707 00:14:40.175487 3796 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:14:40.175549 kubelet[3796]: E0707 00:14:40.171781 3796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.143.210:6443/api/v1/namespaces/default/events\": dial tcp 147.28.143.210:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-a-1996c8fb49.184fcfd1d32c0378 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-a-1996c8fb49,UID:ci-4344.1.1-a-1996c8fb49,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-a-1996c8fb49,},FirstTimestamp:2025-07-07 00:14:40.168403832 +0000 UTC m=+0.651361081,LastTimestamp:2025-07-07 00:14:40.168403832 +0000 UTC m=+0.651361081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-a-1996c8fb49,}" Jul 7 00:14:40.187685 kubelet[3796]: I0707 00:14:40.187635 3796 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 00:14:40.187713 kubelet[3796]: I0707 00:14:40.187681 3796 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:14:40.187713 kubelet[3796]: I0707 00:14:40.187695 3796 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:14:40.187713 kubelet[3796]: I0707 00:14:40.187711 3796 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:14:40.188590 kubelet[3796]: I0707 00:14:40.188579 3796 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 00:14:40.188608 kubelet[3796]: I0707 00:14:40.188595 3796 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 00:14:40.188627 kubelet[3796]: I0707 00:14:40.188611 3796 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 7 00:14:40.188627 kubelet[3796]: I0707 00:14:40.188619 3796 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 00:14:40.188674 kubelet[3796]: I0707 00:14:40.188658 3796 policy_none.go:49] "None policy: Start" Jul 7 00:14:40.188674 kubelet[3796]: E0707 00:14:40.188655 3796 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:14:40.188715 kubelet[3796]: I0707 00:14:40.188685 3796 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:14:40.188715 kubelet[3796]: I0707 00:14:40.188696 3796 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:14:40.189053 kubelet[3796]: E0707 00:14:40.189029 3796 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://147.28.143.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.143.210:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 00:14:40.192220 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 00:14:40.204841 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 7 00:14:40.207360 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 00:14:40.219441 kubelet[3796]: E0707 00:14:40.219421 3796 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 00:14:40.219620 kubelet[3796]: I0707 00:14:40.219607 3796 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:14:40.219662 kubelet[3796]: I0707 00:14:40.219621 3796 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:14:40.219830 kubelet[3796]: I0707 00:14:40.219814 3796 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:14:40.220320 kubelet[3796]: E0707 00:14:40.220302 3796 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 00:14:40.220345 kubelet[3796]: E0707 00:14:40.220338 3796 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-a-1996c8fb49\" not found" Jul 7 00:14:40.298458 systemd[1]: Created slice kubepods-burstable-pod034ccec060c3801ebd90bee962e3305d.slice - libcontainer container kubepods-burstable-pod034ccec060c3801ebd90bee962e3305d.slice. Jul 7 00:14:40.318050 kubelet[3796]: E0707 00:14:40.318020 3796 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-1996c8fb49\" not found" node="ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:40.319583 systemd[1]: Created slice kubepods-burstable-pod49cddecb93115476c84c84dfb2623d24.slice - libcontainer container kubepods-burstable-pod49cddecb93115476c84c84dfb2623d24.slice. Jul 7 00:14:40.320953 kubelet[3796]: I0707 00:14:40.320932 3796 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:40.321283 kubelet[3796]: E0707 00:14:40.321262 3796 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.143.210:6443/api/v1/nodes\": dial tcp 147.28.143.210:6443: connect: connection refused" node="ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:40.332674 kubelet[3796]: E0707 00:14:40.332651 3796 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-1996c8fb49\" not found" node="ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:40.334829 systemd[1]: Created slice kubepods-burstable-podab6a37e837cb5e28930852612c2cc51d.slice - libcontainer container kubepods-burstable-podab6a37e837cb5e28930852612c2cc51d.slice. 
Jul 7 00:14:40.336085 kubelet[3796]: E0707 00:14:40.336066 3796 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-1996c8fb49\" not found" node="ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:40.370486 kubelet[3796]: E0707 00:14:40.370452 3796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.143.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-1996c8fb49?timeout=10s\": dial tcp 147.28.143.210:6443: connect: connection refused" interval="400ms" Jul 7 00:14:40.470744 kubelet[3796]: I0707 00:14:40.470707 3796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/034ccec060c3801ebd90bee962e3305d-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-a-1996c8fb49\" (UID: \"034ccec060c3801ebd90bee962e3305d\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:40.470784 kubelet[3796]: I0707 00:14:40.470772 3796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/034ccec060c3801ebd90bee962e3305d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-a-1996c8fb49\" (UID: \"034ccec060c3801ebd90bee962e3305d\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:40.470829 kubelet[3796]: I0707 00:14:40.470794 3796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49cddecb93115476c84c84dfb2623d24-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-1996c8fb49\" (UID: \"49cddecb93115476c84c84dfb2623d24\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:40.470829 kubelet[3796]: I0707 00:14:40.470811 3796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49cddecb93115476c84c84dfb2623d24-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-1996c8fb49\" (UID: \"49cddecb93115476c84c84dfb2623d24\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:40.470909 kubelet[3796]: I0707 00:14:40.470851 3796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49cddecb93115476c84c84dfb2623d24-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-a-1996c8fb49\" (UID: \"49cddecb93115476c84c84dfb2623d24\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:40.470909 kubelet[3796]: I0707 00:14:40.470883 3796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49cddecb93115476c84c84dfb2623d24-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-a-1996c8fb49\" (UID: \"49cddecb93115476c84c84dfb2623d24\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:40.470909 kubelet[3796]: I0707 00:14:40.470903 3796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49cddecb93115476c84c84dfb2623d24-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-a-1996c8fb49\" (UID: \"49cddecb93115476c84c84dfb2623d24\") " 
pod="kube-system/kube-controller-manager-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:40.471001 kubelet[3796]: I0707 00:14:40.470920 3796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab6a37e837cb5e28930852612c2cc51d-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-a-1996c8fb49\" (UID: \"ab6a37e837cb5e28930852612c2cc51d\") " pod="kube-system/kube-scheduler-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:40.471001 kubelet[3796]: I0707 00:14:40.470944 3796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/034ccec060c3801ebd90bee962e3305d-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-a-1996c8fb49\" (UID: \"034ccec060c3801ebd90bee962e3305d\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:40.522799 kubelet[3796]: I0707 00:14:40.522777 3796 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:40.523051 kubelet[3796]: E0707 00:14:40.523026 3796 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.143.210:6443/api/v1/nodes\": dial tcp 147.28.143.210:6443: connect: connection refused" node="ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:40.618878 containerd[2793]: time="2025-07-07T00:14:40.618846792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-a-1996c8fb49,Uid:034ccec060c3801ebd90bee962e3305d,Namespace:kube-system,Attempt:0,}" Jul 7 00:14:40.629314 containerd[2793]: time="2025-07-07T00:14:40.629261632Z" level=info msg="connecting to shim 0f097dbeeb414148b292cc4e6b6b56c945b89b7bb2b38a4636de8b7c8d9af294" address="unix:///run/containerd/s/7b11b132c82b26aa3d75c4af1c3b16f8ff800fb9d3cbf45807b15087858a530b" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:14:40.633298 containerd[2793]: time="2025-07-07T00:14:40.633270952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-a-1996c8fb49,Uid:49cddecb93115476c84c84dfb2623d24,Namespace:kube-system,Attempt:0,}" Jul 7 00:14:40.636855 containerd[2793]: time="2025-07-07T00:14:40.636830152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-a-1996c8fb49,Uid:ab6a37e837cb5e28930852612c2cc51d,Namespace:kube-system,Attempt:0,}" Jul 7 00:14:40.642179 containerd[2793]: time="2025-07-07T00:14:40.642150552Z" level=info msg="connecting to shim 92a7ad97749626c3c900615f38ecf9a7ee73fbbed003952fdd366c9fa86b3f34" address="unix:///run/containerd/s/d9e5d3d3d16c543aa2ffb0a01a50b8380b54a438ee85fa3417b9c887391ac0e2" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:14:40.644797 containerd[2793]: time="2025-07-07T00:14:40.644768352Z" level=info msg="connecting to shim 1efeea5f619d824e4f1fa66beca1cc91338d07b34c82aee2ba8cbe7db7a01567" address="unix:///run/containerd/s/b589cb8666a99a11a58845fe62c73f29e3b9e124faa3b6b8174e15f9c468ffea" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:14:40.656808 systemd[1]: Started cri-containerd-0f097dbeeb414148b292cc4e6b6b56c945b89b7bb2b38a4636de8b7c8d9af294.scope - libcontainer container 0f097dbeeb414148b292cc4e6b6b56c945b89b7bb2b38a4636de8b7c8d9af294. Jul 7 00:14:40.663014 systemd[1]: Started cri-containerd-1efeea5f619d824e4f1fa66beca1cc91338d07b34c82aee2ba8cbe7db7a01567.scope - libcontainer container 1efeea5f619d824e4f1fa66beca1cc91338d07b34c82aee2ba8cbe7db7a01567. 
Jul 7 00:14:40.664288 systemd[1]: Started cri-containerd-92a7ad97749626c3c900615f38ecf9a7ee73fbbed003952fdd366c9fa86b3f34.scope - libcontainer container 92a7ad97749626c3c900615f38ecf9a7ee73fbbed003952fdd366c9fa86b3f34. Jul 7 00:14:40.684345 containerd[2793]: time="2025-07-07T00:14:40.684319192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-a-1996c8fb49,Uid:034ccec060c3801ebd90bee962e3305d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f097dbeeb414148b292cc4e6b6b56c945b89b7bb2b38a4636de8b7c8d9af294\"" Jul 7 00:14:40.686996 containerd[2793]: time="2025-07-07T00:14:40.686976392Z" level=info msg="CreateContainer within sandbox \"0f097dbeeb414148b292cc4e6b6b56c945b89b7bb2b38a4636de8b7c8d9af294\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:14:40.688388 containerd[2793]: time="2025-07-07T00:14:40.688365712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-a-1996c8fb49,Uid:ab6a37e837cb5e28930852612c2cc51d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1efeea5f619d824e4f1fa66beca1cc91338d07b34c82aee2ba8cbe7db7a01567\"" Jul 7 00:14:40.704059 containerd[2793]: time="2025-07-07T00:14:40.704031272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-a-1996c8fb49,Uid:49cddecb93115476c84c84dfb2623d24,Namespace:kube-system,Attempt:0,} returns sandbox id \"92a7ad97749626c3c900615f38ecf9a7ee73fbbed003952fdd366c9fa86b3f34\"" Jul 7 00:14:40.704602 containerd[2793]: time="2025-07-07T00:14:40.704585592Z" level=info msg="CreateContainer within sandbox \"1efeea5f619d824e4f1fa66beca1cc91338d07b34c82aee2ba8cbe7db7a01567\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:14:40.705544 containerd[2793]: time="2025-07-07T00:14:40.705519192Z" level=info msg="Container 5b23b610aae036b80864d8ee81114ac0071df1986864c16ee31f87d7275ff831: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:14:40.705979 containerd[2793]: time="2025-07-07T00:14:40.705949992Z" level=info msg="CreateContainer within sandbox \"92a7ad97749626c3c900615f38ecf9a7ee73fbbed003952fdd366c9fa86b3f34\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:14:40.709415 containerd[2793]: time="2025-07-07T00:14:40.709386192Z" level=info msg="Container 5c0e66a6816b597cae3a506b55600996f5d4c1c367f420efbf43a7060ccfbc31: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:14:40.710933 containerd[2793]: time="2025-07-07T00:14:40.710905152Z" level=info msg="CreateContainer within sandbox \"0f097dbeeb414148b292cc4e6b6b56c945b89b7bb2b38a4636de8b7c8d9af294\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5b23b610aae036b80864d8ee81114ac0071df1986864c16ee31f87d7275ff831\"" Jul 7 00:14:40.711484 containerd[2793]: time="2025-07-07T00:14:40.711462952Z" level=info msg="StartContainer for \"5b23b610aae036b80864d8ee81114ac0071df1986864c16ee31f87d7275ff831\"" Jul 7 00:14:40.712168 containerd[2793]: time="2025-07-07T00:14:40.712145072Z" level=info msg="Container 04d8400a4413563455243978fccb219411f9d485f575256d561d3972fa834c10: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:14:40.712493 containerd[2793]: time="2025-07-07T00:14:40.712468632Z" level=info msg="connecting to shim 5b23b610aae036b80864d8ee81114ac0071df1986864c16ee31f87d7275ff831" address="unix:///run/containerd/s/7b11b132c82b26aa3d75c4af1c3b16f8ff800fb9d3cbf45807b15087858a530b" protocol=ttrpc version=3 Jul 7 00:14:40.712980 containerd[2793]: 
time="2025-07-07T00:14:40.712955872Z" level=info msg="CreateContainer within sandbox \"1efeea5f619d824e4f1fa66beca1cc91338d07b34c82aee2ba8cbe7db7a01567\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5c0e66a6816b597cae3a506b55600996f5d4c1c367f420efbf43a7060ccfbc31\"" Jul 7 00:14:40.713248 containerd[2793]: time="2025-07-07T00:14:40.713231632Z" level=info msg="StartContainer for \"5c0e66a6816b597cae3a506b55600996f5d4c1c367f420efbf43a7060ccfbc31\"" Jul 7 00:14:40.714144 containerd[2793]: time="2025-07-07T00:14:40.714123872Z" level=info msg="connecting to shim 5c0e66a6816b597cae3a506b55600996f5d4c1c367f420efbf43a7060ccfbc31" address="unix:///run/containerd/s/b589cb8666a99a11a58845fe62c73f29e3b9e124faa3b6b8174e15f9c468ffea" protocol=ttrpc version=3 Jul 7 00:14:40.715143 containerd[2793]: time="2025-07-07T00:14:40.715124432Z" level=info msg="CreateContainer within sandbox \"92a7ad97749626c3c900615f38ecf9a7ee73fbbed003952fdd366c9fa86b3f34\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"04d8400a4413563455243978fccb219411f9d485f575256d561d3972fa834c10\"" Jul 7 00:14:40.715366 containerd[2793]: time="2025-07-07T00:14:40.715347752Z" level=info msg="StartContainer for \"04d8400a4413563455243978fccb219411f9d485f575256d561d3972fa834c10\"" Jul 7 00:14:40.716286 containerd[2793]: time="2025-07-07T00:14:40.716264432Z" level=info msg="connecting to shim 04d8400a4413563455243978fccb219411f9d485f575256d561d3972fa834c10" address="unix:///run/containerd/s/d9e5d3d3d16c543aa2ffb0a01a50b8380b54a438ee85fa3417b9c887391ac0e2" protocol=ttrpc version=3 Jul 7 00:14:40.731780 systemd[1]: Started cri-containerd-5b23b610aae036b80864d8ee81114ac0071df1986864c16ee31f87d7275ff831.scope - libcontainer container 5b23b610aae036b80864d8ee81114ac0071df1986864c16ee31f87d7275ff831. Jul 7 00:14:40.734850 systemd[1]: Started cri-containerd-04d8400a4413563455243978fccb219411f9d485f575256d561d3972fa834c10.scope - libcontainer container 04d8400a4413563455243978fccb219411f9d485f575256d561d3972fa834c10. Jul 7 00:14:40.735920 systemd[1]: Started cri-containerd-5c0e66a6816b597cae3a506b55600996f5d4c1c367f420efbf43a7060ccfbc31.scope - libcontainer container 5c0e66a6816b597cae3a506b55600996f5d4c1c367f420efbf43a7060ccfbc31. 
Jul 7 00:14:40.761002 containerd[2793]: time="2025-07-07T00:14:40.760970472Z" level=info msg="StartContainer for \"5b23b610aae036b80864d8ee81114ac0071df1986864c16ee31f87d7275ff831\" returns successfully" Jul 7 00:14:40.763389 containerd[2793]: time="2025-07-07T00:14:40.763369832Z" level=info msg="StartContainer for \"04d8400a4413563455243978fccb219411f9d485f575256d561d3972fa834c10\" returns successfully" Jul 7 00:14:40.764816 containerd[2793]: time="2025-07-07T00:14:40.764795552Z" level=info msg="StartContainer for \"5c0e66a6816b597cae3a506b55600996f5d4c1c367f420efbf43a7060ccfbc31\" returns successfully" Jul 7 00:14:40.771499 kubelet[3796]: E0707 00:14:40.771470 3796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.143.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-1996c8fb49?timeout=10s\": dial tcp 147.28.143.210:6443: connect: connection refused" interval="800ms" Jul 7 00:14:40.925386 kubelet[3796]: I0707 00:14:40.925329 3796 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:41.193227 kubelet[3796]: E0707 00:14:41.193181 3796 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-1996c8fb49\" not found" node="ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:41.194184 kubelet[3796]: E0707 00:14:41.194166 3796 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-1996c8fb49\" not found" node="ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:41.195288 kubelet[3796]: E0707 00:14:41.195272 3796 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-1996c8fb49\" not found" node="ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:41.738490 kubelet[3796]: E0707 00:14:41.738459 3796 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.1-a-1996c8fb49\" not found" node="ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:41.840305 kubelet[3796]: I0707 00:14:41.840282 3796 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:41.870492 kubelet[3796]: I0707 00:14:41.870475 3796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:41.874120 kubelet[3796]: E0707 00:14:41.874102 3796 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-1996c8fb49\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:41.874160 kubelet[3796]: I0707 00:14:41.874122 3796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:41.875540 kubelet[3796]: E0707 00:14:41.875525 3796 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-a-1996c8fb49\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:41.875568 kubelet[3796]: I0707 00:14:41.875543 3796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:41.876819 kubelet[3796]: E0707 00:14:41.876804 3796 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-1996c8fb49\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:42.163641 kubelet[3796]: I0707 00:14:42.163626 3796 apiserver.go:52] "Watching apiserver" Jul 7 00:14:42.169822 kubelet[3796]: I0707 00:14:42.169803 3796 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:14:42.195607 kubelet[3796]: I0707 00:14:42.195595 3796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:42.195735 kubelet[3796]: I0707 00:14:42.195725 3796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:42.197078 kubelet[3796]: E0707 00:14:42.197065 3796 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-1996c8fb49\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:42.197106 kubelet[3796]: E0707 00:14:42.197080 3796 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-1996c8fb49\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:43.197112 kubelet[3796]: I0707 00:14:43.197088 3796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:43.200111 kubelet[3796]: I0707 00:14:43.200082 3796 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 00:14:43.772848 kubelet[3796]: I0707 00:14:43.772826 3796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:43.775424 kubelet[3796]: I0707 00:14:43.775412 3796 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 00:14:44.006350 systemd[1]: Reload requested from client PID 4220 ('systemctl') (unit session-9.scope)... Jul 7 00:14:44.006360 systemd[1]: Reloading... Jul 7 00:14:44.071676 zram_generator::config[4266]: No configuration found. Jul 7 00:14:44.147935 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:14:44.260943 systemd[1]: Reloading finished in 254 ms. Jul 7 00:14:44.288519 kubelet[3796]: I0707 00:14:44.288499 3796 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:14:44.289932 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:14:44.299401 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:14:44.299677 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:14:44.299730 systemd[1]: kubelet.service: Consumed 1.055s CPU time, 143.2M memory peak. Jul 7 00:14:44.301460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:14:44.440446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 00:14:44.443739 (kubelet)[4328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:14:44.472410 kubelet[4328]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:14:44.472410 kubelet[4328]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:14:44.472410 kubelet[4328]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:14:44.472593 kubelet[4328]: I0707 00:14:44.472452 4328 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:14:44.478107 kubelet[4328]: I0707 00:14:44.478086 4328 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 00:14:44.478133 kubelet[4328]: I0707 00:14:44.478109 4328 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:14:44.478303 kubelet[4328]: I0707 00:14:44.478293 4328 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 00:14:44.479464 kubelet[4328]: I0707 00:14:44.479452 4328 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 7 00:14:44.481534 kubelet[4328]: I0707 00:14:44.481519 4328 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:14:44.484250 kubelet[4328]: I0707 00:14:44.484235 4328 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:14:44.502852 kubelet[4328]: I0707 00:14:44.502826 4328 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:14:44.503042 kubelet[4328]: I0707 00:14:44.503017 4328 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:14:44.503175 kubelet[4328]: I0707 00:14:44.503039 4328 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-a-1996c8fb49","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:14:44.503245 kubelet[4328]: I0707 00:14:44.503183 4328 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:14:44.503245 kubelet[4328]: I0707 00:14:44.503191 4328 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 00:14:44.503285 kubelet[4328]: I0707 00:14:44.503249 4328 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:14:44.503562 kubelet[4328]: I0707 00:14:44.503550 4328 kubelet.go:480] "Attempting to sync node with API server" Jul 7 00:14:44.503587 kubelet[4328]: I0707 00:14:44.503567 4328 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:14:44.503606 kubelet[4328]: I0707 00:14:44.503589 4328 kubelet.go:386] "Adding apiserver pod source" Jul 7 00:14:44.503606 kubelet[4328]: I0707 00:14:44.503601 4328 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:14:44.504310 kubelet[4328]: I0707 00:14:44.504294 4328 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:14:44.504855 kubelet[4328]: I0707 00:14:44.504841 4328 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 00:14:44.506477 kubelet[4328]: I0707 00:14:44.506463 4328 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:14:44.506513 kubelet[4328]: I0707 00:14:44.506500 4328 server.go:1289] "Started kubelet" Jul 7 00:14:44.506570 kubelet[4328]: I0707 00:14:44.506543 4328 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:14:44.507092 kubelet[4328]: I0707 
00:14:44.506890 4328 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:14:44.507337 kubelet[4328]: I0707 00:14:44.507325 4328 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:14:44.508794 kubelet[4328]: I0707 00:14:44.508780 4328 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:14:44.509785 kubelet[4328]: I0707 00:14:44.509763 4328 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:14:44.510238 kubelet[4328]: E0707 00:14:44.510222 4328 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-1996c8fb49\" not found" Jul 7 00:14:44.510339 kubelet[4328]: I0707 00:14:44.510325 4328 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:14:44.510358 kubelet[4328]: E0707 00:14:44.510339 4328 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:14:44.510377 kubelet[4328]: I0707 00:14:44.510373 4328 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:14:44.510490 kubelet[4328]: I0707 00:14:44.510478 4328 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:14:44.510763 kubelet[4328]: I0707 00:14:44.510733 4328 factory.go:223] Registration of the systemd container factory successfully Jul 7 00:14:44.510864 kubelet[4328]: I0707 00:14:44.510849 4328 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:14:44.511433 kubelet[4328]: I0707 00:14:44.511418 4328 server.go:317] "Adding debug handlers to kubelet server" Jul 7 00:14:44.511574 kubelet[4328]: I0707 00:14:44.511560 4328 factory.go:223] Registration of the containerd container factory successfully Jul 7 00:14:44.519520 kubelet[4328]: I0707 00:14:44.519473 4328 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 00:14:44.520499 kubelet[4328]: I0707 00:14:44.520479 4328 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 00:14:44.520520 kubelet[4328]: I0707 00:14:44.520500 4328 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 00:14:44.520520 kubelet[4328]: I0707 00:14:44.520518 4328 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 00:14:44.520561 kubelet[4328]: I0707 00:14:44.520526 4328 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 00:14:44.520580 kubelet[4328]: E0707 00:14:44.520569 4328 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:14:44.542969 kubelet[4328]: I0707 00:14:44.542950 4328 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:14:44.542969 kubelet[4328]: I0707 00:14:44.542965 4328 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:14:44.543009 kubelet[4328]: I0707 00:14:44.542984 4328 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:14:44.543131 kubelet[4328]: I0707 00:14:44.543112 4328 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:14:44.543153 kubelet[4328]: I0707 00:14:44.543129 4328 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:14:44.543153 kubelet[4328]: I0707 00:14:44.543146 4328 policy_none.go:49] "None policy: Start" Jul 7 00:14:44.543186 kubelet[4328]: I0707 00:14:44.543154 4328 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:14:44.543186 kubelet[4328]: I0707 00:14:44.543163 4328 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:14:44.543246 kubelet[4328]: I0707 00:14:44.543239 4328 state_mem.go:75] "Updated machine memory state" Jul 7 00:14:44.546250 kubelet[4328]: E0707 00:14:44.546233 4328 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 00:14:44.546407 kubelet[4328]: I0707 00:14:44.546392 4328 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:14:44.546460 kubelet[4328]: I0707 00:14:44.546404 4328 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:14:44.546600 kubelet[4328]: I0707 00:14:44.546586 4328 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:14:44.547149 kubelet[4328]: E0707 00:14:44.547131 4328 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 00:14:44.621803 kubelet[4328]: I0707 00:14:44.621784 4328 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:44.621862 kubelet[4328]: I0707 00:14:44.621810 4328 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:44.621884 kubelet[4328]: I0707 00:14:44.621857 4328 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:44.628536 kubelet[4328]: I0707 00:14:44.628516 4328 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 00:14:44.628713 kubelet[4328]: I0707 00:14:44.628695 4328 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 00:14:44.628757 kubelet[4328]: I0707 00:14:44.628734 4328 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 00:14:44.628794 kubelet[4328]: E0707 00:14:44.628783 4328 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-1996c8fb49\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:44.628833 kubelet[4328]: E0707 00:14:44.628740 4328 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-1996c8fb49\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:44.648835 kubelet[4328]: I0707 00:14:44.648818 4328 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:44.652081 kubelet[4328]: I0707 00:14:44.652064 4328 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:44.652145 kubelet[4328]: I0707 00:14:44.652135 4328 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:44.812232 kubelet[4328]: I0707 00:14:44.812141 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/034ccec060c3801ebd90bee962e3305d-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-a-1996c8fb49\" (UID: \"034ccec060c3801ebd90bee962e3305d\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:44.812232 kubelet[4328]: I0707 00:14:44.812170 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49cddecb93115476c84c84dfb2623d24-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-a-1996c8fb49\" (UID: \"49cddecb93115476c84c84dfb2623d24\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:44.812232 kubelet[4328]: I0707 00:14:44.812194 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49cddecb93115476c84c84dfb2623d24-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-1996c8fb49\" (UID: \"49cddecb93115476c84c84dfb2623d24\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:44.812232 kubelet[4328]: 
I0707 00:14:44.812211 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49cddecb93115476c84c84dfb2623d24-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-a-1996c8fb49\" (UID: \"49cddecb93115476c84c84dfb2623d24\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:44.812542 kubelet[4328]: I0707 00:14:44.812280 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/034ccec060c3801ebd90bee962e3305d-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-a-1996c8fb49\" (UID: \"034ccec060c3801ebd90bee962e3305d\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:44.812542 kubelet[4328]: I0707 00:14:44.812325 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/034ccec060c3801ebd90bee962e3305d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-a-1996c8fb49\" (UID: \"034ccec060c3801ebd90bee962e3305d\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:44.812542 kubelet[4328]: I0707 00:14:44.812357 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49cddecb93115476c84c84dfb2623d24-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-1996c8fb49\" (UID: \"49cddecb93115476c84c84dfb2623d24\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:44.812542 kubelet[4328]: I0707 00:14:44.812375 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49cddecb93115476c84c84dfb2623d24-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-a-1996c8fb49\" (UID: \"49cddecb93115476c84c84dfb2623d24\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:44.812542 kubelet[4328]: I0707 00:14:44.812389 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab6a37e837cb5e28930852612c2cc51d-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-a-1996c8fb49\" (UID: \"ab6a37e837cb5e28930852612c2cc51d\") " pod="kube-system/kube-scheduler-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:45.504707 kubelet[4328]: I0707 00:14:45.504654 4328 apiserver.go:52] "Watching apiserver" Jul 7 00:14:45.510859 kubelet[4328]: I0707 00:14:45.510832 4328 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:14:45.527408 kubelet[4328]: I0707 00:14:45.527385 4328 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:45.527475 kubelet[4328]: I0707 00:14:45.527428 4328 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:45.530332 kubelet[4328]: I0707 00:14:45.530309 4328 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 00:14:45.530387 kubelet[4328]: E0707 00:14:45.530358 4328 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4344.1.1-a-1996c8fb49\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:45.530428 kubelet[4328]: I0707 00:14:45.530408 4328 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 00:14:45.530465 kubelet[4328]: E0707 00:14:45.530448 4328 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-1996c8fb49\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-a-1996c8fb49" Jul 7 00:14:45.546075 kubelet[4328]: I0707 00:14:45.546031 4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-1996c8fb49" podStartSLOduration=1.546018552 podStartE2EDuration="1.546018552s" podCreationTimestamp="2025-07-07 00:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:14:45.545980952 +0000 UTC m=+1.099240081" watchObservedRunningTime="2025-07-07 00:14:45.546018552 +0000 UTC m=+1.099277641" Jul 7 00:14:45.546168 kubelet[4328]: I0707 00:14:45.546116 4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-a-1996c8fb49" podStartSLOduration=2.546112312 podStartE2EDuration="2.546112312s" podCreationTimestamp="2025-07-07 00:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:14:45.540950752 +0000 UTC m=+1.094209921" watchObservedRunningTime="2025-07-07 00:14:45.546112312 +0000 UTC m=+1.099371441" Jul 7 00:14:45.556559 kubelet[4328]: I0707 00:14:45.556480 4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-a-1996c8fb49" podStartSLOduration=2.556438592 podStartE2EDuration="2.556438592s" podCreationTimestamp="2025-07-07 00:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:14:45.552750592 +0000 UTC m=+1.106009721" watchObservedRunningTime="2025-07-07 00:14:45.556438592 +0000 UTC m=+1.109697681" Jul 7 00:14:49.341755 kubelet[4328]: I0707 00:14:49.341719 4328 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:14:49.342062 containerd[2793]: time="2025-07-07T00:14:49.341994072Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 00:14:49.342207 kubelet[4328]: I0707 00:14:49.342113 4328 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:14:50.428436 systemd[1]: Created slice kubepods-besteffort-pod38793a12_1a5c_4bc4_bc96_6714d1d18632.slice - libcontainer container kubepods-besteffort-pod38793a12_1a5c_4bc4_bc96_6714d1d18632.slice. 
Jul 7 00:14:50.448064 kubelet[4328]: I0707 00:14:50.448026 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/38793a12-1a5c-4bc4-bc96-6714d1d18632-kube-proxy\") pod \"kube-proxy-4pj24\" (UID: \"38793a12-1a5c-4bc4-bc96-6714d1d18632\") " pod="kube-system/kube-proxy-4pj24" Jul 7 00:14:50.448325 kubelet[4328]: I0707 00:14:50.448070 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m28vr\" (UniqueName: \"kubernetes.io/projected/38793a12-1a5c-4bc4-bc96-6714d1d18632-kube-api-access-m28vr\") pod \"kube-proxy-4pj24\" (UID: \"38793a12-1a5c-4bc4-bc96-6714d1d18632\") " pod="kube-system/kube-proxy-4pj24" Jul 7 00:14:50.448325 kubelet[4328]: I0707 00:14:50.448090 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38793a12-1a5c-4bc4-bc96-6714d1d18632-xtables-lock\") pod \"kube-proxy-4pj24\" (UID: \"38793a12-1a5c-4bc4-bc96-6714d1d18632\") " pod="kube-system/kube-proxy-4pj24" Jul 7 00:14:50.448325 kubelet[4328]: I0707 00:14:50.448114 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38793a12-1a5c-4bc4-bc96-6714d1d18632-lib-modules\") pod \"kube-proxy-4pj24\" (UID: \"38793a12-1a5c-4bc4-bc96-6714d1d18632\") " pod="kube-system/kube-proxy-4pj24" Jul 7 00:14:50.581116 systemd[1]: Created slice kubepods-besteffort-pod7d2e1f2e_c2e0_4527_a147_8ca62f6b78c1.slice - libcontainer container kubepods-besteffort-pod7d2e1f2e_c2e0_4527_a147_8ca62f6b78c1.slice. Jul 7 00:14:50.649568 kubelet[4328]: I0707 00:14:50.649534 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7d2e1f2e-c2e0-4527-a147-8ca62f6b78c1-var-lib-calico\") pod \"tigera-operator-747864d56d-dzf72\" (UID: \"7d2e1f2e-c2e0-4527-a147-8ca62f6b78c1\") " pod="tigera-operator/tigera-operator-747864d56d-dzf72" Jul 7 00:14:50.649568 kubelet[4328]: I0707 00:14:50.649569 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7qmm\" (UniqueName: \"kubernetes.io/projected/7d2e1f2e-c2e0-4527-a147-8ca62f6b78c1-kube-api-access-z7qmm\") pod \"tigera-operator-747864d56d-dzf72\" (UID: \"7d2e1f2e-c2e0-4527-a147-8ca62f6b78c1\") " pod="tigera-operator/tigera-operator-747864d56d-dzf72" Jul 7 00:14:50.749940 containerd[2793]: time="2025-07-07T00:14:50.749879312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4pj24,Uid:38793a12-1a5c-4bc4-bc96-6714d1d18632,Namespace:kube-system,Attempt:0,}" Jul 7 00:14:50.773091 containerd[2793]: time="2025-07-07T00:14:50.773058992Z" level=info msg="connecting to shim c848c7222a0766053698374084c4c330e30e78aec166970f08f05d7ea21cda01" address="unix:///run/containerd/s/cb899a06706591d7d8fbc860f9abe1a1cb14ddf5555b0cb2413f547cb40818fa" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:14:50.803846 systemd[1]: Started cri-containerd-c848c7222a0766053698374084c4c330e30e78aec166970f08f05d7ea21cda01.scope - libcontainer container c848c7222a0766053698374084c4c330e30e78aec166970f08f05d7ea21cda01. 
Jul 7 00:14:50.821225 containerd[2793]: time="2025-07-07T00:14:50.821197952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4pj24,Uid:38793a12-1a5c-4bc4-bc96-6714d1d18632,Namespace:kube-system,Attempt:0,} returns sandbox id \"c848c7222a0766053698374084c4c330e30e78aec166970f08f05d7ea21cda01\"" Jul 7 00:14:50.823547 containerd[2793]: time="2025-07-07T00:14:50.823522472Z" level=info msg="CreateContainer within sandbox \"c848c7222a0766053698374084c4c330e30e78aec166970f08f05d7ea21cda01\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:14:50.828522 containerd[2793]: time="2025-07-07T00:14:50.828490352Z" level=info msg="Container 90e6fd50bf0f5401914aac8e5f9fc64fcd61dc849b84861c19ddc68e6c474fc3: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:14:50.832342 containerd[2793]: time="2025-07-07T00:14:50.832311872Z" level=info msg="CreateContainer within sandbox \"c848c7222a0766053698374084c4c330e30e78aec166970f08f05d7ea21cda01\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"90e6fd50bf0f5401914aac8e5f9fc64fcd61dc849b84861c19ddc68e6c474fc3\"" Jul 7 00:14:50.832761 containerd[2793]: time="2025-07-07T00:14:50.832740232Z" level=info msg="StartContainer for \"90e6fd50bf0f5401914aac8e5f9fc64fcd61dc849b84861c19ddc68e6c474fc3\"" Jul 7 00:14:50.833992 containerd[2793]: time="2025-07-07T00:14:50.833967992Z" level=info msg="connecting to shim 90e6fd50bf0f5401914aac8e5f9fc64fcd61dc849b84861c19ddc68e6c474fc3" address="unix:///run/containerd/s/cb899a06706591d7d8fbc860f9abe1a1cb14ddf5555b0cb2413f547cb40818fa" protocol=ttrpc version=3 Jul 7 00:14:50.862774 systemd[1]: Started cri-containerd-90e6fd50bf0f5401914aac8e5f9fc64fcd61dc849b84861c19ddc68e6c474fc3.scope - libcontainer container 90e6fd50bf0f5401914aac8e5f9fc64fcd61dc849b84861c19ddc68e6c474fc3. Jul 7 00:14:50.883259 containerd[2793]: time="2025-07-07T00:14:50.883227872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-dzf72,Uid:7d2e1f2e-c2e0-4527-a147-8ca62f6b78c1,Namespace:tigera-operator,Attempt:0,}" Jul 7 00:14:50.891318 containerd[2793]: time="2025-07-07T00:14:50.891290912Z" level=info msg="connecting to shim cd471e93ee843663ed67e0d26504f26244999926a9e40d8e143bef94122d7613" address="unix:///run/containerd/s/1501c7f7764aa69891453eca731285356d9c5215954f3c26c03aea993d1ba637" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:14:50.903326 containerd[2793]: time="2025-07-07T00:14:50.903247992Z" level=info msg="StartContainer for \"90e6fd50bf0f5401914aac8e5f9fc64fcd61dc849b84861c19ddc68e6c474fc3\" returns successfully" Jul 7 00:14:50.916783 systemd[1]: Started cri-containerd-cd471e93ee843663ed67e0d26504f26244999926a9e40d8e143bef94122d7613.scope - libcontainer container cd471e93ee843663ed67e0d26504f26244999926a9e40d8e143bef94122d7613. 
Jul 7 00:14:50.942424 containerd[2793]: time="2025-07-07T00:14:50.942390432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-dzf72,Uid:7d2e1f2e-c2e0-4527-a147-8ca62f6b78c1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cd471e93ee843663ed67e0d26504f26244999926a9e40d8e143bef94122d7613\"" Jul 7 00:14:50.943553 containerd[2793]: time="2025-07-07T00:14:50.943493192Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 00:14:51.543037 kubelet[4328]: I0707 00:14:51.542976 4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4pj24" podStartSLOduration=1.5429591120000001 podStartE2EDuration="1.542959112s" podCreationTimestamp="2025-07-07 00:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:14:51.542879952 +0000 UTC m=+7.096139081" watchObservedRunningTime="2025-07-07 00:14:51.542959112 +0000 UTC m=+7.096218241" Jul 7 00:14:51.560906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3086305897.mount: Deactivated successfully. Jul 7 00:14:51.705566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2554026097.mount: Deactivated successfully. Jul 7 00:14:52.499237 containerd[2793]: time="2025-07-07T00:14:52.499198072Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:52.499468 containerd[2793]: time="2025-07-07T00:14:52.499243352Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 7 00:14:52.499919 containerd[2793]: time="2025-07-07T00:14:52.499903072Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:52.501528 containerd[2793]: time="2025-07-07T00:14:52.501505472Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:52.502181 containerd[2793]: time="2025-07-07T00:14:52.502158592Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.55863596s" Jul 7 00:14:52.502209 containerd[2793]: time="2025-07-07T00:14:52.502187432Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 7 00:14:52.504199 containerd[2793]: time="2025-07-07T00:14:52.504179112Z" level=info msg="CreateContainer within sandbox \"cd471e93ee843663ed67e0d26504f26244999926a9e40d8e143bef94122d7613\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 00:14:52.507821 containerd[2793]: time="2025-07-07T00:14:52.507799232Z" level=info msg="Container ff73fd4c57b07432971fa3f3dc3661cfe112e99c2cb0a29b8c5b05f3c440c480: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:14:52.510378 containerd[2793]: time="2025-07-07T00:14:52.510358112Z" level=info msg="CreateContainer within sandbox \"cd471e93ee843663ed67e0d26504f26244999926a9e40d8e143bef94122d7613\" 
for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ff73fd4c57b07432971fa3f3dc3661cfe112e99c2cb0a29b8c5b05f3c440c480\"" Jul 7 00:14:52.510672 containerd[2793]: time="2025-07-07T00:14:52.510650872Z" level=info msg="StartContainer for \"ff73fd4c57b07432971fa3f3dc3661cfe112e99c2cb0a29b8c5b05f3c440c480\"" Jul 7 00:14:52.511343 containerd[2793]: time="2025-07-07T00:14:52.511323032Z" level=info msg="connecting to shim ff73fd4c57b07432971fa3f3dc3661cfe112e99c2cb0a29b8c5b05f3c440c480" address="unix:///run/containerd/s/1501c7f7764aa69891453eca731285356d9c5215954f3c26c03aea993d1ba637" protocol=ttrpc version=3 Jul 7 00:14:52.539771 systemd[1]: Started cri-containerd-ff73fd4c57b07432971fa3f3dc3661cfe112e99c2cb0a29b8c5b05f3c440c480.scope - libcontainer container ff73fd4c57b07432971fa3f3dc3661cfe112e99c2cb0a29b8c5b05f3c440c480. Jul 7 00:14:52.559458 containerd[2793]: time="2025-07-07T00:14:52.559429952Z" level=info msg="StartContainer for \"ff73fd4c57b07432971fa3f3dc3661cfe112e99c2cb0a29b8c5b05f3c440c480\" returns successfully" Jul 7 00:14:53.546711 kubelet[4328]: I0707 00:14:53.546652 4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-dzf72" podStartSLOduration=1.987110352 podStartE2EDuration="3.546636192s" podCreationTimestamp="2025-07-07 00:14:50 +0000 UTC" firstStartedPulling="2025-07-07 00:14:50.943250272 +0000 UTC m=+6.496509401" lastFinishedPulling="2025-07-07 00:14:52.502776112 +0000 UTC m=+8.056035241" observedRunningTime="2025-07-07 00:14:53.546475952 +0000 UTC m=+9.099735081" watchObservedRunningTime="2025-07-07 00:14:53.546636192 +0000 UTC m=+9.099895321" Jul 7 00:14:57.194342 sudo[3070]: pam_unix(sudo:session): session closed for user root Jul 7 00:14:57.239212 sshd[3069]: Connection closed by 147.75.109.163 port 43254 Jul 7 00:14:57.239526 sshd-session[3067]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:57.242572 systemd[1]: sshd@6-147.28.143.210:22-147.75.109.163:43254.service: Deactivated successfully. Jul 7 00:14:57.244804 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:14:57.245029 systemd[1]: session-9.scope: Consumed 7.712s CPU time, 247M memory peak. Jul 7 00:14:57.246091 systemd-logind[2773]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:14:57.247066 systemd-logind[2773]: Removed session 9. Jul 7 00:14:57.831752 update_engine[2785]: I20250707 00:14:57.831693 2785 update_attempter.cc:509] Updating boot flags... Jul 7 00:15:03.539443 systemd[1]: Created slice kubepods-besteffort-podc0557614_a6d5_4955_bfb4_3e24c36e99c1.slice - libcontainer container kubepods-besteffort-podc0557614_a6d5_4955_bfb4_3e24c36e99c1.slice. 
Jul 7 00:15:03.632102 kubelet[4328]: I0707 00:15:03.632065 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0557614-a6d5-4955-bfb4-3e24c36e99c1-tigera-ca-bundle\") pod \"calico-typha-bd9f75dbb-45gnh\" (UID: \"c0557614-a6d5-4955-bfb4-3e24c36e99c1\") " pod="calico-system/calico-typha-bd9f75dbb-45gnh" Jul 7 00:15:03.632102 kubelet[4328]: I0707 00:15:03.632100 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c0557614-a6d5-4955-bfb4-3e24c36e99c1-typha-certs\") pod \"calico-typha-bd9f75dbb-45gnh\" (UID: \"c0557614-a6d5-4955-bfb4-3e24c36e99c1\") " pod="calico-system/calico-typha-bd9f75dbb-45gnh" Jul 7 00:15:03.632458 kubelet[4328]: I0707 00:15:03.632121 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm9qm\" (UniqueName: \"kubernetes.io/projected/c0557614-a6d5-4955-bfb4-3e24c36e99c1-kube-api-access-jm9qm\") pod \"calico-typha-bd9f75dbb-45gnh\" (UID: \"c0557614-a6d5-4955-bfb4-3e24c36e99c1\") " pod="calico-system/calico-typha-bd9f75dbb-45gnh" Jul 7 00:15:03.842060 containerd[2793]: time="2025-07-07T00:15:03.841995074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bd9f75dbb-45gnh,Uid:c0557614-a6d5-4955-bfb4-3e24c36e99c1,Namespace:calico-system,Attempt:0,}" Jul 7 00:15:03.851812 containerd[2793]: time="2025-07-07T00:15:03.851772110Z" level=info msg="connecting to shim 6e0e5dd03c1107d8f54ffb7e1932efa4dbd78948ffd2f04ba55218bde6131950" address="unix:///run/containerd/s/28bf37d9c90599eab1317dedc081373f76a8f7ba7508f47f3211cf6e1d46009e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:03.880845 systemd[1]: Started cri-containerd-6e0e5dd03c1107d8f54ffb7e1932efa4dbd78948ffd2f04ba55218bde6131950.scope - libcontainer container 6e0e5dd03c1107d8f54ffb7e1932efa4dbd78948ffd2f04ba55218bde6131950. Jul 7 00:15:03.905906 containerd[2793]: time="2025-07-07T00:15:03.905870764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bd9f75dbb-45gnh,Uid:c0557614-a6d5-4955-bfb4-3e24c36e99c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"6e0e5dd03c1107d8f54ffb7e1932efa4dbd78948ffd2f04ba55218bde6131950\"" Jul 7 00:15:03.906947 containerd[2793]: time="2025-07-07T00:15:03.906925444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 00:15:03.937702 systemd[1]: Created slice kubepods-besteffort-pod708560db_7096_495b_b594_1ceb013ae5b3.slice - libcontainer container kubepods-besteffort-pod708560db_7096_495b_b594_1ceb013ae5b3.slice. 
Jul 7 00:15:04.034607 kubelet[4328]: I0707 00:15:04.034569 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/708560db-7096-495b-b594-1ceb013ae5b3-node-certs\") pod \"calico-node-f564g\" (UID: \"708560db-7096-495b-b594-1ceb013ae5b3\") " pod="calico-system/calico-node-f564g" Jul 7 00:15:04.034711 kubelet[4328]: I0707 00:15:04.034652 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/708560db-7096-495b-b594-1ceb013ae5b3-var-run-calico\") pod \"calico-node-f564g\" (UID: \"708560db-7096-495b-b594-1ceb013ae5b3\") " pod="calico-system/calico-node-f564g" Jul 7 00:15:04.034779 kubelet[4328]: I0707 00:15:04.034713 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/708560db-7096-495b-b594-1ceb013ae5b3-xtables-lock\") pod \"calico-node-f564g\" (UID: \"708560db-7096-495b-b594-1ceb013ae5b3\") " pod="calico-system/calico-node-f564g" Jul 7 00:15:04.034779 kubelet[4328]: I0707 00:15:04.034749 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdwnk\" (UniqueName: \"kubernetes.io/projected/708560db-7096-495b-b594-1ceb013ae5b3-kube-api-access-tdwnk\") pod \"calico-node-f564g\" (UID: \"708560db-7096-495b-b594-1ceb013ae5b3\") " pod="calico-system/calico-node-f564g" Jul 7 00:15:04.034833 kubelet[4328]: I0707 00:15:04.034781 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/708560db-7096-495b-b594-1ceb013ae5b3-tigera-ca-bundle\") pod \"calico-node-f564g\" (UID: \"708560db-7096-495b-b594-1ceb013ae5b3\") " pod="calico-system/calico-node-f564g" Jul 7 00:15:04.034833 kubelet[4328]: I0707 00:15:04.034798 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/708560db-7096-495b-b594-1ceb013ae5b3-lib-modules\") pod \"calico-node-f564g\" (UID: \"708560db-7096-495b-b594-1ceb013ae5b3\") " pod="calico-system/calico-node-f564g" Jul 7 00:15:04.034833 kubelet[4328]: I0707 00:15:04.034813 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/708560db-7096-495b-b594-1ceb013ae5b3-var-lib-calico\") pod \"calico-node-f564g\" (UID: \"708560db-7096-495b-b594-1ceb013ae5b3\") " pod="calico-system/calico-node-f564g" Jul 7 00:15:04.034984 kubelet[4328]: I0707 00:15:04.034854 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/708560db-7096-495b-b594-1ceb013ae5b3-cni-net-dir\") pod \"calico-node-f564g\" (UID: \"708560db-7096-495b-b594-1ceb013ae5b3\") " pod="calico-system/calico-node-f564g" Jul 7 00:15:04.034984 kubelet[4328]: I0707 00:15:04.034885 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/708560db-7096-495b-b594-1ceb013ae5b3-cni-bin-dir\") pod \"calico-node-f564g\" (UID: \"708560db-7096-495b-b594-1ceb013ae5b3\") " pod="calico-system/calico-node-f564g" Jul 7 00:15:04.034984 kubelet[4328]: I0707 00:15:04.034904 4328 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/708560db-7096-495b-b594-1ceb013ae5b3-flexvol-driver-host\") pod \"calico-node-f564g\" (UID: \"708560db-7096-495b-b594-1ceb013ae5b3\") " pod="calico-system/calico-node-f564g" Jul 7 00:15:04.034984 kubelet[4328]: I0707 00:15:04.034923 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/708560db-7096-495b-b594-1ceb013ae5b3-cni-log-dir\") pod \"calico-node-f564g\" (UID: \"708560db-7096-495b-b594-1ceb013ae5b3\") " pod="calico-system/calico-node-f564g" Jul 7 00:15:04.034984 kubelet[4328]: I0707 00:15:04.034937 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/708560db-7096-495b-b594-1ceb013ae5b3-policysync\") pod \"calico-node-f564g\" (UID: \"708560db-7096-495b-b594-1ceb013ae5b3\") " pod="calico-system/calico-node-f564g" Jul 7 00:15:04.136745 kubelet[4328]: E0707 00:15:04.136724 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.136745 kubelet[4328]: W0707 00:15:04.136742 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.136801 kubelet[4328]: E0707 00:15:04.136761 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.138263 kubelet[4328]: E0707 00:15:04.138244 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.138291 kubelet[4328]: W0707 00:15:04.138261 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.138291 kubelet[4328]: E0707 00:15:04.138275 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.144314 kubelet[4328]: E0707 00:15:04.144298 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.144314 kubelet[4328]: W0707 00:15:04.144310 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.144363 kubelet[4328]: E0707 00:15:04.144324 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:04.238874 kubelet[4328]: E0707 00:15:04.238836 4328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgws5" podUID="a91e23ac-d148-41e5-b88b-f561933b89b5" Jul 7 00:15:04.239539 containerd[2793]: time="2025-07-07T00:15:04.239507736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f564g,Uid:708560db-7096-495b-b594-1ceb013ae5b3,Namespace:calico-system,Attempt:0,}" Jul 7 00:15:04.247898 containerd[2793]: time="2025-07-07T00:15:04.247833093Z" level=info msg="connecting to shim 41485e577c85e047f1b19bc2166a8dd8786df9e239c0d2dba9f72fae76b613da" address="unix:///run/containerd/s/672c8b00d2d04b02604fcacdae4b2b432aeee12e00a7ee844363750544811eab" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:04.279784 systemd[1]: Started cri-containerd-41485e577c85e047f1b19bc2166a8dd8786df9e239c0d2dba9f72fae76b613da.scope - libcontainer container 41485e577c85e047f1b19bc2166a8dd8786df9e239c0d2dba9f72fae76b613da. Jul 7 00:15:04.297008 containerd[2793]: time="2025-07-07T00:15:04.296981191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f564g,Uid:708560db-7096-495b-b594-1ceb013ae5b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"41485e577c85e047f1b19bc2166a8dd8786df9e239c0d2dba9f72fae76b613da\"" Jul 7 00:15:04.332387 kubelet[4328]: E0707 00:15:04.332363 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.332387 kubelet[4328]: W0707 00:15:04.332382 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.332447 kubelet[4328]: E0707 00:15:04.332401 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.332614 kubelet[4328]: E0707 00:15:04.332603 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.332649 kubelet[4328]: W0707 00:15:04.332611 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.332686 kubelet[4328]: E0707 00:15:04.332649 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.332826 kubelet[4328]: E0707 00:15:04.332815 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.332826 kubelet[4328]: W0707 00:15:04.332823 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.332872 kubelet[4328]: E0707 00:15:04.332830 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:04.332990 kubelet[4328]: E0707 00:15:04.332980 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.332990 kubelet[4328]: W0707 00:15:04.332987 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.333031 kubelet[4328]: E0707 00:15:04.332997 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.333175 kubelet[4328]: E0707 00:15:04.333164 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.333175 kubelet[4328]: W0707 00:15:04.333172 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.333215 kubelet[4328]: E0707 00:15:04.333180 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.333338 kubelet[4328]: E0707 00:15:04.333329 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.333338 kubelet[4328]: W0707 00:15:04.333336 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.333374 kubelet[4328]: E0707 00:15:04.333343 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.333501 kubelet[4328]: E0707 00:15:04.333493 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.333520 kubelet[4328]: W0707 00:15:04.333500 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.333520 kubelet[4328]: E0707 00:15:04.333507 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.333705 kubelet[4328]: E0707 00:15:04.333694 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.333724 kubelet[4328]: W0707 00:15:04.333705 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.333724 kubelet[4328]: E0707 00:15:04.333715 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:04.333886 kubelet[4328]: E0707 00:15:04.333876 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.333918 kubelet[4328]: W0707 00:15:04.333886 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.333955 kubelet[4328]: E0707 00:15:04.333945 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.334142 kubelet[4328]: E0707 00:15:04.334133 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.334162 kubelet[4328]: W0707 00:15:04.334142 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.334162 kubelet[4328]: E0707 00:15:04.334150 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.334350 kubelet[4328]: E0707 00:15:04.334343 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.334371 kubelet[4328]: W0707 00:15:04.334350 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.334371 kubelet[4328]: E0707 00:15:04.334357 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.334470 kubelet[4328]: E0707 00:15:04.334462 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.334491 kubelet[4328]: W0707 00:15:04.334469 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.334491 kubelet[4328]: E0707 00:15:04.334476 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.334631 kubelet[4328]: E0707 00:15:04.334623 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.334652 kubelet[4328]: W0707 00:15:04.334631 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.334652 kubelet[4328]: E0707 00:15:04.334639 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:04.334765 kubelet[4328]: E0707 00:15:04.334757 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.334785 kubelet[4328]: W0707 00:15:04.334765 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.334785 kubelet[4328]: E0707 00:15:04.334772 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.334887 kubelet[4328]: E0707 00:15:04.334880 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.334908 kubelet[4328]: W0707 00:15:04.334887 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.334908 kubelet[4328]: E0707 00:15:04.334894 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.335088 kubelet[4328]: E0707 00:15:04.335080 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.335108 kubelet[4328]: W0707 00:15:04.335088 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.335108 kubelet[4328]: E0707 00:15:04.335094 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.335228 kubelet[4328]: E0707 00:15:04.335220 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.335247 kubelet[4328]: W0707 00:15:04.335229 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.335247 kubelet[4328]: E0707 00:15:04.335235 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.335349 kubelet[4328]: E0707 00:15:04.335342 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.335368 kubelet[4328]: W0707 00:15:04.335349 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.335368 kubelet[4328]: E0707 00:15:04.335355 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:04.335543 kubelet[4328]: E0707 00:15:04.335536 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.335564 kubelet[4328]: W0707 00:15:04.335543 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.335564 kubelet[4328]: E0707 00:15:04.335550 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.335712 kubelet[4328]: E0707 00:15:04.335705 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.335712 kubelet[4328]: W0707 00:15:04.335712 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.335751 kubelet[4328]: E0707 00:15:04.335718 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.336960 kubelet[4328]: E0707 00:15:04.336946 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.336983 kubelet[4328]: W0707 00:15:04.336961 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.336983 kubelet[4328]: E0707 00:15:04.336977 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.337019 kubelet[4328]: I0707 00:15:04.336998 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a91e23ac-d148-41e5-b88b-f561933b89b5-kubelet-dir\") pod \"csi-node-driver-lgws5\" (UID: \"a91e23ac-d148-41e5-b88b-f561933b89b5\") " pod="calico-system/csi-node-driver-lgws5" Jul 7 00:15:04.337219 kubelet[4328]: E0707 00:15:04.337208 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.337240 kubelet[4328]: W0707 00:15:04.337219 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.337240 kubelet[4328]: E0707 00:15:04.337228 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:04.337281 kubelet[4328]: I0707 00:15:04.337243 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a91e23ac-d148-41e5-b88b-f561933b89b5-socket-dir\") pod \"csi-node-driver-lgws5\" (UID: \"a91e23ac-d148-41e5-b88b-f561933b89b5\") " pod="calico-system/csi-node-driver-lgws5" Jul 7 00:15:04.337545 kubelet[4328]: E0707 00:15:04.337531 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.337643 kubelet[4328]: W0707 00:15:04.337545 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.337666 kubelet[4328]: E0707 00:15:04.337650 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.337831 kubelet[4328]: E0707 00:15:04.337821 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.337852 kubelet[4328]: W0707 00:15:04.337831 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.337852 kubelet[4328]: E0707 00:15:04.337839 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.338045 kubelet[4328]: E0707 00:15:04.338037 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.338065 kubelet[4328]: W0707 00:15:04.338044 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.338065 kubelet[4328]: E0707 00:15:04.338051 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.338103 kubelet[4328]: I0707 00:15:04.338070 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a91e23ac-d148-41e5-b88b-f561933b89b5-varrun\") pod \"csi-node-driver-lgws5\" (UID: \"a91e23ac-d148-41e5-b88b-f561933b89b5\") " pod="calico-system/csi-node-driver-lgws5" Jul 7 00:15:04.338279 kubelet[4328]: E0707 00:15:04.338269 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.338300 kubelet[4328]: W0707 00:15:04.338279 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.338300 kubelet[4328]: E0707 00:15:04.338288 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:04.338447 kubelet[4328]: E0707 00:15:04.338439 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.338468 kubelet[4328]: W0707 00:15:04.338447 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.338468 kubelet[4328]: E0707 00:15:04.338454 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.338630 kubelet[4328]: E0707 00:15:04.338622 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.338651 kubelet[4328]: W0707 00:15:04.338630 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.338651 kubelet[4328]: E0707 00:15:04.338637 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.338696 kubelet[4328]: I0707 00:15:04.338655 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7ndp\" (UniqueName: \"kubernetes.io/projected/a91e23ac-d148-41e5-b88b-f561933b89b5-kube-api-access-m7ndp\") pod \"csi-node-driver-lgws5\" (UID: \"a91e23ac-d148-41e5-b88b-f561933b89b5\") " pod="calico-system/csi-node-driver-lgws5" Jul 7 00:15:04.338853 kubelet[4328]: E0707 00:15:04.338838 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.338875 kubelet[4328]: W0707 00:15:04.338854 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.338875 kubelet[4328]: E0707 00:15:04.338867 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.338994 kubelet[4328]: E0707 00:15:04.338986 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.339015 kubelet[4328]: W0707 00:15:04.338994 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.339015 kubelet[4328]: E0707 00:15:04.339003 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:04.339148 kubelet[4328]: E0707 00:15:04.339140 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.339169 kubelet[4328]: W0707 00:15:04.339148 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.339169 kubelet[4328]: E0707 00:15:04.339156 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.339277 kubelet[4328]: E0707 00:15:04.339269 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.339296 kubelet[4328]: W0707 00:15:04.339277 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.339296 kubelet[4328]: E0707 00:15:04.339284 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.339417 kubelet[4328]: E0707 00:15:04.339409 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.339438 kubelet[4328]: W0707 00:15:04.339417 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.339438 kubelet[4328]: E0707 00:15:04.339424 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.339474 kubelet[4328]: I0707 00:15:04.339442 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a91e23ac-d148-41e5-b88b-f561933b89b5-registration-dir\") pod \"csi-node-driver-lgws5\" (UID: \"a91e23ac-d148-41e5-b88b-f561933b89b5\") " pod="calico-system/csi-node-driver-lgws5" Jul 7 00:15:04.339604 kubelet[4328]: E0707 00:15:04.339594 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.339625 kubelet[4328]: W0707 00:15:04.339604 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.339625 kubelet[4328]: E0707 00:15:04.339612 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:04.339772 kubelet[4328]: E0707 00:15:04.339764 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.339791 kubelet[4328]: W0707 00:15:04.339771 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.339791 kubelet[4328]: E0707 00:15:04.339778 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.440524 kubelet[4328]: E0707 00:15:04.440458 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.440524 kubelet[4328]: W0707 00:15:04.440472 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.440524 kubelet[4328]: E0707 00:15:04.440484 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.440653 kubelet[4328]: E0707 00:15:04.440643 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.440653 kubelet[4328]: W0707 00:15:04.440651 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.440748 kubelet[4328]: E0707 00:15:04.440667 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.440868 kubelet[4328]: E0707 00:15:04.440854 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.440889 kubelet[4328]: W0707 00:15:04.440868 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.440889 kubelet[4328]: E0707 00:15:04.440880 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.441016 kubelet[4328]: E0707 00:15:04.441007 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.441040 kubelet[4328]: W0707 00:15:04.441015 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.441040 kubelet[4328]: E0707 00:15:04.441023 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:04.441197 kubelet[4328]: E0707 00:15:04.441188 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.441221 kubelet[4328]: W0707 00:15:04.441197 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.441221 kubelet[4328]: E0707 00:15:04.441204 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.441402 kubelet[4328]: E0707 00:15:04.441387 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.441402 kubelet[4328]: W0707 00:15:04.441397 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.441442 kubelet[4328]: E0707 00:15:04.441405 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.441649 kubelet[4328]: E0707 00:15:04.441635 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.441687 kubelet[4328]: W0707 00:15:04.441650 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.441687 kubelet[4328]: E0707 00:15:04.441667 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.441850 kubelet[4328]: E0707 00:15:04.441838 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.441870 kubelet[4328]: W0707 00:15:04.441851 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.441870 kubelet[4328]: E0707 00:15:04.441861 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.441995 kubelet[4328]: E0707 00:15:04.441987 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.442018 kubelet[4328]: W0707 00:15:04.441995 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.442018 kubelet[4328]: E0707 00:15:04.442003 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:04.442176 kubelet[4328]: E0707 00:15:04.442168 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.442199 kubelet[4328]: W0707 00:15:04.442176 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.442199 kubelet[4328]: E0707 00:15:04.442183 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.442368 kubelet[4328]: E0707 00:15:04.442359 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.442393 kubelet[4328]: W0707 00:15:04.442368 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.442393 kubelet[4328]: E0707 00:15:04.442376 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.442525 kubelet[4328]: E0707 00:15:04.442516 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.442547 kubelet[4328]: W0707 00:15:04.442524 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.442547 kubelet[4328]: E0707 00:15:04.442532 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.442806 kubelet[4328]: E0707 00:15:04.442791 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.442833 kubelet[4328]: W0707 00:15:04.442807 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.442854 kubelet[4328]: E0707 00:15:04.442840 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.443089 kubelet[4328]: E0707 00:15:04.443078 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.443111 kubelet[4328]: W0707 00:15:04.443088 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.443111 kubelet[4328]: E0707 00:15:04.443099 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:04.443273 kubelet[4328]: E0707 00:15:04.443265 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.443298 kubelet[4328]: W0707 00:15:04.443273 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.443298 kubelet[4328]: E0707 00:15:04.443281 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.443524 kubelet[4328]: E0707 00:15:04.443510 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.443543 kubelet[4328]: W0707 00:15:04.443525 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.443543 kubelet[4328]: E0707 00:15:04.443536 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.443842 kubelet[4328]: E0707 00:15:04.443833 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.443866 kubelet[4328]: W0707 00:15:04.443843 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.443866 kubelet[4328]: E0707 00:15:04.443852 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.444030 kubelet[4328]: E0707 00:15:04.444022 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.444049 kubelet[4328]: W0707 00:15:04.444030 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.444049 kubelet[4328]: E0707 00:15:04.444037 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.444224 kubelet[4328]: E0707 00:15:04.444216 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.444245 kubelet[4328]: W0707 00:15:04.444224 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.444245 kubelet[4328]: E0707 00:15:04.444231 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:04.444409 kubelet[4328]: E0707 00:15:04.444400 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.444428 kubelet[4328]: W0707 00:15:04.444409 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.444428 kubelet[4328]: E0707 00:15:04.444416 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.444580 kubelet[4328]: E0707 00:15:04.444572 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.444601 kubelet[4328]: W0707 00:15:04.444579 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.444601 kubelet[4328]: E0707 00:15:04.444586 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.444800 kubelet[4328]: E0707 00:15:04.444792 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.444819 kubelet[4328]: W0707 00:15:04.444801 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.444819 kubelet[4328]: E0707 00:15:04.444810 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.444963 kubelet[4328]: E0707 00:15:04.444954 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.444984 kubelet[4328]: W0707 00:15:04.444962 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.444984 kubelet[4328]: E0707 00:15:04.444969 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.445180 kubelet[4328]: E0707 00:15:04.445171 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.445199 kubelet[4328]: W0707 00:15:04.445179 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.445199 kubelet[4328]: E0707 00:15:04.445186 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:04.445478 kubelet[4328]: E0707 00:15:04.445468 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.445499 kubelet[4328]: W0707 00:15:04.445478 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.445499 kubelet[4328]: E0707 00:15:04.445487 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.449979 kubelet[4328]: E0707 00:15:04.449963 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:04.449979 kubelet[4328]: W0707 00:15:04.449976 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:04.450022 kubelet[4328]: E0707 00:15:04.449988 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:04.935530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2867384750.mount: Deactivated successfully. Jul 7 00:15:05.247307 containerd[2793]: time="2025-07-07T00:15:05.247225864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:05.247307 containerd[2793]: time="2025-07-07T00:15:05.247225784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 7 00:15:05.247863 containerd[2793]: time="2025-07-07T00:15:05.247840424Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:05.249294 containerd[2793]: time="2025-07-07T00:15:05.249273343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:05.249903 containerd[2793]: time="2025-07-07T00:15:05.249882223Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.342928299s" Jul 7 00:15:05.249923 containerd[2793]: time="2025-07-07T00:15:05.249909863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 7 00:15:05.250627 containerd[2793]: time="2025-07-07T00:15:05.250608263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 00:15:05.255608 containerd[2793]: time="2025-07-07T00:15:05.255586941Z" level=info msg="CreateContainer within sandbox \"6e0e5dd03c1107d8f54ffb7e1932efa4dbd78948ffd2f04ba55218bde6131950\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 00:15:05.259287 containerd[2793]: time="2025-07-07T00:15:05.259259699Z" level=info msg="Container b8148b3892f40af515df0d090fd06fe336c90d2c43b3e4101393491be21fea22: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:05.262605 containerd[2793]: time="2025-07-07T00:15:05.262580098Z" level=info msg="CreateContainer within sandbox \"6e0e5dd03c1107d8f54ffb7e1932efa4dbd78948ffd2f04ba55218bde6131950\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b8148b3892f40af515df0d090fd06fe336c90d2c43b3e4101393491be21fea22\"" Jul 7 00:15:05.262948 containerd[2793]: time="2025-07-07T00:15:05.262929578Z" level=info msg="StartContainer for \"b8148b3892f40af515df0d090fd06fe336c90d2c43b3e4101393491be21fea22\"" Jul 7 00:15:05.263866 containerd[2793]: time="2025-07-07T00:15:05.263845737Z" level=info msg="connecting to shim b8148b3892f40af515df0d090fd06fe336c90d2c43b3e4101393491be21fea22" address="unix:///run/containerd/s/28bf37d9c90599eab1317dedc081373f76a8f7ba7508f47f3211cf6e1d46009e" protocol=ttrpc version=3 Jul 7 00:15:05.282841 systemd[1]: Started cri-containerd-b8148b3892f40af515df0d090fd06fe336c90d2c43b3e4101393491be21fea22.scope - libcontainer container b8148b3892f40af515df0d090fd06fe336c90d2c43b3e4101393491be21fea22. Jul 7 00:15:05.310964 containerd[2793]: time="2025-07-07T00:15:05.310932638Z" level=info msg="StartContainer for \"b8148b3892f40af515df0d090fd06fe336c90d2c43b3e4101393491be21fea22\" returns successfully" Jul 7 00:15:05.521377 kubelet[4328]: E0707 00:15:05.521266 4328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgws5" podUID="a91e23ac-d148-41e5-b88b-f561933b89b5" Jul 7 00:15:05.568222 kubelet[4328]: I0707 00:15:05.568172 4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bd9f75dbb-45gnh" podStartSLOduration=1.224398354 podStartE2EDuration="2.568156573s" podCreationTimestamp="2025-07-07 00:15:03 +0000 UTC" firstStartedPulling="2025-07-07 00:15:03.906735724 +0000 UTC m=+19.459994853" lastFinishedPulling="2025-07-07 00:15:05.250493983 +0000 UTC m=+20.803753072" observedRunningTime="2025-07-07 00:15:05.568137053 +0000 UTC m=+21.121396142" watchObservedRunningTime="2025-07-07 00:15:05.568156573 +0000 UTC m=+21.121415702" Jul 7 00:15:05.641847 kubelet[4328]: E0707 00:15:05.641821 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.641847 kubelet[4328]: W0707 00:15:05.641836 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.641938 kubelet[4328]: E0707 00:15:05.641851 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:05.642085 kubelet[4328]: E0707 00:15:05.642073 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.642085 kubelet[4328]: W0707 00:15:05.642081 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.642144 kubelet[4328]: E0707 00:15:05.642088 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.642293 kubelet[4328]: E0707 00:15:05.642285 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.642293 kubelet[4328]: W0707 00:15:05.642293 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.642329 kubelet[4328]: E0707 00:15:05.642299 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.642504 kubelet[4328]: E0707 00:15:05.642496 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.642523 kubelet[4328]: W0707 00:15:05.642503 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.642523 kubelet[4328]: E0707 00:15:05.642510 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.642760 kubelet[4328]: E0707 00:15:05.642747 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.642760 kubelet[4328]: W0707 00:15:05.642756 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.642797 kubelet[4328]: E0707 00:15:05.642764 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.642891 kubelet[4328]: E0707 00:15:05.642882 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.642891 kubelet[4328]: W0707 00:15:05.642890 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.642929 kubelet[4328]: E0707 00:15:05.642897 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:05.643018 kubelet[4328]: E0707 00:15:05.643010 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.643018 kubelet[4328]: W0707 00:15:05.643017 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.643053 kubelet[4328]: E0707 00:15:05.643024 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.643145 kubelet[4328]: E0707 00:15:05.643138 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.643164 kubelet[4328]: W0707 00:15:05.643145 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.643164 kubelet[4328]: E0707 00:15:05.643152 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.643282 kubelet[4328]: E0707 00:15:05.643274 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.643301 kubelet[4328]: W0707 00:15:05.643282 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.643301 kubelet[4328]: E0707 00:15:05.643290 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.643455 kubelet[4328]: E0707 00:15:05.643447 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.643476 kubelet[4328]: W0707 00:15:05.643455 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.643476 kubelet[4328]: E0707 00:15:05.643462 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.643614 kubelet[4328]: E0707 00:15:05.643606 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.643614 kubelet[4328]: W0707 00:15:05.643613 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.643649 kubelet[4328]: E0707 00:15:05.643620 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:05.643743 kubelet[4328]: E0707 00:15:05.643735 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.643743 kubelet[4328]: W0707 00:15:05.643742 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.643824 kubelet[4328]: E0707 00:15:05.643749 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.643875 kubelet[4328]: E0707 00:15:05.643866 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.643875 kubelet[4328]: W0707 00:15:05.643874 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.643918 kubelet[4328]: E0707 00:15:05.643881 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.644003 kubelet[4328]: E0707 00:15:05.643996 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.644023 kubelet[4328]: W0707 00:15:05.644003 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.644023 kubelet[4328]: E0707 00:15:05.644010 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.644186 kubelet[4328]: E0707 00:15:05.644178 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.644206 kubelet[4328]: W0707 00:15:05.644185 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.644206 kubelet[4328]: E0707 00:15:05.644192 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.652483 kubelet[4328]: E0707 00:15:05.652465 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.652483 kubelet[4328]: W0707 00:15:05.652480 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.652532 kubelet[4328]: E0707 00:15:05.652496 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:05.652754 kubelet[4328]: E0707 00:15:05.652739 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.652754 kubelet[4328]: W0707 00:15:05.652751 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.652792 kubelet[4328]: E0707 00:15:05.652759 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.653055 kubelet[4328]: E0707 00:15:05.653038 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.653076 kubelet[4328]: W0707 00:15:05.653053 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.653076 kubelet[4328]: E0707 00:15:05.653064 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.653210 kubelet[4328]: E0707 00:15:05.653199 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.653210 kubelet[4328]: W0707 00:15:05.653207 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.653250 kubelet[4328]: E0707 00:15:05.653215 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.653393 kubelet[4328]: E0707 00:15:05.653382 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.653393 kubelet[4328]: W0707 00:15:05.653389 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.653435 kubelet[4328]: E0707 00:15:05.653396 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.653626 kubelet[4328]: E0707 00:15:05.653615 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.653626 kubelet[4328]: W0707 00:15:05.653622 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.653666 kubelet[4328]: E0707 00:15:05.653630 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:05.653821 kubelet[4328]: E0707 00:15:05.653808 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.653842 kubelet[4328]: W0707 00:15:05.653820 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.653842 kubelet[4328]: E0707 00:15:05.653829 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.653990 kubelet[4328]: E0707 00:15:05.653980 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.654009 kubelet[4328]: W0707 00:15:05.653990 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.654009 kubelet[4328]: E0707 00:15:05.653998 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.654136 kubelet[4328]: E0707 00:15:05.654127 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.654155 kubelet[4328]: W0707 00:15:05.654135 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.654155 kubelet[4328]: E0707 00:15:05.654143 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.654359 kubelet[4328]: E0707 00:15:05.654351 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.654379 kubelet[4328]: W0707 00:15:05.654359 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.654379 kubelet[4328]: E0707 00:15:05.654366 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.654538 kubelet[4328]: E0707 00:15:05.654530 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.654558 kubelet[4328]: W0707 00:15:05.654538 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.654558 kubelet[4328]: E0707 00:15:05.654546 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:05.654808 kubelet[4328]: E0707 00:15:05.654794 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.654828 kubelet[4328]: W0707 00:15:05.654809 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.654828 kubelet[4328]: E0707 00:15:05.654820 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.654954 kubelet[4328]: E0707 00:15:05.654946 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.654974 kubelet[4328]: W0707 00:15:05.654954 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.654974 kubelet[4328]: E0707 00:15:05.654961 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.655173 kubelet[4328]: E0707 00:15:05.655165 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.655192 kubelet[4328]: W0707 00:15:05.655173 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.655192 kubelet[4328]: E0707 00:15:05.655180 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.655372 kubelet[4328]: E0707 00:15:05.655364 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.655391 kubelet[4328]: W0707 00:15:05.655371 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.655391 kubelet[4328]: E0707 00:15:05.655381 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.655499 kubelet[4328]: E0707 00:15:05.655492 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.655519 kubelet[4328]: W0707 00:15:05.655499 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.655519 kubelet[4328]: E0707 00:15:05.655506 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:15:05.655685 kubelet[4328]: E0707 00:15:05.655676 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.655704 kubelet[4328]: W0707 00:15:05.655684 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.655704 kubelet[4328]: E0707 00:15:05.655692 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:05.655983 kubelet[4328]: E0707 00:15:05.655974 4328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:15:05.656003 kubelet[4328]: W0707 00:15:05.655984 4328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:15:05.656003 kubelet[4328]: E0707 00:15:05.655992 4328 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:15:06.190344 containerd[2793]: time="2025-07-07T00:15:06.190303844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:06.190412 containerd[2793]: time="2025-07-07T00:15:06.190388724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 7 00:15:06.190986 containerd[2793]: time="2025-07-07T00:15:06.190969003Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:06.192403 containerd[2793]: time="2025-07-07T00:15:06.192379963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:06.192979 containerd[2793]: time="2025-07-07T00:15:06.192953323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 942.31694ms" Jul 7 00:15:06.193001 containerd[2793]: time="2025-07-07T00:15:06.192983003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 7 00:15:06.194762 containerd[2793]: time="2025-07-07T00:15:06.194742082Z" level=info msg="CreateContainer within sandbox \"41485e577c85e047f1b19bc2166a8dd8786df9e239c0d2dba9f72fae76b613da\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 00:15:06.198954 containerd[2793]: time="2025-07-07T00:15:06.198929520Z" level=info msg="Container ab9a0f16ec026fc53a6107fbcd718d15789676e3e88174119c716dc0c4435d7d: CDI devices 
from CRI Config.CDIDevices: []" Jul 7 00:15:06.202709 containerd[2793]: time="2025-07-07T00:15:06.202680759Z" level=info msg="CreateContainer within sandbox \"41485e577c85e047f1b19bc2166a8dd8786df9e239c0d2dba9f72fae76b613da\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ab9a0f16ec026fc53a6107fbcd718d15789676e3e88174119c716dc0c4435d7d\"" Jul 7 00:15:06.203021 containerd[2793]: time="2025-07-07T00:15:06.203002319Z" level=info msg="StartContainer for \"ab9a0f16ec026fc53a6107fbcd718d15789676e3e88174119c716dc0c4435d7d\"" Jul 7 00:15:06.204265 containerd[2793]: time="2025-07-07T00:15:06.204246118Z" level=info msg="connecting to shim ab9a0f16ec026fc53a6107fbcd718d15789676e3e88174119c716dc0c4435d7d" address="unix:///run/containerd/s/672c8b00d2d04b02604fcacdae4b2b432aeee12e00a7ee844363750544811eab" protocol=ttrpc version=3 Jul 7 00:15:06.230851 systemd[1]: Started cri-containerd-ab9a0f16ec026fc53a6107fbcd718d15789676e3e88174119c716dc0c4435d7d.scope - libcontainer container ab9a0f16ec026fc53a6107fbcd718d15789676e3e88174119c716dc0c4435d7d. Jul 7 00:15:06.257749 containerd[2793]: time="2025-07-07T00:15:06.257715378Z" level=info msg="StartContainer for \"ab9a0f16ec026fc53a6107fbcd718d15789676e3e88174119c716dc0c4435d7d\" returns successfully" Jul 7 00:15:06.267578 systemd[1]: cri-containerd-ab9a0f16ec026fc53a6107fbcd718d15789676e3e88174119c716dc0c4435d7d.scope: Deactivated successfully. Jul 7 00:15:06.269211 containerd[2793]: time="2025-07-07T00:15:06.269181733Z" level=info msg="received exit event container_id:\"ab9a0f16ec026fc53a6107fbcd718d15789676e3e88174119c716dc0c4435d7d\" id:\"ab9a0f16ec026fc53a6107fbcd718d15789676e3e88174119c716dc0c4435d7d\" pid:5385 exited_at:{seconds:1751847306 nanos:268892054}" Jul 7 00:15:06.269274 containerd[2793]: time="2025-07-07T00:15:06.269235813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab9a0f16ec026fc53a6107fbcd718d15789676e3e88174119c716dc0c4435d7d\" id:\"ab9a0f16ec026fc53a6107fbcd718d15789676e3e88174119c716dc0c4435d7d\" pid:5385 exited_at:{seconds:1751847306 nanos:268892054}" Jul 7 00:15:06.284230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab9a0f16ec026fc53a6107fbcd718d15789676e3e88174119c716dc0c4435d7d-rootfs.mount: Deactivated successfully. 
Jul 7 00:15:06.563978 kubelet[4328]: I0707 00:15:06.563908 4328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:15:07.520964 kubelet[4328]: E0707 00:15:07.520927 4328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgws5" podUID="a91e23ac-d148-41e5-b88b-f561933b89b5" Jul 7 00:15:07.567177 containerd[2793]: time="2025-07-07T00:15:07.567150730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 00:15:09.401184 containerd[2793]: time="2025-07-07T00:15:09.401102151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:09.401477 containerd[2793]: time="2025-07-07T00:15:09.401163551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 7 00:15:09.401753 containerd[2793]: time="2025-07-07T00:15:09.401728071Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:09.403329 containerd[2793]: time="2025-07-07T00:15:09.403305951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:09.403968 containerd[2793]: time="2025-07-07T00:15:09.403924311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 1.836741701s" Jul 7 00:15:09.403968 containerd[2793]: time="2025-07-07T00:15:09.403954071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 7 00:15:09.405919 containerd[2793]: time="2025-07-07T00:15:09.405851030Z" level=info msg="CreateContainer within sandbox \"41485e577c85e047f1b19bc2166a8dd8786df9e239c0d2dba9f72fae76b613da\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 00:15:09.410200 containerd[2793]: time="2025-07-07T00:15:09.410175469Z" level=info msg="Container de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:09.414576 containerd[2793]: time="2025-07-07T00:15:09.414553107Z" level=info msg="CreateContainer within sandbox \"41485e577c85e047f1b19bc2166a8dd8786df9e239c0d2dba9f72fae76b613da\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c\"" Jul 7 00:15:09.414880 containerd[2793]: time="2025-07-07T00:15:09.414857307Z" level=info msg="StartContainer for \"de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c\"" Jul 7 00:15:09.416173 containerd[2793]: time="2025-07-07T00:15:09.416152587Z" level=info msg="connecting to shim de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c" address="unix:///run/containerd/s/672c8b00d2d04b02604fcacdae4b2b432aeee12e00a7ee844363750544811eab" 
protocol=ttrpc version=3 Jul 7 00:15:09.444839 systemd[1]: Started cri-containerd-de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c.scope - libcontainer container de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c. Jul 7 00:15:09.471768 containerd[2793]: time="2025-07-07T00:15:09.471744209Z" level=info msg="StartContainer for \"de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c\" returns successfully" Jul 7 00:15:09.521808 kubelet[4328]: E0707 00:15:09.521766 4328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgws5" podUID="a91e23ac-d148-41e5-b88b-f561933b89b5" Jul 7 00:15:09.858372 containerd[2793]: time="2025-07-07T00:15:09.858340207Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:15:09.860049 systemd[1]: cri-containerd-de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c.scope: Deactivated successfully. Jul 7 00:15:09.860382 systemd[1]: cri-containerd-de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c.scope: Consumed 989ms CPU time, 199.8M memory peak, 165.8M written to disk. Jul 7 00:15:09.860828 containerd[2793]: time="2025-07-07T00:15:09.860808406Z" level=info msg="received exit event container_id:\"de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c\" id:\"de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c\" pid:5451 exited_at:{seconds:1751847309 nanos:860678726}" Jul 7 00:15:09.860918 containerd[2793]: time="2025-07-07T00:15:09.860898246Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c\" id:\"de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c\" pid:5451 exited_at:{seconds:1751847309 nanos:860678726}" Jul 7 00:15:09.866100 kubelet[4328]: I0707 00:15:09.866083 4328 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 00:15:09.875996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c-rootfs.mount: Deactivated successfully. Jul 7 00:15:09.934710 systemd[1]: Created slice kubepods-besteffort-pod9eafa8d9_bfc1_45ff_aebd_3491ab52a523.slice - libcontainer container kubepods-besteffort-pod9eafa8d9_bfc1_45ff_aebd_3491ab52a523.slice. Jul 7 00:15:09.951025 systemd[1]: Created slice kubepods-burstable-pod52966d71_b395_4a93_8847_a0030bcea280.slice - libcontainer container kubepods-burstable-pod52966d71_b395_4a93_8847_a0030bcea280.slice. Jul 7 00:15:09.976755 systemd[1]: Created slice kubepods-besteffort-podd1677b41_eff4_484f_9cd7_af1065bd5ef1.slice - libcontainer container kubepods-besteffort-podd1677b41_eff4_484f_9cd7_af1065bd5ef1.slice. 
Jul 7 00:15:09.983975 kubelet[4328]: I0707 00:15:09.983943 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eafa8d9-bfc1-45ff-aebd-3491ab52a523-config\") pod \"goldmane-768f4c5c69-h7c7c\" (UID: \"9eafa8d9-bfc1-45ff-aebd-3491ab52a523\") " pod="calico-system/goldmane-768f4c5c69-h7c7c" Jul 7 00:15:09.984139 kubelet[4328]: I0707 00:15:09.983982 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52966d71-b395-4a93-8847-a0030bcea280-config-volume\") pod \"coredns-674b8bbfcf-6h5w4\" (UID: \"52966d71-b395-4a93-8847-a0030bcea280\") " pod="kube-system/coredns-674b8bbfcf-6h5w4" Jul 7 00:15:09.984139 kubelet[4328]: I0707 00:15:09.984002 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d1677b41-eff4-484f-9cd7-af1065bd5ef1-calico-apiserver-certs\") pod \"calico-apiserver-65bf96dcf9-9jlmf\" (UID: \"d1677b41-eff4-484f-9cd7-af1065bd5ef1\") " pod="calico-apiserver/calico-apiserver-65bf96dcf9-9jlmf" Jul 7 00:15:09.984139 kubelet[4328]: I0707 00:15:09.984041 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eafa8d9-bfc1-45ff-aebd-3491ab52a523-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-h7c7c\" (UID: \"9eafa8d9-bfc1-45ff-aebd-3491ab52a523\") " pod="calico-system/goldmane-768f4c5c69-h7c7c" Jul 7 00:15:09.984139 kubelet[4328]: I0707 00:15:09.984110 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9eafa8d9-bfc1-45ff-aebd-3491ab52a523-goldmane-key-pair\") pod \"goldmane-768f4c5c69-h7c7c\" (UID: \"9eafa8d9-bfc1-45ff-aebd-3491ab52a523\") " pod="calico-system/goldmane-768f4c5c69-h7c7c" Jul 7 00:15:09.984139 kubelet[4328]: I0707 00:15:09.984127 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxvsx\" (UniqueName: \"kubernetes.io/projected/9eafa8d9-bfc1-45ff-aebd-3491ab52a523-kube-api-access-qxvsx\") pod \"goldmane-768f4c5c69-h7c7c\" (UID: \"9eafa8d9-bfc1-45ff-aebd-3491ab52a523\") " pod="calico-system/goldmane-768f4c5c69-h7c7c" Jul 7 00:15:09.984327 kubelet[4328]: I0707 00:15:09.984142 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwzsh\" (UniqueName: \"kubernetes.io/projected/52966d71-b395-4a93-8847-a0030bcea280-kube-api-access-cwzsh\") pod \"coredns-674b8bbfcf-6h5w4\" (UID: \"52966d71-b395-4a93-8847-a0030bcea280\") " pod="kube-system/coredns-674b8bbfcf-6h5w4" Jul 7 00:15:09.984327 kubelet[4328]: I0707 00:15:09.984171 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf28l\" (UniqueName: \"kubernetes.io/projected/d1677b41-eff4-484f-9cd7-af1065bd5ef1-kube-api-access-qf28l\") pod \"calico-apiserver-65bf96dcf9-9jlmf\" (UID: \"d1677b41-eff4-484f-9cd7-af1065bd5ef1\") " pod="calico-apiserver/calico-apiserver-65bf96dcf9-9jlmf" Jul 7 00:15:09.987958 systemd[1]: Created slice kubepods-besteffort-pod7805c043_5570_435c_b994_e32b0a9aca69.slice - libcontainer container kubepods-besteffort-pod7805c043_5570_435c_b994_e32b0a9aca69.slice. 
Jul 7 00:15:09.991689 systemd[1]: Created slice kubepods-besteffort-podf971860c_60ba_4abe_8874_2aa97ce3fb8a.slice - libcontainer container kubepods-besteffort-podf971860c_60ba_4abe_8874_2aa97ce3fb8a.slice. Jul 7 00:15:09.995427 systemd[1]: Created slice kubepods-besteffort-pod614b6b3f_ba24_4e96_a0b5_ccdb79126af8.slice - libcontainer container kubepods-besteffort-pod614b6b3f_ba24_4e96_a0b5_ccdb79126af8.slice. Jul 7 00:15:09.998963 systemd[1]: Created slice kubepods-burstable-pod7ef128c5_e42d_45ce_96d1_e1202d22f977.slice - libcontainer container kubepods-burstable-pod7ef128c5_e42d_45ce_96d1_e1202d22f977.slice. Jul 7 00:15:10.084565 kubelet[4328]: I0707 00:15:10.084535 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f971860c-60ba-4abe-8874-2aa97ce3fb8a-whisker-backend-key-pair\") pod \"whisker-787c56f79c-mttg2\" (UID: \"f971860c-60ba-4abe-8874-2aa97ce3fb8a\") " pod="calico-system/whisker-787c56f79c-mttg2" Jul 7 00:15:10.084633 kubelet[4328]: I0707 00:15:10.084579 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7805c043-5570-435c-b994-e32b0a9aca69-tigera-ca-bundle\") pod \"calico-kube-controllers-85d4d4968-2tx7r\" (UID: \"7805c043-5570-435c-b994-e32b0a9aca69\") " pod="calico-system/calico-kube-controllers-85d4d4968-2tx7r" Jul 7 00:15:10.084697 kubelet[4328]: I0707 00:15:10.084654 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/614b6b3f-ba24-4e96-a0b5-ccdb79126af8-calico-apiserver-certs\") pod \"calico-apiserver-65bf96dcf9-86vdd\" (UID: \"614b6b3f-ba24-4e96-a0b5-ccdb79126af8\") " pod="calico-apiserver/calico-apiserver-65bf96dcf9-86vdd" Jul 7 00:15:10.084768 kubelet[4328]: I0707 00:15:10.084756 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpgms\" (UniqueName: \"kubernetes.io/projected/614b6b3f-ba24-4e96-a0b5-ccdb79126af8-kube-api-access-vpgms\") pod \"calico-apiserver-65bf96dcf9-86vdd\" (UID: \"614b6b3f-ba24-4e96-a0b5-ccdb79126af8\") " pod="calico-apiserver/calico-apiserver-65bf96dcf9-86vdd" Jul 7 00:15:10.084818 kubelet[4328]: I0707 00:15:10.084789 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f971860c-60ba-4abe-8874-2aa97ce3fb8a-whisker-ca-bundle\") pod \"whisker-787c56f79c-mttg2\" (UID: \"f971860c-60ba-4abe-8874-2aa97ce3fb8a\") " pod="calico-system/whisker-787c56f79c-mttg2" Jul 7 00:15:10.084888 kubelet[4328]: I0707 00:15:10.084859 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ef128c5-e42d-45ce-96d1-e1202d22f977-config-volume\") pod \"coredns-674b8bbfcf-nm59z\" (UID: \"7ef128c5-e42d-45ce-96d1-e1202d22f977\") " pod="kube-system/coredns-674b8bbfcf-nm59z" Jul 7 00:15:10.085024 kubelet[4328]: I0707 00:15:10.085000 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwq4n\" (UniqueName: \"kubernetes.io/projected/7ef128c5-e42d-45ce-96d1-e1202d22f977-kube-api-access-pwq4n\") pod \"coredns-674b8bbfcf-nm59z\" (UID: \"7ef128c5-e42d-45ce-96d1-e1202d22f977\") " pod="kube-system/coredns-674b8bbfcf-nm59z" Jul 
7 00:15:10.085163 kubelet[4328]: I0707 00:15:10.085153 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-475th\" (UniqueName: \"kubernetes.io/projected/f971860c-60ba-4abe-8874-2aa97ce3fb8a-kube-api-access-475th\") pod \"whisker-787c56f79c-mttg2\" (UID: \"f971860c-60ba-4abe-8874-2aa97ce3fb8a\") " pod="calico-system/whisker-787c56f79c-mttg2" Jul 7 00:15:10.085243 kubelet[4328]: I0707 00:15:10.085210 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blsc4\" (UniqueName: \"kubernetes.io/projected/7805c043-5570-435c-b994-e32b0a9aca69-kube-api-access-blsc4\") pod \"calico-kube-controllers-85d4d4968-2tx7r\" (UID: \"7805c043-5570-435c-b994-e32b0a9aca69\") " pod="calico-system/calico-kube-controllers-85d4d4968-2tx7r" Jul 7 00:15:10.242031 containerd[2793]: time="2025-07-07T00:15:10.241956731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-h7c7c,Uid:9eafa8d9-bfc1-45ff-aebd-3491ab52a523,Namespace:calico-system,Attempt:0,}" Jul 7 00:15:10.253478 containerd[2793]: time="2025-07-07T00:15:10.253453967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6h5w4,Uid:52966d71-b395-4a93-8847-a0030bcea280,Namespace:kube-system,Attempt:0,}" Jul 7 00:15:10.283139 containerd[2793]: time="2025-07-07T00:15:10.283109399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65bf96dcf9-9jlmf,Uid:d1677b41-eff4-484f-9cd7-af1065bd5ef1,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:15:10.290824 containerd[2793]: time="2025-07-07T00:15:10.290792796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d4d4968-2tx7r,Uid:7805c043-5570-435c-b994-e32b0a9aca69,Namespace:calico-system,Attempt:0,}" Jul 7 00:15:10.294295 containerd[2793]: time="2025-07-07T00:15:10.294268595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-787c56f79c-mttg2,Uid:f971860c-60ba-4abe-8874-2aa97ce3fb8a,Namespace:calico-system,Attempt:0,}" Jul 7 00:15:10.297814 containerd[2793]: time="2025-07-07T00:15:10.297785754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65bf96dcf9-86vdd,Uid:614b6b3f-ba24-4e96-a0b5-ccdb79126af8,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:15:10.298332 containerd[2793]: time="2025-07-07T00:15:10.298298834Z" level=error msg="Failed to destroy network for sandbox \"d5cc65925845576b9750b60cb06029d30d6e02788454b4002ddbf23ef0afcbd9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.298725 containerd[2793]: time="2025-07-07T00:15:10.298697514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-h7c7c,Uid:9eafa8d9-bfc1-45ff-aebd-3491ab52a523,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5cc65925845576b9750b60cb06029d30d6e02788454b4002ddbf23ef0afcbd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.298902 kubelet[4328]: E0707 00:15:10.298864 4328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d5cc65925845576b9750b60cb06029d30d6e02788454b4002ddbf23ef0afcbd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.298948 kubelet[4328]: E0707 00:15:10.298930 4328 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5cc65925845576b9750b60cb06029d30d6e02788454b4002ddbf23ef0afcbd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-h7c7c" Jul 7 00:15:10.298979 kubelet[4328]: E0707 00:15:10.298948 4328 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5cc65925845576b9750b60cb06029d30d6e02788454b4002ddbf23ef0afcbd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-h7c7c" Jul 7 00:15:10.299017 kubelet[4328]: E0707 00:15:10.298995 4328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-h7c7c_calico-system(9eafa8d9-bfc1-45ff-aebd-3491ab52a523)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-h7c7c_calico-system(9eafa8d9-bfc1-45ff-aebd-3491ab52a523)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5cc65925845576b9750b60cb06029d30d6e02788454b4002ddbf23ef0afcbd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-h7c7c" podUID="9eafa8d9-bfc1-45ff-aebd-3491ab52a523" Jul 7 00:15:10.299570 containerd[2793]: time="2025-07-07T00:15:10.299546834Z" level=error msg="Failed to destroy network for sandbox \"8e26a3c20b0dad95853e1048ac028f122e6f95a80ad65fcdc546778b947f44db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.300579 containerd[2793]: time="2025-07-07T00:15:10.300549394Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6h5w4,Uid:52966d71-b395-4a93-8847-a0030bcea280,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e26a3c20b0dad95853e1048ac028f122e6f95a80ad65fcdc546778b947f44db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.300717 kubelet[4328]: E0707 00:15:10.300682 4328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e26a3c20b0dad95853e1048ac028f122e6f95a80ad65fcdc546778b947f44db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.300802 kubelet[4328]: E0707 00:15:10.300737 4328 kuberuntime_sandbox.go:70] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e26a3c20b0dad95853e1048ac028f122e6f95a80ad65fcdc546778b947f44db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6h5w4" Jul 7 00:15:10.300802 kubelet[4328]: E0707 00:15:10.300757 4328 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e26a3c20b0dad95853e1048ac028f122e6f95a80ad65fcdc546778b947f44db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6h5w4" Jul 7 00:15:10.300843 kubelet[4328]: E0707 00:15:10.300797 4328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6h5w4_kube-system(52966d71-b395-4a93-8847-a0030bcea280)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6h5w4_kube-system(52966d71-b395-4a93-8847-a0030bcea280)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e26a3c20b0dad95853e1048ac028f122e6f95a80ad65fcdc546778b947f44db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6h5w4" podUID="52966d71-b395-4a93-8847-a0030bcea280" Jul 7 00:15:10.301115 containerd[2793]: time="2025-07-07T00:15:10.301094633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nm59z,Uid:7ef128c5-e42d-45ce-96d1-e1202d22f977,Namespace:kube-system,Attempt:0,}" Jul 7 00:15:10.323602 containerd[2793]: time="2025-07-07T00:15:10.323560427Z" level=error msg="Failed to destroy network for sandbox \"90530eedeb90462096e4b77d55330e20ba235020e080b49593f09b6eb4a3c287\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.324392 containerd[2793]: time="2025-07-07T00:15:10.324302067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65bf96dcf9-9jlmf,Uid:d1677b41-eff4-484f-9cd7-af1065bd5ef1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90530eedeb90462096e4b77d55330e20ba235020e080b49593f09b6eb4a3c287\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.324520 kubelet[4328]: E0707 00:15:10.324478 4328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90530eedeb90462096e4b77d55330e20ba235020e080b49593f09b6eb4a3c287\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.324562 kubelet[4328]: E0707 00:15:10.324540 4328 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"90530eedeb90462096e4b77d55330e20ba235020e080b49593f09b6eb4a3c287\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65bf96dcf9-9jlmf" Jul 7 00:15:10.324600 kubelet[4328]: E0707 00:15:10.324559 4328 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90530eedeb90462096e4b77d55330e20ba235020e080b49593f09b6eb4a3c287\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65bf96dcf9-9jlmf" Jul 7 00:15:10.324632 kubelet[4328]: E0707 00:15:10.324605 4328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65bf96dcf9-9jlmf_calico-apiserver(d1677b41-eff4-484f-9cd7-af1065bd5ef1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65bf96dcf9-9jlmf_calico-apiserver(d1677b41-eff4-484f-9cd7-af1065bd5ef1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90530eedeb90462096e4b77d55330e20ba235020e080b49593f09b6eb4a3c287\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65bf96dcf9-9jlmf" podUID="d1677b41-eff4-484f-9cd7-af1065bd5ef1" Jul 7 00:15:10.331738 containerd[2793]: time="2025-07-07T00:15:10.331701224Z" level=error msg="Failed to destroy network for sandbox \"9f57afe8202cc45f0c731672b39cd43b6da77749d03024087150897b0d6a04d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.332111 containerd[2793]: time="2025-07-07T00:15:10.332078304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d4d4968-2tx7r,Uid:7805c043-5570-435c-b994-e32b0a9aca69,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f57afe8202cc45f0c731672b39cd43b6da77749d03024087150897b0d6a04d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.332242 kubelet[4328]: E0707 00:15:10.332217 4328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f57afe8202cc45f0c731672b39cd43b6da77749d03024087150897b0d6a04d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.332290 kubelet[4328]: E0707 00:15:10.332254 4328 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f57afe8202cc45f0c731672b39cd43b6da77749d03024087150897b0d6a04d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-85d4d4968-2tx7r" Jul 7 00:15:10.332290 kubelet[4328]: E0707 00:15:10.332272 4328 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f57afe8202cc45f0c731672b39cd43b6da77749d03024087150897b0d6a04d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85d4d4968-2tx7r" Jul 7 00:15:10.332342 kubelet[4328]: E0707 00:15:10.332309 4328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85d4d4968-2tx7r_calico-system(7805c043-5570-435c-b994-e32b0a9aca69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85d4d4968-2tx7r_calico-system(7805c043-5570-435c-b994-e32b0a9aca69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f57afe8202cc45f0c731672b39cd43b6da77749d03024087150897b0d6a04d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85d4d4968-2tx7r" podUID="7805c043-5570-435c-b994-e32b0a9aca69" Jul 7 00:15:10.336643 containerd[2793]: time="2025-07-07T00:15:10.336611943Z" level=error msg="Failed to destroy network for sandbox \"87cf0ef8fd2e5bed358dfa66b47a63d5f06fdf3f991b52827cf1f3260127d37d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.337030 containerd[2793]: time="2025-07-07T00:15:10.337003103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-787c56f79c-mttg2,Uid:f971860c-60ba-4abe-8874-2aa97ce3fb8a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"87cf0ef8fd2e5bed358dfa66b47a63d5f06fdf3f991b52827cf1f3260127d37d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.337166 kubelet[4328]: E0707 00:15:10.337140 4328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87cf0ef8fd2e5bed358dfa66b47a63d5f06fdf3f991b52827cf1f3260127d37d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.337202 kubelet[4328]: E0707 00:15:10.337178 4328 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87cf0ef8fd2e5bed358dfa66b47a63d5f06fdf3f991b52827cf1f3260127d37d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-787c56f79c-mttg2" Jul 7 00:15:10.337202 kubelet[4328]: E0707 00:15:10.337196 4328 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"87cf0ef8fd2e5bed358dfa66b47a63d5f06fdf3f991b52827cf1f3260127d37d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-787c56f79c-mttg2" Jul 7 00:15:10.337250 kubelet[4328]: E0707 00:15:10.337232 4328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-787c56f79c-mttg2_calico-system(f971860c-60ba-4abe-8874-2aa97ce3fb8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-787c56f79c-mttg2_calico-system(f971860c-60ba-4abe-8874-2aa97ce3fb8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87cf0ef8fd2e5bed358dfa66b47a63d5f06fdf3f991b52827cf1f3260127d37d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-787c56f79c-mttg2" podUID="f971860c-60ba-4abe-8874-2aa97ce3fb8a" Jul 7 00:15:10.340059 containerd[2793]: time="2025-07-07T00:15:10.340026022Z" level=error msg="Failed to destroy network for sandbox \"b7d80b26cec3e25977615cee271292f85e26c6f8cf89032f6afc737d549e54ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.340394 containerd[2793]: time="2025-07-07T00:15:10.340366942Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65bf96dcf9-86vdd,Uid:614b6b3f-ba24-4e96-a0b5-ccdb79126af8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7d80b26cec3e25977615cee271292f85e26c6f8cf89032f6afc737d549e54ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.340519 kubelet[4328]: E0707 00:15:10.340494 4328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7d80b26cec3e25977615cee271292f85e26c6f8cf89032f6afc737d549e54ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.340543 kubelet[4328]: E0707 00:15:10.340534 4328 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7d80b26cec3e25977615cee271292f85e26c6f8cf89032f6afc737d549e54ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65bf96dcf9-86vdd" Jul 7 00:15:10.340566 kubelet[4328]: E0707 00:15:10.340551 4328 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7d80b26cec3e25977615cee271292f85e26c6f8cf89032f6afc737d549e54ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65bf96dcf9-86vdd" Jul 7 00:15:10.340607 
kubelet[4328]: E0707 00:15:10.340587 4328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65bf96dcf9-86vdd_calico-apiserver(614b6b3f-ba24-4e96-a0b5-ccdb79126af8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65bf96dcf9-86vdd_calico-apiserver(614b6b3f-ba24-4e96-a0b5-ccdb79126af8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7d80b26cec3e25977615cee271292f85e26c6f8cf89032f6afc737d549e54ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65bf96dcf9-86vdd" podUID="614b6b3f-ba24-4e96-a0b5-ccdb79126af8" Jul 7 00:15:10.341152 containerd[2793]: time="2025-07-07T00:15:10.341124262Z" level=error msg="Failed to destroy network for sandbox \"a94652b5bf14d952d7bfc9aabadf3dc8571506c74dffd0904012e3d79171115f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.341473 containerd[2793]: time="2025-07-07T00:15:10.341448981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nm59z,Uid:7ef128c5-e42d-45ce-96d1-e1202d22f977,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a94652b5bf14d952d7bfc9aabadf3dc8571506c74dffd0904012e3d79171115f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.341592 kubelet[4328]: E0707 00:15:10.341571 4328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a94652b5bf14d952d7bfc9aabadf3dc8571506c74dffd0904012e3d79171115f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:10.341618 kubelet[4328]: E0707 00:15:10.341603 4328 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a94652b5bf14d952d7bfc9aabadf3dc8571506c74dffd0904012e3d79171115f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nm59z" Jul 7 00:15:10.341640 kubelet[4328]: E0707 00:15:10.341621 4328 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a94652b5bf14d952d7bfc9aabadf3dc8571506c74dffd0904012e3d79171115f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nm59z" Jul 7 00:15:10.341680 kubelet[4328]: E0707 00:15:10.341658 4328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nm59z_kube-system(7ef128c5-e42d-45ce-96d1-e1202d22f977)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-nm59z_kube-system(7ef128c5-e42d-45ce-96d1-e1202d22f977)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a94652b5bf14d952d7bfc9aabadf3dc8571506c74dffd0904012e3d79171115f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nm59z" podUID="7ef128c5-e42d-45ce-96d1-e1202d22f977" Jul 7 00:15:10.576159 containerd[2793]: time="2025-07-07T00:15:10.576101112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 00:15:11.525184 systemd[1]: Created slice kubepods-besteffort-poda91e23ac_d148_41e5_b88b_f561933b89b5.slice - libcontainer container kubepods-besteffort-poda91e23ac_d148_41e5_b88b_f561933b89b5.slice. Jul 7 00:15:11.526896 containerd[2793]: time="2025-07-07T00:15:11.526852161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgws5,Uid:a91e23ac-d148-41e5-b88b-f561933b89b5,Namespace:calico-system,Attempt:0,}" Jul 7 00:15:11.566916 containerd[2793]: time="2025-07-07T00:15:11.566862069Z" level=error msg="Failed to destroy network for sandbox \"6155569e15adef684a7e57af7eb5e87635ff8eaa245b99622fc7646dbf046fad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:11.567283 containerd[2793]: time="2025-07-07T00:15:11.567252109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgws5,Uid:a91e23ac-d148-41e5-b88b-f561933b89b5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6155569e15adef684a7e57af7eb5e87635ff8eaa245b99622fc7646dbf046fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:11.567443 kubelet[4328]: E0707 00:15:11.567412 4328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6155569e15adef684a7e57af7eb5e87635ff8eaa245b99622fc7646dbf046fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:15:11.567696 kubelet[4328]: E0707 00:15:11.567467 4328 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6155569e15adef684a7e57af7eb5e87635ff8eaa245b99622fc7646dbf046fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgws5" Jul 7 00:15:11.567696 kubelet[4328]: E0707 00:15:11.567486 4328 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6155569e15adef684a7e57af7eb5e87635ff8eaa245b99622fc7646dbf046fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgws5" Jul 7 00:15:11.567696 kubelet[4328]: E0707 00:15:11.567537 4328 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lgws5_calico-system(a91e23ac-d148-41e5-b88b-f561933b89b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lgws5_calico-system(a91e23ac-d148-41e5-b88b-f561933b89b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6155569e15adef684a7e57af7eb5e87635ff8eaa245b99622fc7646dbf046fad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lgws5" podUID="a91e23ac-d148-41e5-b88b-f561933b89b5" Jul 7 00:15:11.568622 systemd[1]: run-netns-cni\x2d8d482c74\x2da246\x2de7dc\x2d7e6b\x2d79524133639e.mount: Deactivated successfully. Jul 7 00:15:13.800213 kubelet[4328]: I0707 00:15:13.800182 4328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:15:14.122925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184662504.mount: Deactivated successfully. Jul 7 00:15:14.143192 containerd[2793]: time="2025-07-07T00:15:14.143147213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 7 00:15:14.143383 containerd[2793]: time="2025-07-07T00:15:14.143148413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:14.143946 containerd[2793]: time="2025-07-07T00:15:14.143919333Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:14.145217 containerd[2793]: time="2025-07-07T00:15:14.145196732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:14.145799 containerd[2793]: time="2025-07-07T00:15:14.145773052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.56964054s" Jul 7 00:15:14.145822 containerd[2793]: time="2025-07-07T00:15:14.145805972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 7 00:15:14.151598 containerd[2793]: time="2025-07-07T00:15:14.151573531Z" level=info msg="CreateContainer within sandbox \"41485e577c85e047f1b19bc2166a8dd8786df9e239c0d2dba9f72fae76b613da\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 00:15:14.156665 containerd[2793]: time="2025-07-07T00:15:14.156635370Z" level=info msg="Container 24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:14.163797 containerd[2793]: time="2025-07-07T00:15:14.163762488Z" level=info msg="CreateContainer within sandbox \"41485e577c85e047f1b19bc2166a8dd8786df9e239c0d2dba9f72fae76b613da\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\"" Jul 7 00:15:14.164139 containerd[2793]: time="2025-07-07T00:15:14.164119688Z" level=info msg="StartContainer for \"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\"" Jul 7 00:15:14.165470 containerd[2793]: time="2025-07-07T00:15:14.165448208Z" level=info msg="connecting to shim 24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be" address="unix:///run/containerd/s/672c8b00d2d04b02604fcacdae4b2b432aeee12e00a7ee844363750544811eab" protocol=ttrpc version=3 Jul 7 00:15:14.195782 systemd[1]: Started cri-containerd-24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be.scope - libcontainer container 24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be. Jul 7 00:15:14.232571 containerd[2793]: time="2025-07-07T00:15:14.232544072Z" level=info msg="StartContainer for \"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" returns successfully" Jul 7 00:15:14.353047 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 00:15:14.353124 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 7 00:15:14.509507 kubelet[4328]: I0707 00:15:14.509398 4328 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-475th\" (UniqueName: \"kubernetes.io/projected/f971860c-60ba-4abe-8874-2aa97ce3fb8a-kube-api-access-475th\") pod \"f971860c-60ba-4abe-8874-2aa97ce3fb8a\" (UID: \"f971860c-60ba-4abe-8874-2aa97ce3fb8a\") " Jul 7 00:15:14.509507 kubelet[4328]: I0707 00:15:14.509438 4328 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f971860c-60ba-4abe-8874-2aa97ce3fb8a-whisker-ca-bundle\") pod \"f971860c-60ba-4abe-8874-2aa97ce3fb8a\" (UID: \"f971860c-60ba-4abe-8874-2aa97ce3fb8a\") " Jul 7 00:15:14.509507 kubelet[4328]: I0707 00:15:14.509467 4328 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f971860c-60ba-4abe-8874-2aa97ce3fb8a-whisker-backend-key-pair\") pod \"f971860c-60ba-4abe-8874-2aa97ce3fb8a\" (UID: \"f971860c-60ba-4abe-8874-2aa97ce3fb8a\") " Jul 7 00:15:14.509863 kubelet[4328]: I0707 00:15:14.509836 4328 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f971860c-60ba-4abe-8874-2aa97ce3fb8a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f971860c-60ba-4abe-8874-2aa97ce3fb8a" (UID: "f971860c-60ba-4abe-8874-2aa97ce3fb8a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:15:14.511678 kubelet[4328]: I0707 00:15:14.511651 4328 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f971860c-60ba-4abe-8874-2aa97ce3fb8a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f971860c-60ba-4abe-8874-2aa97ce3fb8a" (UID: "f971860c-60ba-4abe-8874-2aa97ce3fb8a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 00:15:14.511755 kubelet[4328]: I0707 00:15:14.511724 4328 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f971860c-60ba-4abe-8874-2aa97ce3fb8a-kube-api-access-475th" (OuterVolumeSpecName: "kube-api-access-475th") pod "f971860c-60ba-4abe-8874-2aa97ce3fb8a" (UID: "f971860c-60ba-4abe-8874-2aa97ce3fb8a"). 
InnerVolumeSpecName "kube-api-access-475th". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:15:14.526637 systemd[1]: Removed slice kubepods-besteffort-podf971860c_60ba_4abe_8874_2aa97ce3fb8a.slice - libcontainer container kubepods-besteffort-podf971860c_60ba_4abe_8874_2aa97ce3fb8a.slice. Jul 7 00:15:14.597212 kubelet[4328]: I0707 00:15:14.597164 4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f564g" podStartSLOduration=1.748682528 podStartE2EDuration="11.597152029s" podCreationTimestamp="2025-07-07 00:15:03 +0000 UTC" firstStartedPulling="2025-07-07 00:15:04.297839991 +0000 UTC m=+19.851099120" lastFinishedPulling="2025-07-07 00:15:14.146309492 +0000 UTC m=+29.699568621" observedRunningTime="2025-07-07 00:15:14.596773429 +0000 UTC m=+30.150032558" watchObservedRunningTime="2025-07-07 00:15:14.597152029 +0000 UTC m=+30.150411158" Jul 7 00:15:14.610033 kubelet[4328]: I0707 00:15:14.609999 4328 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f971860c-60ba-4abe-8874-2aa97ce3fb8a-whisker-backend-key-pair\") on node \"ci-4344.1.1-a-1996c8fb49\" DevicePath \"\"" Jul 7 00:15:14.610033 kubelet[4328]: I0707 00:15:14.610022 4328 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-475th\" (UniqueName: \"kubernetes.io/projected/f971860c-60ba-4abe-8874-2aa97ce3fb8a-kube-api-access-475th\") on node \"ci-4344.1.1-a-1996c8fb49\" DevicePath \"\"" Jul 7 00:15:14.610033 kubelet[4328]: I0707 00:15:14.610032 4328 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f971860c-60ba-4abe-8874-2aa97ce3fb8a-whisker-ca-bundle\") on node \"ci-4344.1.1-a-1996c8fb49\" DevicePath \"\"" Jul 7 00:15:14.622381 systemd[1]: Created slice kubepods-besteffort-podc50efb8e_d18a_4cc1_ad93_7724530f6923.slice - libcontainer container kubepods-besteffort-podc50efb8e_d18a_4cc1_ad93_7724530f6923.slice. 
Jul 7 00:15:14.710990 kubelet[4328]: I0707 00:15:14.710953 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c50efb8e-d18a-4cc1-ad93-7724530f6923-whisker-ca-bundle\") pod \"whisker-ffbb5dd59-6wxpv\" (UID: \"c50efb8e-d18a-4cc1-ad93-7724530f6923\") " pod="calico-system/whisker-ffbb5dd59-6wxpv" Jul 7 00:15:14.711091 kubelet[4328]: I0707 00:15:14.711001 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c50efb8e-d18a-4cc1-ad93-7724530f6923-whisker-backend-key-pair\") pod \"whisker-ffbb5dd59-6wxpv\" (UID: \"c50efb8e-d18a-4cc1-ad93-7724530f6923\") " pod="calico-system/whisker-ffbb5dd59-6wxpv" Jul 7 00:15:14.711091 kubelet[4328]: I0707 00:15:14.711024 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tppd\" (UniqueName: \"kubernetes.io/projected/c50efb8e-d18a-4cc1-ad93-7724530f6923-kube-api-access-9tppd\") pod \"whisker-ffbb5dd59-6wxpv\" (UID: \"c50efb8e-d18a-4cc1-ad93-7724530f6923\") " pod="calico-system/whisker-ffbb5dd59-6wxpv" Jul 7 00:15:14.924323 containerd[2793]: time="2025-07-07T00:15:14.924277954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ffbb5dd59-6wxpv,Uid:c50efb8e-d18a-4cc1-ad93-7724530f6923,Namespace:calico-system,Attempt:0,}" Jul 7 00:15:15.028766 systemd-networkd[2698]: calia2004176191: Link UP Jul 7 00:15:15.028991 systemd-networkd[2698]: calia2004176191: Gained carrier Jul 7 00:15:15.037005 containerd[2793]: 2025-07-07 00:15:14.942 [INFO][6052] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:15:15.037005 containerd[2793]: 2025-07-07 00:15:14.957 [INFO][6052] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--1996c8fb49-k8s-whisker--ffbb5dd59--6wxpv-eth0 whisker-ffbb5dd59- calico-system c50efb8e-d18a-4cc1-ad93-7724530f6923 901 0 2025-07-07 00:15:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:ffbb5dd59 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.1.1-a-1996c8fb49 whisker-ffbb5dd59-6wxpv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia2004176191 [] [] }} ContainerID="0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" Namespace="calico-system" Pod="whisker-ffbb5dd59-6wxpv" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-whisker--ffbb5dd59--6wxpv-" Jul 7 00:15:15.037005 containerd[2793]: 2025-07-07 00:15:14.957 [INFO][6052] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" Namespace="calico-system" Pod="whisker-ffbb5dd59-6wxpv" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-whisker--ffbb5dd59--6wxpv-eth0" Jul 7 00:15:15.037005 containerd[2793]: 2025-07-07 00:15:14.994 [INFO][6079] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" HandleID="k8s-pod-network.0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" Workload="ci--4344.1.1--a--1996c8fb49-k8s-whisker--ffbb5dd59--6wxpv-eth0" Jul 7 00:15:15.037189 containerd[2793]: 2025-07-07 00:15:14.994 [INFO][6079] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" HandleID="k8s-pod-network.0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" Workload="ci--4344.1.1--a--1996c8fb49-k8s-whisker--ffbb5dd59--6wxpv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000720780), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-a-1996c8fb49", "pod":"whisker-ffbb5dd59-6wxpv", "timestamp":"2025-07-07 00:15:14.994163018 +0000 UTC"}, Hostname:"ci-4344.1.1-a-1996c8fb49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:15:15.037189 containerd[2793]: 2025-07-07 00:15:14.994 [INFO][6079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:15:15.037189 containerd[2793]: 2025-07-07 00:15:14.994 [INFO][6079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:15:15.037189 containerd[2793]: 2025-07-07 00:15:14.994 [INFO][6079] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-1996c8fb49' Jul 7 00:15:15.037189 containerd[2793]: 2025-07-07 00:15:15.003 [INFO][6079] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:15.037189 containerd[2793]: 2025-07-07 00:15:15.007 [INFO][6079] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:15.037189 containerd[2793]: 2025-07-07 00:15:15.010 [INFO][6079] ipam/ipam.go 511: Trying affinity for 192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:15.037189 containerd[2793]: 2025-07-07 00:15:15.011 [INFO][6079] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:15.037189 containerd[2793]: 2025-07-07 00:15:15.014 [INFO][6079] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:15.037356 containerd[2793]: 2025-07-07 00:15:15.014 [INFO][6079] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:15.037356 containerd[2793]: 2025-07-07 00:15:15.015 [INFO][6079] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa Jul 7 00:15:15.037356 containerd[2793]: 2025-07-07 00:15:15.017 [INFO][6079] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:15.037356 containerd[2793]: 2025-07-07 00:15:15.020 [INFO][6079] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.193/26] block=192.168.13.192/26 handle="k8s-pod-network.0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:15.037356 containerd[2793]: 2025-07-07 00:15:15.020 [INFO][6079] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.193/26] handle="k8s-pod-network.0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:15.037356 containerd[2793]: 2025-07-07 00:15:15.020 [INFO][6079] ipam/ipam_plugin.go 
374: Released host-wide IPAM lock. Jul 7 00:15:15.037356 containerd[2793]: 2025-07-07 00:15:15.020 [INFO][6079] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.193/26] IPv6=[] ContainerID="0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" HandleID="k8s-pod-network.0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" Workload="ci--4344.1.1--a--1996c8fb49-k8s-whisker--ffbb5dd59--6wxpv-eth0" Jul 7 00:15:15.037475 containerd[2793]: 2025-07-07 00:15:15.023 [INFO][6052] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" Namespace="calico-system" Pod="whisker-ffbb5dd59-6wxpv" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-whisker--ffbb5dd59--6wxpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--1996c8fb49-k8s-whisker--ffbb5dd59--6wxpv-eth0", GenerateName:"whisker-ffbb5dd59-", Namespace:"calico-system", SelfLink:"", UID:"c50efb8e-d18a-4cc1-ad93-7724530f6923", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 15, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"ffbb5dd59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-1996c8fb49", ContainerID:"", Pod:"whisker-ffbb5dd59-6wxpv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.13.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia2004176191", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:15:15.037475 containerd[2793]: 2025-07-07 00:15:15.023 [INFO][6052] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.193/32] ContainerID="0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" Namespace="calico-system" Pod="whisker-ffbb5dd59-6wxpv" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-whisker--ffbb5dd59--6wxpv-eth0" Jul 7 00:15:15.037534 containerd[2793]: 2025-07-07 00:15:15.023 [INFO][6052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2004176191 ContainerID="0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" Namespace="calico-system" Pod="whisker-ffbb5dd59-6wxpv" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-whisker--ffbb5dd59--6wxpv-eth0" Jul 7 00:15:15.037534 containerd[2793]: 2025-07-07 00:15:15.029 [INFO][6052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" Namespace="calico-system" Pod="whisker-ffbb5dd59-6wxpv" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-whisker--ffbb5dd59--6wxpv-eth0" Jul 7 00:15:15.037572 containerd[2793]: 2025-07-07 00:15:15.029 [INFO][6052] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" 
Namespace="calico-system" Pod="whisker-ffbb5dd59-6wxpv" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-whisker--ffbb5dd59--6wxpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--1996c8fb49-k8s-whisker--ffbb5dd59--6wxpv-eth0", GenerateName:"whisker-ffbb5dd59-", Namespace:"calico-system", SelfLink:"", UID:"c50efb8e-d18a-4cc1-ad93-7724530f6923", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 15, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"ffbb5dd59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-1996c8fb49", ContainerID:"0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa", Pod:"whisker-ffbb5dd59-6wxpv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.13.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia2004176191", MAC:"3e:ac:c9:4c:ea:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:15:15.037614 containerd[2793]: 2025-07-07 00:15:15.035 [INFO][6052] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" Namespace="calico-system" Pod="whisker-ffbb5dd59-6wxpv" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-whisker--ffbb5dd59--6wxpv-eth0" Jul 7 00:15:15.053524 containerd[2793]: time="2025-07-07T00:15:15.053494726Z" level=info msg="connecting to shim 0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa" address="unix:///run/containerd/s/59be4df7e13885f51872a22ab15771d576d4da38c847b802ed0bdda4f5a36946" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:15.079792 systemd[1]: Started cri-containerd-0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa.scope - libcontainer container 0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa. Jul 7 00:15:15.105649 containerd[2793]: time="2025-07-07T00:15:15.105623474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ffbb5dd59-6wxpv,Uid:c50efb8e-d18a-4cc1-ad93-7724530f6923,Namespace:calico-system,Attempt:0,} returns sandbox id \"0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa\"" Jul 7 00:15:15.106729 containerd[2793]: time="2025-07-07T00:15:15.106702834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 00:15:15.126840 systemd[1]: var-lib-kubelet-pods-f971860c\x2d60ba\x2d4abe\x2d8874\x2d2aa97ce3fb8a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d475th.mount: Deactivated successfully. Jul 7 00:15:15.126920 systemd[1]: var-lib-kubelet-pods-f971860c\x2d60ba\x2d4abe\x2d8874\x2d2aa97ce3fb8a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 7 00:15:15.657877 containerd[2793]: time="2025-07-07T00:15:15.657823876Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"8d24d640d5774970aef22577b15ab5cfef92eb51ff920a76c1ba23442c2bbb82\" pid:6325 exit_status:1 exited_at:{seconds:1751847315 nanos:657552156}" Jul 7 00:15:15.813820 systemd-networkd[2698]: vxlan.calico: Link UP Jul 7 00:15:15.813825 systemd-networkd[2698]: vxlan.calico: Gained carrier Jul 7 00:15:16.330392 containerd[2793]: time="2025-07-07T00:15:16.330348977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 7 00:15:16.330392 containerd[2793]: time="2025-07-07T00:15:16.330344937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:16.331077 containerd[2793]: time="2025-07-07T00:15:16.331050696Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:16.332650 containerd[2793]: time="2025-07-07T00:15:16.332622736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:16.333294 containerd[2793]: time="2025-07-07T00:15:16.333269976Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.226539182s" Jul 7 00:15:16.333316 containerd[2793]: time="2025-07-07T00:15:16.333297176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 7 00:15:16.335200 containerd[2793]: time="2025-07-07T00:15:16.335182456Z" level=info msg="CreateContainer within sandbox \"0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 00:15:16.338188 containerd[2793]: time="2025-07-07T00:15:16.338162975Z" level=info msg="Container ab9c40d3b257e9b2f3c00bff3c644e8832b4f9c2dcd66e6d0270389ffc3687f3: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:16.341458 containerd[2793]: time="2025-07-07T00:15:16.341431174Z" level=info msg="CreateContainer within sandbox \"0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ab9c40d3b257e9b2f3c00bff3c644e8832b4f9c2dcd66e6d0270389ffc3687f3\"" Jul 7 00:15:16.341785 containerd[2793]: time="2025-07-07T00:15:16.341765134Z" level=info msg="StartContainer for \"ab9c40d3b257e9b2f3c00bff3c644e8832b4f9c2dcd66e6d0270389ffc3687f3\"" Jul 7 00:15:16.342740 containerd[2793]: time="2025-07-07T00:15:16.342720214Z" level=info msg="connecting to shim ab9c40d3b257e9b2f3c00bff3c644e8832b4f9c2dcd66e6d0270389ffc3687f3" address="unix:///run/containerd/s/59be4df7e13885f51872a22ab15771d576d4da38c847b802ed0bdda4f5a36946" protocol=ttrpc version=3 Jul 7 00:15:16.370829 systemd[1]: Started 
cri-containerd-ab9c40d3b257e9b2f3c00bff3c644e8832b4f9c2dcd66e6d0270389ffc3687f3.scope - libcontainer container ab9c40d3b257e9b2f3c00bff3c644e8832b4f9c2dcd66e6d0270389ffc3687f3. Jul 7 00:15:16.406175 containerd[2793]: time="2025-07-07T00:15:16.406144801Z" level=info msg="StartContainer for \"ab9c40d3b257e9b2f3c00bff3c644e8832b4f9c2dcd66e6d0270389ffc3687f3\" returns successfully" Jul 7 00:15:16.406859 containerd[2793]: time="2025-07-07T00:15:16.406843801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 00:15:16.519829 systemd-networkd[2698]: calia2004176191: Gained IPv6LL Jul 7 00:15:16.523962 kubelet[4328]: I0707 00:15:16.523937 4328 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f971860c-60ba-4abe-8874-2aa97ce3fb8a" path="/var/lib/kubelet/pods/f971860c-60ba-4abe-8874-2aa97ce3fb8a/volumes" Jul 7 00:15:16.656526 containerd[2793]: time="2025-07-07T00:15:16.656491991Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"807f1e253c0b222485d947e4ddfbc1a3b704ed37fbf85bcb541536c1bcee4bd5\" pid:6755 exit_status:1 exited_at:{seconds:1751847316 nanos:656241551}" Jul 7 00:15:17.543776 systemd-networkd[2698]: vxlan.calico: Gained IPv6LL Jul 7 00:15:17.748970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2482714108.mount: Deactivated successfully. Jul 7 00:15:17.763823 containerd[2793]: time="2025-07-07T00:15:17.763788618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:17.764069 containerd[2793]: time="2025-07-07T00:15:17.763862658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 7 00:15:17.764609 containerd[2793]: time="2025-07-07T00:15:17.764589658Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:17.766185 containerd[2793]: time="2025-07-07T00:15:17.766165778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:17.766854 containerd[2793]: time="2025-07-07T00:15:17.766830418Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.359961737s" Jul 7 00:15:17.766879 containerd[2793]: time="2025-07-07T00:15:17.766861138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 7 00:15:17.768954 containerd[2793]: time="2025-07-07T00:15:17.768926697Z" level=info msg="CreateContainer within sandbox \"0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 00:15:17.772576 containerd[2793]: time="2025-07-07T00:15:17.772549937Z" level=info msg="Container 7a641b5c963eb4a51181171a7f117d8e5cf8bd0015d4799497fc63b153f3e38b: CDI 
devices from CRI Config.CDIDevices: []" Jul 7 00:15:17.776180 containerd[2793]: time="2025-07-07T00:15:17.776154696Z" level=info msg="CreateContainer within sandbox \"0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7a641b5c963eb4a51181171a7f117d8e5cf8bd0015d4799497fc63b153f3e38b\"" Jul 7 00:15:17.776501 containerd[2793]: time="2025-07-07T00:15:17.776483256Z" level=info msg="StartContainer for \"7a641b5c963eb4a51181171a7f117d8e5cf8bd0015d4799497fc63b153f3e38b\"" Jul 7 00:15:17.777493 containerd[2793]: time="2025-07-07T00:15:17.777472736Z" level=info msg="connecting to shim 7a641b5c963eb4a51181171a7f117d8e5cf8bd0015d4799497fc63b153f3e38b" address="unix:///run/containerd/s/59be4df7e13885f51872a22ab15771d576d4da38c847b802ed0bdda4f5a36946" protocol=ttrpc version=3 Jul 7 00:15:17.808840 systemd[1]: Started cri-containerd-7a641b5c963eb4a51181171a7f117d8e5cf8bd0015d4799497fc63b153f3e38b.scope - libcontainer container 7a641b5c963eb4a51181171a7f117d8e5cf8bd0015d4799497fc63b153f3e38b. Jul 7 00:15:17.838019 containerd[2793]: time="2025-07-07T00:15:17.837991244Z" level=info msg="StartContainer for \"7a641b5c963eb4a51181171a7f117d8e5cf8bd0015d4799497fc63b153f3e38b\" returns successfully" Jul 7 00:15:18.600160 kubelet[4328]: I0707 00:15:18.600109 4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-ffbb5dd59-6wxpv" podStartSLOduration=1.939211604 podStartE2EDuration="4.600094868s" podCreationTimestamp="2025-07-07 00:15:14 +0000 UTC" firstStartedPulling="2025-07-07 00:15:15.106509994 +0000 UTC m=+30.659769123" lastFinishedPulling="2025-07-07 00:15:17.767393258 +0000 UTC m=+33.320652387" observedRunningTime="2025-07-07 00:15:18.599829988 +0000 UTC m=+34.153089117" watchObservedRunningTime="2025-07-07 00:15:18.600094868 +0000 UTC m=+34.153353997" Jul 7 00:15:21.521743 containerd[2793]: time="2025-07-07T00:15:21.521674761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6h5w4,Uid:52966d71-b395-4a93-8847-a0030bcea280,Namespace:kube-system,Attempt:0,}" Jul 7 00:15:21.522096 containerd[2793]: time="2025-07-07T00:15:21.521656201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65bf96dcf9-9jlmf,Uid:d1677b41-eff4-484f-9cd7-af1065bd5ef1,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:15:21.603622 systemd-networkd[2698]: calib6343b2059c: Link UP Jul 7 00:15:21.603910 systemd-networkd[2698]: calib6343b2059c: Gained carrier Jul 7 00:15:21.625519 containerd[2793]: 2025-07-07 00:15:21.554 [INFO][6846] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--6h5w4-eth0 coredns-674b8bbfcf- kube-system 52966d71-b395-4a93-8847-a0030bcea280 833 0 2025-07-07 00:14:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.1-a-1996c8fb49 coredns-674b8bbfcf-6h5w4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib6343b2059c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6h5w4" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--6h5w4-" Jul 7 00:15:21.625519 containerd[2793]: 2025-07-07 00:15:21.554 
[INFO][6846] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6h5w4" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--6h5w4-eth0" Jul 7 00:15:21.625519 containerd[2793]: 2025-07-07 00:15:21.574 [INFO][6902] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" HandleID="k8s-pod-network.9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" Workload="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--6h5w4-eth0" Jul 7 00:15:21.625963 containerd[2793]: 2025-07-07 00:15:21.574 [INFO][6902] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" HandleID="k8s-pod-network.9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" Workload="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--6h5w4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e1720), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.1-a-1996c8fb49", "pod":"coredns-674b8bbfcf-6h5w4", "timestamp":"2025-07-07 00:15:21.574625513 +0000 UTC"}, Hostname:"ci-4344.1.1-a-1996c8fb49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:15:21.625963 containerd[2793]: 2025-07-07 00:15:21.574 [INFO][6902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:15:21.625963 containerd[2793]: 2025-07-07 00:15:21.574 [INFO][6902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
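The PullImage entries above (ghcr.io/flatcar/calico/whisker:v3.30.2 "in 1.226539182s", whisker-backend "in 1.359961737s") are the CRI plugin pulling and unpacking images before CreateContainer/StartContainer. A rough Go sketch of the same operation through the containerd 1.x client, timed the same way; the image reference comes from the log, the client usage is an assumption about how one might reproduce it, not kubelet's own code path.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	// Pull and unpack, roughly what the CRI plugin does for PullImage.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/whisker:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%s) in %s\n", img.Name(), img.Target().Digest, time.Since(start))
}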
Jul 7 00:15:21.625963 containerd[2793]: 2025-07-07 00:15:21.574 [INFO][6902] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-1996c8fb49' Jul 7 00:15:21.625963 containerd[2793]: 2025-07-07 00:15:21.583 [INFO][6902] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.625963 containerd[2793]: 2025-07-07 00:15:21.586 [INFO][6902] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.625963 containerd[2793]: 2025-07-07 00:15:21.590 [INFO][6902] ipam/ipam.go 511: Trying affinity for 192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.625963 containerd[2793]: 2025-07-07 00:15:21.591 [INFO][6902] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.625963 containerd[2793]: 2025-07-07 00:15:21.593 [INFO][6902] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.627384 containerd[2793]: 2025-07-07 00:15:21.593 [INFO][6902] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.627384 containerd[2793]: 2025-07-07 00:15:21.594 [INFO][6902] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1 Jul 7 00:15:21.627384 containerd[2793]: 2025-07-07 00:15:21.596 [INFO][6902] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.627384 containerd[2793]: 2025-07-07 00:15:21.599 [INFO][6902] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.194/26] block=192.168.13.192/26 handle="k8s-pod-network.9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.627384 containerd[2793]: 2025-07-07 00:15:21.599 [INFO][6902] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.194/26] handle="k8s-pod-network.9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.627384 containerd[2793]: 2025-07-07 00:15:21.599 [INFO][6902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
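The ipam.go entries above show the node confirming its affinity for block 192.168.13.192/26 and handing out the next free address from it: .193 went to the whisker pod earlier, .194 to coredns here, and .195/.196/.197 follow below. To illustrate the arithmetic of a 64-address block only, a small Go sketch that scans a /26 for the first unallocated address; this is not Calico's allocator, just the block-scan idea it logs.

package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the addresses of a small IPv4 block in order and
// returns the first one that is not in the allocated set.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.13.192/26") // 64 addresses, .192-.255
	allocated := map[netip.Addr]bool{
		// Treated as reserved here: the first address handed out in the log is .193.
		netip.MustParseAddr("192.168.13.192"): true,
		netip.MustParseAddr("192.168.13.193"): true, // whisker-ffbb5dd59-6wxpv
	}
	if a, ok := nextFree(block, allocated); ok {
		fmt.Println("next free address:", a) // 192.168.13.194, as assigned to coredns above
	}
}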
Jul 7 00:15:21.627384 containerd[2793]: 2025-07-07 00:15:21.599 [INFO][6902] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.194/26] IPv6=[] ContainerID="9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" HandleID="k8s-pod-network.9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" Workload="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--6h5w4-eth0" Jul 7 00:15:21.627857 containerd[2793]: 2025-07-07 00:15:21.601 [INFO][6846] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6h5w4" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--6h5w4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--6h5w4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"52966d71-b395-4a93-8847-a0030bcea280", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-1996c8fb49", ContainerID:"", Pod:"coredns-674b8bbfcf-6h5w4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6343b2059c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:15:21.627857 containerd[2793]: 2025-07-07 00:15:21.601 [INFO][6846] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.194/32] ContainerID="9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6h5w4" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--6h5w4-eth0" Jul 7 00:15:21.627857 containerd[2793]: 2025-07-07 00:15:21.601 [INFO][6846] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6343b2059c ContainerID="9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6h5w4" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--6h5w4-eth0" Jul 7 00:15:21.627857 containerd[2793]: 2025-07-07 00:15:21.604 [INFO][6846] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6h5w4" 
WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--6h5w4-eth0" Jul 7 00:15:21.627857 containerd[2793]: 2025-07-07 00:15:21.604 [INFO][6846] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6h5w4" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--6h5w4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--6h5w4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"52966d71-b395-4a93-8847-a0030bcea280", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-1996c8fb49", ContainerID:"9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1", Pod:"coredns-674b8bbfcf-6h5w4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6343b2059c", MAC:"9a:50:8a:ba:b3:d6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:15:21.627857 containerd[2793]: 2025-07-07 00:15:21.612 [INFO][6846] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6h5w4" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--6h5w4-eth0" Jul 7 00:15:21.638236 containerd[2793]: time="2025-07-07T00:15:21.638202264Z" level=info msg="connecting to shim 9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1" address="unix:///run/containerd/s/a40627ced766f7a2c7da865391ccce9b2e266a0edfec55df584d8dc843a3f201" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:21.674844 systemd[1]: Started cri-containerd-9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1.scope - libcontainer container 9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1. 
Jul 7 00:15:21.704474 systemd-networkd[2698]: cali0e13fe52323: Link UP Jul 7 00:15:21.704724 systemd-networkd[2698]: cali0e13fe52323: Gained carrier Jul 7 00:15:21.709847 containerd[2793]: time="2025-07-07T00:15:21.709817013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6h5w4,Uid:52966d71-b395-4a93-8847-a0030bcea280,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1\"" Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.554 [INFO][6852] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--9jlmf-eth0 calico-apiserver-65bf96dcf9- calico-apiserver d1677b41-eff4-484f-9cd7-af1065bd5ef1 834 0 2025-07-07 00:14:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65bf96dcf9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-a-1996c8fb49 calico-apiserver-65bf96dcf9-9jlmf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0e13fe52323 [] [] }} ContainerID="122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" Namespace="calico-apiserver" Pod="calico-apiserver-65bf96dcf9-9jlmf" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--9jlmf-" Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.555 [INFO][6852] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" Namespace="calico-apiserver" Pod="calico-apiserver-65bf96dcf9-9jlmf" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--9jlmf-eth0" Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.575 [INFO][6908] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" HandleID="k8s-pod-network.122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" Workload="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--9jlmf-eth0" Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.575 [INFO][6908] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" HandleID="k8s-pod-network.122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" Workload="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--9jlmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e1dd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-a-1996c8fb49", "pod":"calico-apiserver-65bf96dcf9-9jlmf", "timestamp":"2025-07-07 00:15:21.575415793 +0000 UTC"}, Hostname:"ci-4344.1.1-a-1996c8fb49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.575 [INFO][6908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.599 [INFO][6908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.599 [INFO][6908] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-1996c8fb49' Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.684 [INFO][6908] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.688 [INFO][6908] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.692 [INFO][6908] ipam/ipam.go 511: Trying affinity for 192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.693 [INFO][6908] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.694 [INFO][6908] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.694 [INFO][6908] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.695 [INFO][6908] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.698 [INFO][6908] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.701 [INFO][6908] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.195/26] block=192.168.13.192/26 handle="k8s-pod-network.122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.701 [INFO][6908] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.195/26] handle="k8s-pod-network.122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.701 [INFO][6908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
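The systemd-networkd entries in this stretch report each Calico interface coming up on the host side: "vxlan.calico: Link UP / Gained carrier", then the per-pod veths (calia2004176191, calib6343b2059c, cali0e13fe52323) gaining carrier and IPv6 link-local addresses. A minimal, Linux-only Go sketch for inspecting those links with the vishvananda/netlink package; the interface names are taken from the log, the use of this particular library is an assumption.

package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Host-side interface names taken from the log above.
	for _, name := range []string{"vxlan.calico", "calia2004176191", "calib6343b2059c", "cali0e13fe52323"} {
		link, err := netlink.LinkByName(name)
		if err != nil {
			log.Printf("%s: %v", name, err)
			continue
		}
		attrs := link.Attrs()
		fmt.Printf("%-16s type=%s mac=%s state=%s\n",
			name, link.Type(), attrs.HardwareAddr, attrs.OperState)
	}
}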
Jul 7 00:15:21.711632 containerd[2793]: 2025-07-07 00:15:21.701 [INFO][6908] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.195/26] IPv6=[] ContainerID="122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" HandleID="k8s-pod-network.122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" Workload="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--9jlmf-eth0" Jul 7 00:15:21.712209 containerd[2793]: 2025-07-07 00:15:21.703 [INFO][6852] cni-plugin/k8s.go 418: Populated endpoint ContainerID="122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" Namespace="calico-apiserver" Pod="calico-apiserver-65bf96dcf9-9jlmf" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--9jlmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--9jlmf-eth0", GenerateName:"calico-apiserver-65bf96dcf9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d1677b41-eff4-484f-9cd7-af1065bd5ef1", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65bf96dcf9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-1996c8fb49", ContainerID:"", Pod:"calico-apiserver-65bf96dcf9-9jlmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e13fe52323", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:15:21.712209 containerd[2793]: 2025-07-07 00:15:21.703 [INFO][6852] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.195/32] ContainerID="122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" Namespace="calico-apiserver" Pod="calico-apiserver-65bf96dcf9-9jlmf" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--9jlmf-eth0" Jul 7 00:15:21.712209 containerd[2793]: 2025-07-07 00:15:21.703 [INFO][6852] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e13fe52323 ContainerID="122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" Namespace="calico-apiserver" Pod="calico-apiserver-65bf96dcf9-9jlmf" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--9jlmf-eth0" Jul 7 00:15:21.712209 containerd[2793]: 2025-07-07 00:15:21.704 [INFO][6852] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" Namespace="calico-apiserver" Pod="calico-apiserver-65bf96dcf9-9jlmf" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--9jlmf-eth0" Jul 7 00:15:21.712209 containerd[2793]: 2025-07-07 00:15:21.705 [INFO][6852] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" Namespace="calico-apiserver" Pod="calico-apiserver-65bf96dcf9-9jlmf" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--9jlmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--9jlmf-eth0", GenerateName:"calico-apiserver-65bf96dcf9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d1677b41-eff4-484f-9cd7-af1065bd5ef1", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65bf96dcf9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-1996c8fb49", ContainerID:"122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f", Pod:"calico-apiserver-65bf96dcf9-9jlmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e13fe52323", MAC:"ce:e5:bd:e3:90:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:15:21.712209 containerd[2793]: 2025-07-07 00:15:21.710 [INFO][6852] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" Namespace="calico-apiserver" Pod="calico-apiserver-65bf96dcf9-9jlmf" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--9jlmf-eth0" Jul 7 00:15:21.713152 containerd[2793]: time="2025-07-07T00:15:21.713126933Z" level=info msg="CreateContainer within sandbox \"9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:15:21.718006 containerd[2793]: time="2025-07-07T00:15:21.717980772Z" level=info msg="Container 5e7aab653be7906efd77bf6168e1291a3fbc110dd5044bfc68888ff76d3dbdb6: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:21.720669 containerd[2793]: time="2025-07-07T00:15:21.720637612Z" level=info msg="CreateContainer within sandbox \"9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e7aab653be7906efd77bf6168e1291a3fbc110dd5044bfc68888ff76d3dbdb6\"" Jul 7 00:15:21.721013 containerd[2793]: time="2025-07-07T00:15:21.720993292Z" level=info msg="StartContainer for \"5e7aab653be7906efd77bf6168e1291a3fbc110dd5044bfc68888ff76d3dbdb6\"" Jul 7 00:15:21.721775 containerd[2793]: time="2025-07-07T00:15:21.721752652Z" level=info msg="connecting to shim 122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f" 
address="unix:///run/containerd/s/ad129d14aa4a3abdece03b804a06377ad960ea403b92d12084ac0e52028f96cd" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:21.721803 containerd[2793]: time="2025-07-07T00:15:21.721772452Z" level=info msg="connecting to shim 5e7aab653be7906efd77bf6168e1291a3fbc110dd5044bfc68888ff76d3dbdb6" address="unix:///run/containerd/s/a40627ced766f7a2c7da865391ccce9b2e266a0edfec55df584d8dc843a3f201" protocol=ttrpc version=3 Jul 7 00:15:21.754838 systemd[1]: Started cri-containerd-5e7aab653be7906efd77bf6168e1291a3fbc110dd5044bfc68888ff76d3dbdb6.scope - libcontainer container 5e7aab653be7906efd77bf6168e1291a3fbc110dd5044bfc68888ff76d3dbdb6. Jul 7 00:15:21.757280 systemd[1]: Started cri-containerd-122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f.scope - libcontainer container 122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f. Jul 7 00:15:21.775000 containerd[2793]: time="2025-07-07T00:15:21.774905644Z" level=info msg="StartContainer for \"5e7aab653be7906efd77bf6168e1291a3fbc110dd5044bfc68888ff76d3dbdb6\" returns successfully" Jul 7 00:15:21.782649 containerd[2793]: time="2025-07-07T00:15:21.782617603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65bf96dcf9-9jlmf,Uid:d1677b41-eff4-484f-9cd7-af1065bd5ef1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f\"" Jul 7 00:15:21.783805 containerd[2793]: time="2025-07-07T00:15:21.783786763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:15:22.521566 containerd[2793]: time="2025-07-07T00:15:22.521522820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-h7c7c,Uid:9eafa8d9-bfc1-45ff-aebd-3491ab52a523,Namespace:calico-system,Attempt:0,}" Jul 7 00:15:22.604524 systemd-networkd[2698]: cali772a0e02897: Link UP Jul 7 00:15:22.604831 systemd-networkd[2698]: cali772a0e02897: Gained carrier Jul 7 00:15:22.607531 kubelet[4328]: I0707 00:15:22.607474 4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6h5w4" podStartSLOduration=32.607450208 podStartE2EDuration="32.607450208s" podCreationTimestamp="2025-07-07 00:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:15:22.607347488 +0000 UTC m=+38.160606617" watchObservedRunningTime="2025-07-07 00:15:22.607450208 +0000 UTC m=+38.160709377" Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.555 [INFO][7116] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--1996c8fb49-k8s-goldmane--768f4c5c69--h7c7c-eth0 goldmane-768f4c5c69- calico-system 9eafa8d9-bfc1-45ff-aebd-3491ab52a523 832 0 2025-07-07 00:15:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.1.1-a-1996c8fb49 goldmane-768f4c5c69-h7c7c eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali772a0e02897 [] [] }} ContainerID="db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" Namespace="calico-system" Pod="goldmane-768f4c5c69-h7c7c" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-goldmane--768f4c5c69--h7c7c-" Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.555 [INFO][7116] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" Namespace="calico-system" Pod="goldmane-768f4c5c69-h7c7c" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-goldmane--768f4c5c69--h7c7c-eth0" Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.575 [INFO][7141] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" HandleID="k8s-pod-network.db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" Workload="ci--4344.1.1--a--1996c8fb49-k8s-goldmane--768f4c5c69--h7c7c-eth0" Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.576 [INFO][7141] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" HandleID="k8s-pod-network.db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" Workload="ci--4344.1.1--a--1996c8fb49-k8s-goldmane--768f4c5c69--h7c7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000711cd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-a-1996c8fb49", "pod":"goldmane-768f4c5c69-h7c7c", "timestamp":"2025-07-07 00:15:22.575964533 +0000 UTC"}, Hostname:"ci-4344.1.1-a-1996c8fb49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.576 [INFO][7141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.576 [INFO][7141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.576 [INFO][7141] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-1996c8fb49' Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.584 [INFO][7141] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.587 [INFO][7141] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.591 [INFO][7141] ipam/ipam.go 511: Trying affinity for 192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.592 [INFO][7141] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.594 [INFO][7141] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.594 [INFO][7141] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.595 [INFO][7141] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.597 [INFO][7141] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.601 [INFO][7141] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.196/26] block=192.168.13.192/26 handle="k8s-pod-network.db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.601 [INFO][7141] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.196/26] handle="k8s-pod-network.db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.601 [INFO][7141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
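The kubelet pod_startup_latency_tracker entries above fit a simple relationship: the E2E duration is observed running time minus pod creation time, and the SLO duration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling); when the image was already on the node the window is zero, which is why the coredns entry reports identical SLO and E2E values while the whisker entry does not. A short Go sketch reproducing the whisker numbers from the timestamps in that entry (monotonic "m=+..." suffixes dropped); that kubelet computes it exactly this way is inferred from the values, not taken from its source.

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Values copied from the whisker-ffbb5dd59-6wxpv entry in the log.
	created := parse("2025-07-07 00:15:14 +0000 UTC")
	firstPull := parse("2025-07-07 00:15:15.106509994 +0000 UTC")
	lastPull := parse("2025-07-07 00:15:17.767393258 +0000 UTC")
	running := parse("2025-07-07 00:15:18.600094868 +0000 UTC") // watchObservedRunningTime in the entry

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("podStartE2EDuration:", e2e) // 4.600094868s
	fmt.Println("podStartSLOduration:", slo) // 1.939211604s
}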
Jul 7 00:15:22.612366 containerd[2793]: 2025-07-07 00:15:22.601 [INFO][7141] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.196/26] IPv6=[] ContainerID="db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" HandleID="k8s-pod-network.db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" Workload="ci--4344.1.1--a--1996c8fb49-k8s-goldmane--768f4c5c69--h7c7c-eth0" Jul 7 00:15:22.612920 containerd[2793]: 2025-07-07 00:15:22.603 [INFO][7116] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" Namespace="calico-system" Pod="goldmane-768f4c5c69-h7c7c" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-goldmane--768f4c5c69--h7c7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--1996c8fb49-k8s-goldmane--768f4c5c69--h7c7c-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"9eafa8d9-bfc1-45ff-aebd-3491ab52a523", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 15, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-1996c8fb49", ContainerID:"", Pod:"goldmane-768f4c5c69-h7c7c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.13.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali772a0e02897", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:15:22.612920 containerd[2793]: 2025-07-07 00:15:22.603 [INFO][7116] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.196/32] ContainerID="db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" Namespace="calico-system" Pod="goldmane-768f4c5c69-h7c7c" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-goldmane--768f4c5c69--h7c7c-eth0" Jul 7 00:15:22.612920 containerd[2793]: 2025-07-07 00:15:22.603 [INFO][7116] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali772a0e02897 ContainerID="db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" Namespace="calico-system" Pod="goldmane-768f4c5c69-h7c7c" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-goldmane--768f4c5c69--h7c7c-eth0" Jul 7 00:15:22.612920 containerd[2793]: 2025-07-07 00:15:22.605 [INFO][7116] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" Namespace="calico-system" Pod="goldmane-768f4c5c69-h7c7c" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-goldmane--768f4c5c69--h7c7c-eth0" Jul 7 00:15:22.612920 containerd[2793]: 2025-07-07 00:15:22.605 [INFO][7116] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" Namespace="calico-system" 
Pod="goldmane-768f4c5c69-h7c7c" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-goldmane--768f4c5c69--h7c7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--1996c8fb49-k8s-goldmane--768f4c5c69--h7c7c-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"9eafa8d9-bfc1-45ff-aebd-3491ab52a523", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 15, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-1996c8fb49", ContainerID:"db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf", Pod:"goldmane-768f4c5c69-h7c7c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.13.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali772a0e02897", MAC:"4a:2a:75:8f:ed:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:15:22.612920 containerd[2793]: 2025-07-07 00:15:22.610 [INFO][7116] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" Namespace="calico-system" Pod="goldmane-768f4c5c69-h7c7c" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-goldmane--768f4c5c69--h7c7c-eth0" Jul 7 00:15:22.622886 containerd[2793]: time="2025-07-07T00:15:22.622852326Z" level=info msg="connecting to shim db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf" address="unix:///run/containerd/s/0c051b9501369ab300c606a2cbecea6cc94df1ea7c97cf0b11190dc44035a804" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:22.646836 systemd[1]: Started cri-containerd-db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf.scope - libcontainer container db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf. 
Jul 7 00:15:22.673235 containerd[2793]: time="2025-07-07T00:15:22.673204039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-h7c7c,Uid:9eafa8d9-bfc1-45ff-aebd-3491ab52a523,Namespace:calico-system,Attempt:0,} returns sandbox id \"db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf\"" Jul 7 00:15:22.728741 systemd-networkd[2698]: calib6343b2059c: Gained IPv6LL Jul 7 00:15:23.239756 systemd-networkd[2698]: cali0e13fe52323: Gained IPv6LL Jul 7 00:15:23.252993 containerd[2793]: time="2025-07-07T00:15:23.252955762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:23.253051 containerd[2793]: time="2025-07-07T00:15:23.252987642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 7 00:15:23.253595 containerd[2793]: time="2025-07-07T00:15:23.253575202Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:23.255196 containerd[2793]: time="2025-07-07T00:15:23.255165082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:23.255813 containerd[2793]: time="2025-07-07T00:15:23.255784002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.471968079s" Jul 7 00:15:23.255838 containerd[2793]: time="2025-07-07T00:15:23.255817522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 00:15:23.256507 containerd[2793]: time="2025-07-07T00:15:23.256486522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 00:15:23.257768 containerd[2793]: time="2025-07-07T00:15:23.257746802Z" level=info msg="CreateContainer within sandbox \"122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:15:23.261156 containerd[2793]: time="2025-07-07T00:15:23.261125361Z" level=info msg="Container d8a034471d3697b8494d987bdd539e2d8ba1a18f56e5dcc07296b88c1ea47335: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:23.264476 containerd[2793]: time="2025-07-07T00:15:23.264450601Z" level=info msg="CreateContainer within sandbox \"122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d8a034471d3697b8494d987bdd539e2d8ba1a18f56e5dcc07296b88c1ea47335\"" Jul 7 00:15:23.264790 containerd[2793]: time="2025-07-07T00:15:23.264768481Z" level=info msg="StartContainer for \"d8a034471d3697b8494d987bdd539e2d8ba1a18f56e5dcc07296b88c1ea47335\"" Jul 7 00:15:23.265688 containerd[2793]: time="2025-07-07T00:15:23.265668121Z" level=info msg="connecting to shim d8a034471d3697b8494d987bdd539e2d8ba1a18f56e5dcc07296b88c1ea47335" 
address="unix:///run/containerd/s/ad129d14aa4a3abdece03b804a06377ad960ea403b92d12084ac0e52028f96cd" protocol=ttrpc version=3 Jul 7 00:15:23.290780 systemd[1]: Started cri-containerd-d8a034471d3697b8494d987bdd539e2d8ba1a18f56e5dcc07296b88c1ea47335.scope - libcontainer container d8a034471d3697b8494d987bdd539e2d8ba1a18f56e5dcc07296b88c1ea47335. Jul 7 00:15:23.319639 containerd[2793]: time="2025-07-07T00:15:23.319607714Z" level=info msg="StartContainer for \"d8a034471d3697b8494d987bdd539e2d8ba1a18f56e5dcc07296b88c1ea47335\" returns successfully" Jul 7 00:15:23.522343 containerd[2793]: time="2025-07-07T00:15:23.522243368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65bf96dcf9-86vdd,Uid:614b6b3f-ba24-4e96-a0b5-ccdb79126af8,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:15:23.522432 containerd[2793]: time="2025-07-07T00:15:23.522333768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d4d4968-2tx7r,Uid:7805c043-5570-435c-b994-e32b0a9aca69,Namespace:calico-system,Attempt:0,}" Jul 7 00:15:23.610903 systemd-networkd[2698]: cali6a96a59e89c: Link UP Jul 7 00:15:23.611159 systemd-networkd[2698]: cali6a96a59e89c: Gained carrier Jul 7 00:15:23.624423 kubelet[4328]: I0707 00:15:23.624373 4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65bf96dcf9-9jlmf" podStartSLOduration=24.151590836 podStartE2EDuration="25.624357715s" podCreationTimestamp="2025-07-07 00:14:58 +0000 UTC" firstStartedPulling="2025-07-07 00:15:21.783612043 +0000 UTC m=+37.336871172" lastFinishedPulling="2025-07-07 00:15:23.256378922 +0000 UTC m=+38.809638051" observedRunningTime="2025-07-07 00:15:23.623924395 +0000 UTC m=+39.177183524" watchObservedRunningTime="2025-07-07 00:15:23.624357715 +0000 UTC m=+39.177616844" Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.555 [INFO][7287] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--1996c8fb49-k8s-calico--kube--controllers--85d4d4968--2tx7r-eth0 calico-kube-controllers-85d4d4968- calico-system 7805c043-5570-435c-b994-e32b0a9aca69 835 0 2025-07-07 00:15:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85d4d4968 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.1.1-a-1996c8fb49 calico-kube-controllers-85d4d4968-2tx7r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6a96a59e89c [] [] }} ContainerID="843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" Namespace="calico-system" Pod="calico-kube-controllers-85d4d4968-2tx7r" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--kube--controllers--85d4d4968--2tx7r-" Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.556 [INFO][7287] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" Namespace="calico-system" Pod="calico-kube-controllers-85d4d4968-2tx7r" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--kube--controllers--85d4d4968--2tx7r-eth0" Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.575 [INFO][7336] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" 
HandleID="k8s-pod-network.843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" Workload="ci--4344.1.1--a--1996c8fb49-k8s-calico--kube--controllers--85d4d4968--2tx7r-eth0" Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.575 [INFO][7336] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" HandleID="k8s-pod-network.843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" Workload="ci--4344.1.1--a--1996c8fb49-k8s-calico--kube--controllers--85d4d4968--2tx7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b6e00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-a-1996c8fb49", "pod":"calico-kube-controllers-85d4d4968-2tx7r", "timestamp":"2025-07-07 00:15:23.575785561 +0000 UTC"}, Hostname:"ci-4344.1.1-a-1996c8fb49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.575 [INFO][7336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.576 [INFO][7336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.576 [INFO][7336] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-1996c8fb49' Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.584 [INFO][7336] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.587 [INFO][7336] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.592 [INFO][7336] ipam/ipam.go 511: Trying affinity for 192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.593 [INFO][7336] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.595 [INFO][7336] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.595 [INFO][7336] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.596 [INFO][7336] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.598 [INFO][7336] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.602 [INFO][7336] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.197/26] block=192.168.13.192/26 handle="k8s-pod-network.843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.625437 
containerd[2793]: 2025-07-07 00:15:23.602 [INFO][7336] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.197/26] handle="k8s-pod-network.843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.602 [INFO][7336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:15:23.625437 containerd[2793]: 2025-07-07 00:15:23.602 [INFO][7336] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.197/26] IPv6=[] ContainerID="843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" HandleID="k8s-pod-network.843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" Workload="ci--4344.1.1--a--1996c8fb49-k8s-calico--kube--controllers--85d4d4968--2tx7r-eth0" Jul 7 00:15:23.626016 containerd[2793]: 2025-07-07 00:15:23.608 [INFO][7287] cni-plugin/k8s.go 418: Populated endpoint ContainerID="843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" Namespace="calico-system" Pod="calico-kube-controllers-85d4d4968-2tx7r" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--kube--controllers--85d4d4968--2tx7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--1996c8fb49-k8s-calico--kube--controllers--85d4d4968--2tx7r-eth0", GenerateName:"calico-kube-controllers-85d4d4968-", Namespace:"calico-system", SelfLink:"", UID:"7805c043-5570-435c-b994-e32b0a9aca69", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 15, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85d4d4968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-1996c8fb49", ContainerID:"", Pod:"calico-kube-controllers-85d4d4968-2tx7r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a96a59e89c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:15:23.626016 containerd[2793]: 2025-07-07 00:15:23.608 [INFO][7287] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.197/32] ContainerID="843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" Namespace="calico-system" Pod="calico-kube-controllers-85d4d4968-2tx7r" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--kube--controllers--85d4d4968--2tx7r-eth0" Jul 7 00:15:23.626016 containerd[2793]: 2025-07-07 00:15:23.608 [INFO][7287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a96a59e89c ContainerID="843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" Namespace="calico-system" Pod="calico-kube-controllers-85d4d4968-2tx7r" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--kube--controllers--85d4d4968--2tx7r-eth0" Jul 7 00:15:23.626016 
containerd[2793]: 2025-07-07 00:15:23.611 [INFO][7287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" Namespace="calico-system" Pod="calico-kube-controllers-85d4d4968-2tx7r" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--kube--controllers--85d4d4968--2tx7r-eth0" Jul 7 00:15:23.626016 containerd[2793]: 2025-07-07 00:15:23.611 [INFO][7287] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" Namespace="calico-system" Pod="calico-kube-controllers-85d4d4968-2tx7r" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--kube--controllers--85d4d4968--2tx7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--1996c8fb49-k8s-calico--kube--controllers--85d4d4968--2tx7r-eth0", GenerateName:"calico-kube-controllers-85d4d4968-", Namespace:"calico-system", SelfLink:"", UID:"7805c043-5570-435c-b994-e32b0a9aca69", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 15, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85d4d4968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-1996c8fb49", ContainerID:"843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b", Pod:"calico-kube-controllers-85d4d4968-2tx7r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a96a59e89c", MAC:"e6:64:fe:e5:f9:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:15:23.626016 containerd[2793]: 2025-07-07 00:15:23.624 [INFO][7287] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" Namespace="calico-system" Pod="calico-kube-controllers-85d4d4968-2tx7r" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--kube--controllers--85d4d4968--2tx7r-eth0" Jul 7 00:15:23.637755 containerd[2793]: time="2025-07-07T00:15:23.637721993Z" level=info msg="connecting to shim 843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b" address="unix:///run/containerd/s/b7fa7a8f588f647e2a30f89ce7b05027da9b0a21bd4f39dcd802975a81862c6d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:23.670774 systemd[1]: Started cri-containerd-843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b.scope - libcontainer container 843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b. 
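The kubelet pod_startup_latency_tracker entry earlier in this log (for calico-apiserver-65bf96dcf9-9jlmf) reports podStartE2EDuration=25.624357715s and podStartSLOduration=24.151590836s; the difference is exactly the image-pull window between firstStartedPulling and lastFinishedPulling. A minimal sketch that re-derives the SLO figure using only the values copied from that log line (error handling omitted):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied verbatim from the pod_startup_latency_tracker entry above.
	const layout = "2006-01-02 15:04:05.000000000 -0700 MST"
	first, _ := time.Parse(layout, "2025-07-07 00:15:21.783612043 +0000 UTC")
	last, _ := time.Parse(layout, "2025-07-07 00:15:23.256378922 +0000 UTC")
	e2e, _ := time.ParseDuration("25.624357715s")

	pull := last.Sub(first) // time spent pulling the image
	fmt.Println(pull)       // 1.472766879s
	fmt.Println(e2e - pull) // 24.151590836s == podStartSLOduration
}
```

In other words, the SLO-relevant duration excludes image pulling, which is why pods whose pull timestamps are zero (seen further down in this log) report identical SLO and E2E durations.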
Jul 7 00:15:23.696404 containerd[2793]: time="2025-07-07T00:15:23.696378066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d4d4968-2tx7r,Uid:7805c043-5570-435c-b994-e32b0a9aca69,Namespace:calico-system,Attempt:0,} returns sandbox id \"843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b\"" Jul 7 00:15:23.707234 systemd-networkd[2698]: cali3f6ae9b65b1: Link UP Jul 7 00:15:23.707674 systemd-networkd[2698]: cali3f6ae9b65b1: Gained carrier Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.555 [INFO][7285] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--86vdd-eth0 calico-apiserver-65bf96dcf9- calico-apiserver 614b6b3f-ba24-4e96-a0b5-ccdb79126af8 836 0 2025-07-07 00:14:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65bf96dcf9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-a-1996c8fb49 calico-apiserver-65bf96dcf9-86vdd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3f6ae9b65b1 [] [] }} ContainerID="8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" Namespace="calico-apiserver" Pod="calico-apiserver-65bf96dcf9-86vdd" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--86vdd-" Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.555 [INFO][7285] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" Namespace="calico-apiserver" Pod="calico-apiserver-65bf96dcf9-86vdd" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--86vdd-eth0" Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.575 [INFO][7332] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" HandleID="k8s-pod-network.8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" Workload="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--86vdd-eth0" Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.575 [INFO][7332] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" HandleID="k8s-pod-network.8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" Workload="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--86vdd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000812960), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-a-1996c8fb49", "pod":"calico-apiserver-65bf96dcf9-86vdd", "timestamp":"2025-07-07 00:15:23.575787401 +0000 UTC"}, Hostname:"ci-4344.1.1-a-1996c8fb49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.575 [INFO][7332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.602 [INFO][7332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
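The two CNI ADDs above ([7336] for the kube-controllers pod and [7332] for the apiserver pod) contend for the same host-wide IPAM lock: [7332] logs "About to acquire" at 23.575 but only reports "Acquired" at 23.602, immediately after [7336] releases it, so assignments on the node are serialized. A tiny sketch of the wait computation, using the millisecond-precision timestamps printed in the trace:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.000"
	// [7332]: "About to acquire host-wide IPAM lock." vs. "Acquired host-wide IPAM lock."
	want, _ := time.Parse(layout, "2025-07-07 00:15:23.575")
	got, _ := time.Parse(layout, "2025-07-07 00:15:23.602")
	fmt.Println(got.Sub(want)) // 27ms spent waiting behind the other CNI ADD
}
```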
Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.602 [INFO][7332] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-1996c8fb49' Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.686 [INFO][7332] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.690 [INFO][7332] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.694 [INFO][7332] ipam/ipam.go 511: Trying affinity for 192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.695 [INFO][7332] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.697 [INFO][7332] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.697 [INFO][7332] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.698 [INFO][7332] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.700 [INFO][7332] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.704 [INFO][7332] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.198/26] block=192.168.13.192/26 handle="k8s-pod-network.8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.704 [INFO][7332] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.198/26] handle="k8s-pod-network.8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.704 [INFO][7332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
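Every IPAM trace in this log resolves to the same node-affine block 192.168.13.192/26 and hands out consecutive addresses (.197 above, .198 here, .199 and .200 further down), consistent with picking the lowest unallocated address in the block. A rough standard-library sketch of that selection, not Calico's actual allocator, seeded with a hypothetical "already used" set since the log does not show which lower addresses were taken:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the lowest address in block that is not in used.
// Illustrative only: Calico's IPAM also tracks handles, reservations and
// per-block bitmaps that this sketch ignores.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.13.192/26")
	used := map[netip.Addr]bool{}
	for i := 0; i < 5; i++ { // hypothetically mark .192-.196 as already handed out
		a, _ := nextFree(block, used)
		used[a] = true
	}
	a, _ := nextFree(block, used)
	fmt.Println(a) // 192.168.13.197, matching the claim in the trace above
}
```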
Jul 7 00:15:23.719031 containerd[2793]: 2025-07-07 00:15:23.704 [INFO][7332] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.198/26] IPv6=[] ContainerID="8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" HandleID="k8s-pod-network.8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" Workload="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--86vdd-eth0" Jul 7 00:15:23.719541 containerd[2793]: 2025-07-07 00:15:23.705 [INFO][7285] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" Namespace="calico-apiserver" Pod="calico-apiserver-65bf96dcf9-86vdd" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--86vdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--86vdd-eth0", GenerateName:"calico-apiserver-65bf96dcf9-", Namespace:"calico-apiserver", SelfLink:"", UID:"614b6b3f-ba24-4e96-a0b5-ccdb79126af8", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65bf96dcf9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-1996c8fb49", ContainerID:"", Pod:"calico-apiserver-65bf96dcf9-86vdd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f6ae9b65b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:15:23.719541 containerd[2793]: 2025-07-07 00:15:23.706 [INFO][7285] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.198/32] ContainerID="8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" Namespace="calico-apiserver" Pod="calico-apiserver-65bf96dcf9-86vdd" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--86vdd-eth0" Jul 7 00:15:23.719541 containerd[2793]: 2025-07-07 00:15:23.706 [INFO][7285] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f6ae9b65b1 ContainerID="8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" Namespace="calico-apiserver" Pod="calico-apiserver-65bf96dcf9-86vdd" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--86vdd-eth0" Jul 7 00:15:23.719541 containerd[2793]: 2025-07-07 00:15:23.707 [INFO][7285] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" Namespace="calico-apiserver" Pod="calico-apiserver-65bf96dcf9-86vdd" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--86vdd-eth0" Jul 7 00:15:23.719541 containerd[2793]: 2025-07-07 00:15:23.707 [INFO][7285] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" Namespace="calico-apiserver" Pod="calico-apiserver-65bf96dcf9-86vdd" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--86vdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--86vdd-eth0", GenerateName:"calico-apiserver-65bf96dcf9-", Namespace:"calico-apiserver", SelfLink:"", UID:"614b6b3f-ba24-4e96-a0b5-ccdb79126af8", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65bf96dcf9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-1996c8fb49", ContainerID:"8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf", Pod:"calico-apiserver-65bf96dcf9-86vdd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f6ae9b65b1", MAC:"32:ff:6d:76:8b:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:15:23.719541 containerd[2793]: 2025-07-07 00:15:23.713 [INFO][7285] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" Namespace="calico-apiserver" Pod="calico-apiserver-65bf96dcf9-86vdd" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-calico--apiserver--65bf96dcf9--86vdd-eth0" Jul 7 00:15:23.728904 containerd[2793]: time="2025-07-07T00:15:23.728870382Z" level=info msg="connecting to shim 8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf" address="unix:///run/containerd/s/788584713fa0d04555a4714f32dcd25b6fedbedeb9a0f98062d72799086f4b0c" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:23.753798 systemd[1]: Started cri-containerd-8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf.scope - libcontainer container 8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf. 
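Each "connecting to shim" entry names a ttrpc endpoint under /run/containerd/s/; note that the sandbox and the calico-apiserver container above reuse the same socket (788584713f…). Talking the shim's task API properly requires the containerd/ttrpc client libraries, but a quick liveness probe of such a socket needs only the standard library; a sketch, assuming the unix:// scheme is stripped first:

```go
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func main() {
	// Address copied from the "connecting to shim" entry above.
	addr := "unix:///run/containerd/s/788584713fa0d04555a4714f32dcd25b6fedbedeb9a0f98062d72799086f4b0c"
	path := strings.TrimPrefix(addr, "unix://")

	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Println("shim socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("shim socket accepts connections:", path)
}
```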
Jul 7 00:15:23.779648 containerd[2793]: time="2025-07-07T00:15:23.779590615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65bf96dcf9-86vdd,Uid:614b6b3f-ba24-4e96-a0b5-ccdb79126af8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf\"" Jul 7 00:15:23.781778 containerd[2793]: time="2025-07-07T00:15:23.781751295Z" level=info msg="CreateContainer within sandbox \"8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:15:23.785109 containerd[2793]: time="2025-07-07T00:15:23.785086254Z" level=info msg="Container f0c56f50c490d3267bdece6490b4f9b6b31307c24057ea51419cffd0af8761ab: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:23.788141 containerd[2793]: time="2025-07-07T00:15:23.788118694Z" level=info msg="CreateContainer within sandbox \"8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f0c56f50c490d3267bdece6490b4f9b6b31307c24057ea51419cffd0af8761ab\"" Jul 7 00:15:23.788436 containerd[2793]: time="2025-07-07T00:15:23.788415774Z" level=info msg="StartContainer for \"f0c56f50c490d3267bdece6490b4f9b6b31307c24057ea51419cffd0af8761ab\"" Jul 7 00:15:23.789340 containerd[2793]: time="2025-07-07T00:15:23.789319694Z" level=info msg="connecting to shim f0c56f50c490d3267bdece6490b4f9b6b31307c24057ea51419cffd0af8761ab" address="unix:///run/containerd/s/788584713fa0d04555a4714f32dcd25b6fedbedeb9a0f98062d72799086f4b0c" protocol=ttrpc version=3 Jul 7 00:15:23.815844 systemd[1]: Started cri-containerd-f0c56f50c490d3267bdece6490b4f9b6b31307c24057ea51419cffd0af8761ab.scope - libcontainer container f0c56f50c490d3267bdece6490b4f9b6b31307c24057ea51419cffd0af8761ab. 
Jul 7 00:15:23.843919 containerd[2793]: time="2025-07-07T00:15:23.843870767Z" level=info msg="StartContainer for \"f0c56f50c490d3267bdece6490b4f9b6b31307c24057ea51419cffd0af8761ab\" returns successfully" Jul 7 00:15:24.522453 containerd[2793]: time="2025-07-07T00:15:24.522410124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nm59z,Uid:7ef128c5-e42d-45ce-96d1-e1202d22f977,Namespace:kube-system,Attempt:0,}" Jul 7 00:15:24.605780 systemd-networkd[2698]: cali57147b8652e: Link UP Jul 7 00:15:24.606140 systemd-networkd[2698]: cali57147b8652e: Gained carrier Jul 7 00:15:24.607277 kubelet[4328]: I0707 00:15:24.607260 4328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.551 [INFO][7588] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--nm59z-eth0 coredns-674b8bbfcf- kube-system 7ef128c5-e42d-45ce-96d1-e1202d22f977 838 0 2025-07-07 00:14:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.1-a-1996c8fb49 coredns-674b8bbfcf-nm59z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali57147b8652e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" Namespace="kube-system" Pod="coredns-674b8bbfcf-nm59z" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--nm59z-" Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.552 [INFO][7588] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" Namespace="kube-system" Pod="coredns-674b8bbfcf-nm59z" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--nm59z-eth0" Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.572 [INFO][7614] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" HandleID="k8s-pod-network.4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" Workload="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--nm59z-eth0" Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.572 [INFO][7614] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" HandleID="k8s-pod-network.4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" Workload="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--nm59z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000592780), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.1-a-1996c8fb49", "pod":"coredns-674b8bbfcf-nm59z", "timestamp":"2025-07-07 00:15:24.572679758 +0000 UTC"}, Hostname:"ci-4344.1.1-a-1996c8fb49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.572 [INFO][7614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.572 [INFO][7614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.572 [INFO][7614] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-1996c8fb49' Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.581 [INFO][7614] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.585 [INFO][7614] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.591 [INFO][7614] ipam/ipam.go 511: Trying affinity for 192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.592 [INFO][7614] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.594 [INFO][7614] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.594 [INFO][7614] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.595 [INFO][7614] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89 Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.597 [INFO][7614] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.601 [INFO][7614] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.199/26] block=192.168.13.192/26 handle="k8s-pod-network.4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.601 [INFO][7614] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.199/26] handle="k8s-pod-network.4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.601 [INFO][7614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
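The RunPodSandbox entries embed pod metadata as a Go struct literal (&PodSandboxMetadata{Name:…,Uid:…,Namespace:…,Attempt:0,}). When mining a journal like this one, those fields can be pulled out with a simple regular expression; a sketch against the coredns entry above (the regex is ad hoc, tailored to this log format):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Fragment copied from the RunPodSandbox entry for coredns-674b8bbfcf-nm59z above.
	line := `RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nm59z,Uid:7ef128c5-e42d-45ce-96d1-e1202d22f977,Namespace:kube-system,Attempt:0,}`

	re := regexp.MustCompile(`PodSandboxMetadata\{Name:([^,]+),Uid:([^,]+),Namespace:([^,]+),`)
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Printf("pod=%s uid=%s namespace=%s\n", m[1], m[2], m[3])
	}
}
```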
Jul 7 00:15:24.613344 containerd[2793]: 2025-07-07 00:15:24.601 [INFO][7614] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.199/26] IPv6=[] ContainerID="4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" HandleID="k8s-pod-network.4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" Workload="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--nm59z-eth0" Jul 7 00:15:24.613754 containerd[2793]: 2025-07-07 00:15:24.603 [INFO][7588] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" Namespace="kube-system" Pod="coredns-674b8bbfcf-nm59z" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--nm59z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--nm59z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7ef128c5-e42d-45ce-96d1-e1202d22f977", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-1996c8fb49", ContainerID:"", Pod:"coredns-674b8bbfcf-nm59z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57147b8652e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:15:24.613754 containerd[2793]: 2025-07-07 00:15:24.603 [INFO][7588] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.199/32] ContainerID="4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" Namespace="kube-system" Pod="coredns-674b8bbfcf-nm59z" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--nm59z-eth0" Jul 7 00:15:24.613754 containerd[2793]: 2025-07-07 00:15:24.603 [INFO][7588] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57147b8652e ContainerID="4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" Namespace="kube-system" Pod="coredns-674b8bbfcf-nm59z" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--nm59z-eth0" Jul 7 00:15:24.613754 containerd[2793]: 2025-07-07 00:15:24.606 [INFO][7588] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" Namespace="kube-system" Pod="coredns-674b8bbfcf-nm59z" 
WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--nm59z-eth0" Jul 7 00:15:24.613754 containerd[2793]: 2025-07-07 00:15:24.606 [INFO][7588] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" Namespace="kube-system" Pod="coredns-674b8bbfcf-nm59z" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--nm59z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--nm59z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7ef128c5-e42d-45ce-96d1-e1202d22f977", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 14, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-1996c8fb49", ContainerID:"4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89", Pod:"coredns-674b8bbfcf-nm59z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57147b8652e", MAC:"e6:9a:6e:2c:6d:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:15:24.613754 containerd[2793]: 2025-07-07 00:15:24.611 [INFO][7588] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" Namespace="kube-system" Pod="coredns-674b8bbfcf-nm59z" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-coredns--674b8bbfcf--nm59z-eth0" Jul 7 00:15:24.614031 kubelet[4328]: I0707 00:15:24.613989 4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65bf96dcf9-86vdd" podStartSLOduration=26.613975953 podStartE2EDuration="26.613975953s" podCreationTimestamp="2025-07-07 00:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:15:24.613699513 +0000 UTC m=+40.166958642" watchObservedRunningTime="2025-07-07 00:15:24.613975953 +0000 UTC m=+40.167235082" Jul 7 00:15:24.624214 containerd[2793]: time="2025-07-07T00:15:24.624177752Z" level=info msg="connecting to shim 4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89" address="unix:///run/containerd/s/5badaa0076be49d4f9b71cb9ece1a761ae5304ec624ddb1738392da80fc2bf93" 
namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:24.647764 systemd-networkd[2698]: cali772a0e02897: Gained IPv6LL Jul 7 00:15:24.659796 systemd[1]: Started cri-containerd-4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89.scope - libcontainer container 4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89. Jul 7 00:15:24.686700 containerd[2793]: time="2025-07-07T00:15:24.686638065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nm59z,Uid:7ef128c5-e42d-45ce-96d1-e1202d22f977,Namespace:kube-system,Attempt:0,} returns sandbox id \"4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89\"" Jul 7 00:15:24.700624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2200119726.mount: Deactivated successfully. Jul 7 00:15:24.702189 containerd[2793]: time="2025-07-07T00:15:24.702162823Z" level=info msg="CreateContainer within sandbox \"4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:15:24.706720 containerd[2793]: time="2025-07-07T00:15:24.706692502Z" level=info msg="Container 2a33ca64dcb4acc11e3f4450865ae5cf9085b4b0f5aa49c9756f3caa0eadb654: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:24.709252 containerd[2793]: time="2025-07-07T00:15:24.709230822Z" level=info msg="CreateContainer within sandbox \"4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a33ca64dcb4acc11e3f4450865ae5cf9085b4b0f5aa49c9756f3caa0eadb654\"" Jul 7 00:15:24.709612 containerd[2793]: time="2025-07-07T00:15:24.709594542Z" level=info msg="StartContainer for \"2a33ca64dcb4acc11e3f4450865ae5cf9085b4b0f5aa49c9756f3caa0eadb654\"" Jul 7 00:15:24.710418 containerd[2793]: time="2025-07-07T00:15:24.710394582Z" level=info msg="connecting to shim 2a33ca64dcb4acc11e3f4450865ae5cf9085b4b0f5aa49c9756f3caa0eadb654" address="unix:///run/containerd/s/5badaa0076be49d4f9b71cb9ece1a761ae5304ec624ddb1738392da80fc2bf93" protocol=ttrpc version=3 Jul 7 00:15:24.737783 systemd[1]: Started cri-containerd-2a33ca64dcb4acc11e3f4450865ae5cf9085b4b0f5aa49c9756f3caa0eadb654.scope - libcontainer container 2a33ca64dcb4acc11e3f4450865ae5cf9085b4b0f5aa49c9756f3caa0eadb654. 
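The coredns WorkloadEndpoint dump a few lines above prints its ports in Go hex notation (Port:0x35 twice and Port:0x23c1); in decimal these are simply the usual CoreDNS ports. A two-line check:

```go
package main

import "fmt"

func main() {
	// Ports exactly as printed in the WorkloadEndpoint struct above.
	fmt.Println(0x35, 0x35, 0x23c1) // 53 (dns/UDP), 53 (dns-tcp/TCP), 9153 (metrics)
}
```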
Jul 7 00:15:24.758028 containerd[2793]: time="2025-07-07T00:15:24.758002896Z" level=info msg="StartContainer for \"2a33ca64dcb4acc11e3f4450865ae5cf9085b4b0f5aa49c9756f3caa0eadb654\" returns successfully" Jul 7 00:15:24.896344 containerd[2793]: time="2025-07-07T00:15:24.896306880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 7 00:15:24.896413 containerd[2793]: time="2025-07-07T00:15:24.896321520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:24.897067 containerd[2793]: time="2025-07-07T00:15:24.897045879Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:24.898800 containerd[2793]: time="2025-07-07T00:15:24.898766239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:24.899503 containerd[2793]: time="2025-07-07T00:15:24.899477359Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 1.642961357s" Jul 7 00:15:24.899535 containerd[2793]: time="2025-07-07T00:15:24.899507799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 7 00:15:24.900302 containerd[2793]: time="2025-07-07T00:15:24.900281079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 00:15:24.901681 containerd[2793]: time="2025-07-07T00:15:24.901655839Z" level=info msg="CreateContainer within sandbox \"db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 00:15:24.905492 containerd[2793]: time="2025-07-07T00:15:24.905461918Z" level=info msg="Container 5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:24.908759 containerd[2793]: time="2025-07-07T00:15:24.908736278Z" level=info msg="CreateContainer within sandbox \"db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\"" Jul 7 00:15:24.909082 containerd[2793]: time="2025-07-07T00:15:24.909059798Z" level=info msg="StartContainer for \"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\"" Jul 7 00:15:24.910045 containerd[2793]: time="2025-07-07T00:15:24.910023518Z" level=info msg="connecting to shim 5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555" address="unix:///run/containerd/s/0c051b9501369ab300c606a2cbecea6cc94df1ea7c97cf0b11190dc44035a804" protocol=ttrpc version=3 Jul 7 00:15:24.934841 systemd[1]: Started cri-containerd-5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555.scope - libcontainer container 5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555. 
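The goldmane pull above reports a 61,838,636-byte image fetched in 1.642961357s, i.e. roughly 59 MiB at about 36 MiB/s. A quick reproduction of that arithmetic from the logged values:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Size and duration as reported by the "Pulled image ... goldmane:v3.30.2" entry above.
	const sizeBytes = 61838636
	dur, _ := time.ParseDuration("1.642961357s")

	mib := float64(sizeBytes) / (1 << 20)
	fmt.Printf("%.1f MiB in %v => %.1f MiB/s\n", mib, dur, mib/dur.Seconds())
}
```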
Jul 7 00:15:24.974167 containerd[2793]: time="2025-07-07T00:15:24.974138030Z" level=info msg="StartContainer for \"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" returns successfully" Jul 7 00:15:25.223778 systemd-networkd[2698]: cali6a96a59e89c: Gained IPv6LL Jul 7 00:15:25.479731 systemd-networkd[2698]: cali3f6ae9b65b1: Gained IPv6LL Jul 7 00:15:25.521943 containerd[2793]: time="2025-07-07T00:15:25.521910329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgws5,Uid:a91e23ac-d148-41e5-b88b-f561933b89b5,Namespace:calico-system,Attempt:0,}" Jul 7 00:15:25.608246 systemd-networkd[2698]: cali4529ec3136a: Link UP Jul 7 00:15:25.608468 systemd-networkd[2698]: cali4529ec3136a: Gained carrier Jul 7 00:15:25.611317 kubelet[4328]: I0707 00:15:25.611301 4328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.550 [INFO][7823] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--1996c8fb49-k8s-csi--node--driver--lgws5-eth0 csi-node-driver- calico-system a91e23ac-d148-41e5-b88b-f561933b89b5 738 0 2025-07-07 00:15:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.1.1-a-1996c8fb49 csi-node-driver-lgws5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4529ec3136a [] [] }} ContainerID="183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" Namespace="calico-system" Pod="csi-node-driver-lgws5" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-csi--node--driver--lgws5-" Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.550 [INFO][7823] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" Namespace="calico-system" Pod="csi-node-driver-lgws5" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-csi--node--driver--lgws5-eth0" Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.570 [INFO][7848] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" HandleID="k8s-pod-network.183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" Workload="ci--4344.1.1--a--1996c8fb49-k8s-csi--node--driver--lgws5-eth0" Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.570 [INFO][7848] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" HandleID="k8s-pod-network.183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" Workload="ci--4344.1.1--a--1996c8fb49-k8s-csi--node--driver--lgws5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e1d20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-a-1996c8fb49", "pod":"csi-node-driver-lgws5", "timestamp":"2025-07-07 00:15:25.570644763 +0000 UTC"}, Hostname:"ci-4344.1.1-a-1996c8fb49", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.570 
[INFO][7848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.570 [INFO][7848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.570 [INFO][7848] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-1996c8fb49' Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.579 [INFO][7848] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.582 [INFO][7848] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.585 [INFO][7848] ipam/ipam.go 511: Trying affinity for 192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.586 [INFO][7848] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.588 [INFO][7848] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.192/26 host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.588 [INFO][7848] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.192/26 handle="k8s-pod-network.183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.589 [INFO][7848] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223 Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.600 [INFO][7848] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.192/26 handle="k8s-pod-network.183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.604 [INFO][7848] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.200/26] block=192.168.13.192/26 handle="k8s-pod-network.183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.604 [INFO][7848] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.200/26] handle="k8s-pod-network.183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" host="ci-4344.1.1-a-1996c8fb49" Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.604 [INFO][7848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
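The "Gained IPv6LL" events above mean systemd-networkd observed a link-local address come up on each cali* veth. If those addresses are EUI-64 derived (rather than kernel stable-privacy addresses, which this log does not reveal), they follow mechanically from the MACs recorded in the CNI traces; a sketch for cali6a96a59e89c, whose MAC e6:64:fe:e5:f9:40 appears earlier in this log:

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

// eui64LinkLocal derives the EUI-64 based IPv6 link-local address for a MAC:
// flip the universal/local bit of the first octet and insert ff:fe in the middle.
// Illustrative only; interfaces using stable-privacy addressing will differ.
func eui64LinkLocal(mac net.HardwareAddr) netip.Addr {
	var b [16]byte
	b[0], b[1] = 0xfe, 0x80 // fe80::/64 link-local prefix
	copy(b[8:], []byte{mac[0] ^ 0x02, mac[1], mac[2], 0xff, 0xfe, mac[3], mac[4], mac[5]})
	return netip.AddrFrom16(b)
}

func main() {
	// MAC reported for cali6a96a59e89c in the CNI trace above.
	mac, _ := net.ParseMAC("e6:64:fe:e5:f9:40")
	fmt.Println(eui64LinkLocal(mac)) // fe80::e464:feff:fee5:f940
}
```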
Jul 7 00:15:25.615487 containerd[2793]: 2025-07-07 00:15:25.604 [INFO][7848] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.200/26] IPv6=[] ContainerID="183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" HandleID="k8s-pod-network.183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" Workload="ci--4344.1.1--a--1996c8fb49-k8s-csi--node--driver--lgws5-eth0" Jul 7 00:15:25.615954 containerd[2793]: 2025-07-07 00:15:25.605 [INFO][7823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" Namespace="calico-system" Pod="csi-node-driver-lgws5" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-csi--node--driver--lgws5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--1996c8fb49-k8s-csi--node--driver--lgws5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a91e23ac-d148-41e5-b88b-f561933b89b5", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 15, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-1996c8fb49", ContainerID:"", Pod:"csi-node-driver-lgws5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4529ec3136a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:15:25.615954 containerd[2793]: 2025-07-07 00:15:25.606 [INFO][7823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.200/32] ContainerID="183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" Namespace="calico-system" Pod="csi-node-driver-lgws5" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-csi--node--driver--lgws5-eth0" Jul 7 00:15:25.615954 containerd[2793]: 2025-07-07 00:15:25.606 [INFO][7823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4529ec3136a ContainerID="183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" Namespace="calico-system" Pod="csi-node-driver-lgws5" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-csi--node--driver--lgws5-eth0" Jul 7 00:15:25.615954 containerd[2793]: 2025-07-07 00:15:25.608 [INFO][7823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" Namespace="calico-system" Pod="csi-node-driver-lgws5" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-csi--node--driver--lgws5-eth0" Jul 7 00:15:25.615954 containerd[2793]: 2025-07-07 00:15:25.608 [INFO][7823] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" Namespace="calico-system" Pod="csi-node-driver-lgws5" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-csi--node--driver--lgws5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--1996c8fb49-k8s-csi--node--driver--lgws5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a91e23ac-d148-41e5-b88b-f561933b89b5", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 15, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-1996c8fb49", ContainerID:"183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223", Pod:"csi-node-driver-lgws5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4529ec3136a", MAC:"86:88:f3:52:40:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:15:25.615954 containerd[2793]: 2025-07-07 00:15:25.614 [INFO][7823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" Namespace="calico-system" Pod="csi-node-driver-lgws5" WorkloadEndpoint="ci--4344.1.1--a--1996c8fb49-k8s-csi--node--driver--lgws5-eth0" Jul 7 00:15:25.617284 kubelet[4328]: I0707 00:15:25.617242 4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nm59z" podStartSLOduration=35.617229918 podStartE2EDuration="35.617229918s" podCreationTimestamp="2025-07-07 00:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:15:25.617015238 +0000 UTC m=+41.170274367" watchObservedRunningTime="2025-07-07 00:15:25.617229918 +0000 UTC m=+41.170489047" Jul 7 00:15:25.623969 kubelet[4328]: I0707 00:15:25.623928 4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-h7c7c" podStartSLOduration=20.397914077 podStartE2EDuration="22.623914277s" podCreationTimestamp="2025-07-07 00:15:03 +0000 UTC" firstStartedPulling="2025-07-07 00:15:22.674196759 +0000 UTC m=+38.227455888" lastFinishedPulling="2025-07-07 00:15:24.900196959 +0000 UTC m=+40.453456088" observedRunningTime="2025-07-07 00:15:25.623891717 +0000 UTC m=+41.177150846" watchObservedRunningTime="2025-07-07 00:15:25.623914277 +0000 UTC m=+41.177173406" Jul 7 00:15:25.625491 containerd[2793]: time="2025-07-07T00:15:25.625460957Z" level=info msg="connecting to shim 183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223" 
address="unix:///run/containerd/s/46669f2f4ab6b92f70331cd99045bb99da965b8baf07092e8f7e5f540c2aa56a" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:25.653787 systemd[1]: Started cri-containerd-183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223.scope - libcontainer container 183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223. Jul 7 00:15:25.671743 containerd[2793]: time="2025-07-07T00:15:25.671719912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgws5,Uid:a91e23ac-d148-41e5-b88b-f561933b89b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223\"" Jul 7 00:15:25.799804 systemd-networkd[2698]: cali57147b8652e: Gained IPv6LL Jul 7 00:15:26.518689 containerd[2793]: time="2025-07-07T00:15:26.518639420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:26.519074 containerd[2793]: time="2025-07-07T00:15:26.518674100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 7 00:15:26.519338 containerd[2793]: time="2025-07-07T00:15:26.519318340Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:26.520769 containerd[2793]: time="2025-07-07T00:15:26.520751900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:26.521380 containerd[2793]: time="2025-07-07T00:15:26.521358700Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.621048701s" Jul 7 00:15:26.521402 containerd[2793]: time="2025-07-07T00:15:26.521386660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 7 00:15:26.522066 containerd[2793]: time="2025-07-07T00:15:26.522050860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 00:15:26.527201 containerd[2793]: time="2025-07-07T00:15:26.527178139Z" level=info msg="CreateContainer within sandbox \"843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 00:15:26.530953 containerd[2793]: time="2025-07-07T00:15:26.530929699Z" level=info msg="Container d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:26.534192 containerd[2793]: time="2025-07-07T00:15:26.534169379Z" level=info msg="CreateContainer within sandbox \"843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\"" Jul 7 00:15:26.534480 containerd[2793]: time="2025-07-07T00:15:26.534460298Z" level=info 
msg="StartContainer for \"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\"" Jul 7 00:15:26.535409 containerd[2793]: time="2025-07-07T00:15:26.535389338Z" level=info msg="connecting to shim d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da" address="unix:///run/containerd/s/b7fa7a8f588f647e2a30f89ce7b05027da9b0a21bd4f39dcd802975a81862c6d" protocol=ttrpc version=3 Jul 7 00:15:26.561839 systemd[1]: Started cri-containerd-d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da.scope - libcontainer container d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da. Jul 7 00:15:26.590872 containerd[2793]: time="2025-07-07T00:15:26.590841493Z" level=info msg="StartContainer for \"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" returns successfully" Jul 7 00:15:26.615388 kubelet[4328]: I0707 00:15:26.615365 4328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:15:26.622456 kubelet[4328]: I0707 00:15:26.622414 4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85d4d4968-2tx7r" podStartSLOduration=19.797724655 podStartE2EDuration="22.622400209s" podCreationTimestamp="2025-07-07 00:15:04 +0000 UTC" firstStartedPulling="2025-07-07 00:15:23.697269026 +0000 UTC m=+39.250528155" lastFinishedPulling="2025-07-07 00:15:26.52194458 +0000 UTC m=+42.075203709" observedRunningTime="2025-07-07 00:15:26.621952169 +0000 UTC m=+42.175211298" watchObservedRunningTime="2025-07-07 00:15:26.622400209 +0000 UTC m=+42.175659338" Jul 7 00:15:26.951800 systemd-networkd[2698]: cali4529ec3136a: Gained IPv6LL Jul 7 00:15:27.493008 containerd[2793]: time="2025-07-07T00:15:27.492967201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:27.493128 containerd[2793]: time="2025-07-07T00:15:27.492984041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 7 00:15:27.493641 containerd[2793]: time="2025-07-07T00:15:27.493622521Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:27.495131 containerd[2793]: time="2025-07-07T00:15:27.495107121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:27.495755 containerd[2793]: time="2025-07-07T00:15:27.495727361Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 973.649381ms" Jul 7 00:15:27.495787 containerd[2793]: time="2025-07-07T00:15:27.495758241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 7 00:15:27.497717 containerd[2793]: time="2025-07-07T00:15:27.497693080Z" level=info msg="CreateContainer within sandbox \"183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 
00:15:27.502382 containerd[2793]: time="2025-07-07T00:15:27.502359000Z" level=info msg="Container 2b920c5b316f0deec257706a3c69e2bdc82985fb10725d9cff46d8d7f86a947b: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:27.506399 containerd[2793]: time="2025-07-07T00:15:27.506369839Z" level=info msg="CreateContainer within sandbox \"183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2b920c5b316f0deec257706a3c69e2bdc82985fb10725d9cff46d8d7f86a947b\"" Jul 7 00:15:27.506723 containerd[2793]: time="2025-07-07T00:15:27.506705479Z" level=info msg="StartContainer for \"2b920c5b316f0deec257706a3c69e2bdc82985fb10725d9cff46d8d7f86a947b\"" Jul 7 00:15:27.508025 containerd[2793]: time="2025-07-07T00:15:27.508002959Z" level=info msg="connecting to shim 2b920c5b316f0deec257706a3c69e2bdc82985fb10725d9cff46d8d7f86a947b" address="unix:///run/containerd/s/46669f2f4ab6b92f70331cd99045bb99da965b8baf07092e8f7e5f540c2aa56a" protocol=ttrpc version=3 Jul 7 00:15:27.531843 systemd[1]: Started cri-containerd-2b920c5b316f0deec257706a3c69e2bdc82985fb10725d9cff46d8d7f86a947b.scope - libcontainer container 2b920c5b316f0deec257706a3c69e2bdc82985fb10725d9cff46d8d7f86a947b. Jul 7 00:15:27.567042 containerd[2793]: time="2025-07-07T00:15:27.567011033Z" level=info msg="StartContainer for \"2b920c5b316f0deec257706a3c69e2bdc82985fb10725d9cff46d8d7f86a947b\" returns successfully" Jul 7 00:15:27.567890 containerd[2793]: time="2025-07-07T00:15:27.567870233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 00:15:27.617976 kubelet[4328]: I0707 00:15:27.617949 4328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:15:28.612059 containerd[2793]: time="2025-07-07T00:15:28.612021254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:28.612357 containerd[2793]: time="2025-07-07T00:15:28.612075694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 7 00:15:28.612716 containerd[2793]: time="2025-07-07T00:15:28.612697854Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:28.614131 containerd[2793]: time="2025-07-07T00:15:28.614114694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:15:28.614754 containerd[2793]: time="2025-07-07T00:15:28.614730054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.046830101s" Jul 7 00:15:28.614778 containerd[2793]: time="2025-07-07T00:15:28.614761974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 7 00:15:28.616771 containerd[2793]: 
time="2025-07-07T00:15:28.616749854Z" level=info msg="CreateContainer within sandbox \"183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 00:15:28.621064 containerd[2793]: time="2025-07-07T00:15:28.621038693Z" level=info msg="Container 70af949661e46120c354c344dfdb67f6c17204b93c01b91da1b5da0d19488c74: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:28.625549 containerd[2793]: time="2025-07-07T00:15:28.625524013Z" level=info msg="CreateContainer within sandbox \"183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"70af949661e46120c354c344dfdb67f6c17204b93c01b91da1b5da0d19488c74\"" Jul 7 00:15:28.625885 containerd[2793]: time="2025-07-07T00:15:28.625866733Z" level=info msg="StartContainer for \"70af949661e46120c354c344dfdb67f6c17204b93c01b91da1b5da0d19488c74\"" Jul 7 00:15:28.627189 containerd[2793]: time="2025-07-07T00:15:28.627166973Z" level=info msg="connecting to shim 70af949661e46120c354c344dfdb67f6c17204b93c01b91da1b5da0d19488c74" address="unix:///run/containerd/s/46669f2f4ab6b92f70331cd99045bb99da965b8baf07092e8f7e5f540c2aa56a" protocol=ttrpc version=3 Jul 7 00:15:28.661839 systemd[1]: Started cri-containerd-70af949661e46120c354c344dfdb67f6c17204b93c01b91da1b5da0d19488c74.scope - libcontainer container 70af949661e46120c354c344dfdb67f6c17204b93c01b91da1b5da0d19488c74. Jul 7 00:15:28.689149 containerd[2793]: time="2025-07-07T00:15:28.689118607Z" level=info msg="StartContainer for \"70af949661e46120c354c344dfdb67f6c17204b93c01b91da1b5da0d19488c74\" returns successfully" Jul 7 00:15:29.379467 kubelet[4328]: I0707 00:15:29.379429 4328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:15:29.419382 containerd[2793]: time="2025-07-07T00:15:29.419351262Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"c5f0abbd1d8d74b249cfbe73dbe03d15bad21c5d6115bf2344ead72a2e8976b9\" pid:8105 exited_at:{seconds:1751847329 nanos:419086542}" Jul 7 00:15:29.462937 containerd[2793]: time="2025-07-07T00:15:29.462899418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"9b49690437f901a09ce4a34540d1f5780b06c605d021d3a6f769e17743a4eadc\" pid:8128 exited_at:{seconds:1751847329 nanos:462753978}" Jul 7 00:15:29.574058 kubelet[4328]: I0707 00:15:29.574039 4328 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 00:15:29.574058 kubelet[4328]: I0707 00:15:29.574062 4328 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 00:15:29.632094 kubelet[4328]: I0707 00:15:29.632010 4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lgws5" podStartSLOduration=22.689281581 podStartE2EDuration="25.631996163s" podCreationTimestamp="2025-07-07 00:15:04 +0000 UTC" firstStartedPulling="2025-07-07 00:15:25.672586712 +0000 UTC m=+41.225845841" lastFinishedPulling="2025-07-07 00:15:28.615301294 +0000 UTC m=+44.168560423" observedRunningTime="2025-07-07 00:15:29.631368883 +0000 UTC m=+45.184628012" watchObservedRunningTime="2025-07-07 
00:15:29.631996163 +0000 UTC m=+45.185255292" Jul 7 00:15:30.017876 kubelet[4328]: I0707 00:15:30.017794 4328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:15:35.649758 containerd[2793]: time="2025-07-07T00:15:35.649724694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"e40a68a512e479acbd4b02ad3fcf3f4b649225a109c63696eff4691cefb17386\" pid:8170 exited_at:{seconds:1751847335 nanos:649495974}" Jul 7 00:15:38.724645 kubelet[4328]: I0707 00:15:38.724524 4328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:15:38.784137 containerd[2793]: time="2025-07-07T00:15:38.784101408Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"056dcb3f741016e2238676f6ec3522fc5feacfb94e1c626ca7a6bba627318e4f\" pid:8215 exited_at:{seconds:1751847338 nanos:783916368}" Jul 7 00:15:38.846508 containerd[2793]: time="2025-07-07T00:15:38.846027125Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"6241e3205454be8ceb8f0d05fe5a96bc3ef426660ba017c5c26ec35279571cf3\" pid:8252 exited_at:{seconds:1751847338 nanos:845839285}" Jul 7 00:15:39.358451 kubelet[4328]: I0707 00:15:39.358416 4328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:15:46.662247 containerd[2793]: time="2025-07-07T00:15:46.662150634Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"3c561b49336874e8f2ee969b00eef6a7960c1c193ab2d1871a459cf2efa3952a\" pid:8300 exited_at:{seconds:1751847346 nanos:661941074}" Jul 7 00:15:59.462413 containerd[2793]: time="2025-07-07T00:15:59.462374104Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"7b857ec9914c0c605f31437419ccd90e8b194b0a480fe5ab9d92caa50c8cb860\" pid:8363 exited_at:{seconds:1751847359 nanos:462240104}" Jul 7 00:16:08.853835 containerd[2793]: time="2025-07-07T00:16:08.853795377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"7450158f32e3eff7b2bb1173b07e298428a82263b0f684e091d68276e94d5d7f\" pid:8386 exited_at:{seconds:1751847368 nanos:853573695}" Jul 7 00:16:16.658069 containerd[2793]: time="2025-07-07T00:16:16.658035783Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"08b16afe0c3df349e0b0b3fb4e744ec9e9da493f9bd031929e047f74eadd579a\" pid:8426 exited_at:{seconds:1751847376 nanos:657859382}" Jul 7 00:16:29.482818 containerd[2793]: time="2025-07-07T00:16:29.482767111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"6cf38acd1d256615ff6e566096f48c934d2d617d034a1a09cd586d1c9efeb18d\" pid:8467 exited_at:{seconds:1751847389 nanos:482563550}" Jul 7 00:16:30.291092 containerd[2793]: time="2025-07-07T00:16:30.291046109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"2f4a78c33853bc9fbb77269e0f57ff81f4bbf892b3c78a33d690b6479e79b18f\" pid:8491 exited_at:{seconds:1751847390 nanos:290900749}" Jul 7 
00:16:35.641940 containerd[2793]: time="2025-07-07T00:16:35.641899534Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"40fa9b08e9d5b269c694e95f2aab47d6d9299d0fb6e0ed679df3c13e3d00e516\" pid:8517 exited_at:{seconds:1751847395 nanos:641622293}" Jul 7 00:16:38.846900 containerd[2793]: time="2025-07-07T00:16:38.846858375Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"98cbc31b88e3cbb2365a63086be3ba9e5f3b509091785cb0f3be7d79e4111fd5\" pid:8562 exited_at:{seconds:1751847398 nanos:846665014}" Jul 7 00:16:46.655100 containerd[2793]: time="2025-07-07T00:16:46.655055150Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"827604915adc8d8324e3a20fde5cdfec4b7ed49e432f7686ada9ab14025c19ee\" pid:8603 exited_at:{seconds:1751847406 nanos:654784909}" Jul 7 00:16:59.460437 containerd[2793]: time="2025-07-07T00:16:59.460375247Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"728ef0b1f8936bc687d614c147df1731b3da22bdec63196da9fc2e96d3b2ddfb\" pid:8680 exited_at:{seconds:1751847419 nanos:460239487}" Jul 7 00:17:08.842493 containerd[2793]: time="2025-07-07T00:17:08.842445498Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"647189f0cb5c9aae843448e09812ba97f5d49c1081c55e1210dcdf9ef5ed1466\" pid:8701 exited_at:{seconds:1751847428 nanos:842173137}" Jul 7 00:17:16.655603 containerd[2793]: time="2025-07-07T00:17:16.655515058Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"6588541a2bda3f831ce6c91c06a6c9d7c01ab8271616e0ea3d4e2261b7acf4c6\" pid:8762 exited_at:{seconds:1751847436 nanos:655269418}" Jul 7 00:17:29.463730 containerd[2793]: time="2025-07-07T00:17:29.463695996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"311e7990ff8e2f19556c575520532c150b9d0098a32d791a4e1f587728d272db\" pid:8808 exited_at:{seconds:1751847449 nanos:463517716}" Jul 7 00:17:30.293693 containerd[2793]: time="2025-07-07T00:17:30.293663787Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"67e761f3085d8786f589b54234ab1d793400d4d067a4dc85af5d1faca6bb8fa2\" pid:8832 exited_at:{seconds:1751847450 nanos:293487867}" Jul 7 00:17:35.638146 containerd[2793]: time="2025-07-07T00:17:35.638110109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"d81c09fe92baf5a137c863e5e11ff9d926943be641288e4cc7982af5537152cb\" pid:8856 exited_at:{seconds:1751847455 nanos:637898229}" Jul 7 00:17:38.853977 containerd[2793]: time="2025-07-07T00:17:38.853938615Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"95ee52ab45ed8430b6465ca2ba82f27a2f8ef57b3664db7322ae0061f245856f\" pid:8895 exited_at:{seconds:1751847458 nanos:853699495}" Jul 7 00:17:46.091217 kubelet[4328]: I0707 00:17:46.091154 4328 ???:1] "http: TLS handshake error from 207.90.244.22:49094: EOF" Jul 7 
00:17:46.099100 kubelet[4328]: I0707 00:17:46.099087 4328 ???:1] "http: TLS handshake error from 207.90.244.22:49096: EOF" Jul 7 00:17:46.101406 kubelet[4328]: I0707 00:17:46.101396 4328 ???:1] "http: TLS handshake error from 207.90.244.22:49098: tls: no cipher suite supported by both client and server" Jul 7 00:17:46.105184 kubelet[4328]: I0707 00:17:46.105171 4328 ???:1] "http: TLS handshake error from 207.90.244.22:49108: tls: client requested unsupported application protocols ([http/0.9 http/1.0 spdy/1 spdy/2 spdy/3 h2c hq])" Jul 7 00:17:46.109236 kubelet[4328]: I0707 00:17:46.109225 4328 ???:1] "http: TLS handshake error from 207.90.244.22:49124: tls: client requested unsupported application protocols ([hq h2c spdy/3 spdy/2 spdy/1 http/1.0 http/0.9])" Jul 7 00:17:46.113053 kubelet[4328]: I0707 00:17:46.113042 4328 ???:1] "http: TLS handshake error from 207.90.244.22:49138: tls: client offered only unsupported versions: [302 301]" Jul 7 00:17:46.124693 kubelet[4328]: I0707 00:17:46.124674 4328 ???:1] "http: TLS handshake error from 207.90.244.22:49150: EOF" Jul 7 00:17:46.133202 kubelet[4328]: I0707 00:17:46.133188 4328 ???:1] "http: TLS handshake error from 207.90.244.22:49164: EOF" Jul 7 00:17:46.141455 kubelet[4328]: I0707 00:17:46.141441 4328 ???:1] "http: TLS handshake error from 207.90.244.22:49178: EOF" Jul 7 00:17:46.149967 kubelet[4328]: I0707 00:17:46.149957 4328 ???:1] "http: TLS handshake error from 207.90.244.22:49192: EOF" Jul 7 00:17:46.191456 kubelet[4328]: I0707 00:17:46.191442 4328 ???:1] "http: TLS handshake error from 207.90.244.22:49202: tls: client offered only unsupported versions: [301]" Jul 7 00:17:46.240609 kubelet[4328]: I0707 00:17:46.240595 4328 ???:1] "http: TLS handshake error from 207.90.244.22:49204: tls: unsupported SSLv2 handshake received" Jul 7 00:17:46.283930 kubelet[4328]: I0707 00:17:46.283916 4328 ???:1] "http: TLS handshake error from 207.90.244.22:49214: tls: client offered only unsupported versions: []" Jul 7 00:17:46.545155 kubelet[4328]: I0707 00:17:46.545135 4328 ???:1] "http: TLS handshake error from 207.90.244.22:49230: tls: client offered only unsupported versions: [302 301]" Jul 7 00:17:46.660073 containerd[2793]: time="2025-07-07T00:17:46.660027031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"014d17c5b881a1c6e43656ffa3850bd754f36c2db51f5a84241944ec297c23b1\" pid:8939 exited_at:{seconds:1751847466 nanos:659758831}" Jul 7 00:17:59.460656 containerd[2793]: time="2025-07-07T00:17:59.460608724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"a4bf0b6bdff42dbe34a152767f0b59bd72e977ea87c637f4fb2f89a433d12999\" pid:8981 exited_at:{seconds:1751847479 nanos:460463204}" Jul 7 00:18:01.853819 update_engine[2785]: I20250707 00:18:01.853760 2785 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 7 00:18:01.853819 update_engine[2785]: I20250707 00:18:01.853816 2785 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 7 00:18:01.854197 update_engine[2785]: I20250707 00:18:01.854044 2785 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 7 00:18:01.854370 update_engine[2785]: I20250707 00:18:01.854353 2785 omaha_request_params.cc:62] Current group set to beta Jul 7 00:18:01.854441 update_engine[2785]: I20250707 00:18:01.854429 2785 update_attempter.cc:499] 
Already updated boot flags. Skipping. Jul 7 00:18:01.854462 update_engine[2785]: I20250707 00:18:01.854439 2785 update_attempter.cc:643] Scheduling an action processor start. Jul 7 00:18:01.854462 update_engine[2785]: I20250707 00:18:01.854453 2785 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 00:18:01.854499 update_engine[2785]: I20250707 00:18:01.854476 2785 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 7 00:18:01.854534 update_engine[2785]: I20250707 00:18:01.854522 2785 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 00:18:01.854554 update_engine[2785]: I20250707 00:18:01.854530 2785 omaha_request_action.cc:272] Request: Jul 7 00:18:01.854554 update_engine[2785]: Jul 7 00:18:01.854554 update_engine[2785]: Jul 7 00:18:01.854554 update_engine[2785]: Jul 7 00:18:01.854554 update_engine[2785]: Jul 7 00:18:01.854554 update_engine[2785]: Jul 7 00:18:01.854554 update_engine[2785]: Jul 7 00:18:01.854554 update_engine[2785]: Jul 7 00:18:01.854554 update_engine[2785]: Jul 7 00:18:01.854554 update_engine[2785]: I20250707 00:18:01.854537 2785 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:18:01.854814 locksmithd[2822]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 7 00:18:01.855589 update_engine[2785]: I20250707 00:18:01.855569 2785 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:18:01.855911 update_engine[2785]: I20250707 00:18:01.855890 2785 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 00:18:01.856409 update_engine[2785]: E20250707 00:18:01.856392 2785 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:18:01.856452 update_engine[2785]: I20250707 00:18:01.856441 2785 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 7 00:18:08.848432 containerd[2793]: time="2025-07-07T00:18:08.848391485Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"bf8f73efbadc380586520c53132dc62829eb3bbfc5818f2e8f5911673ba10f96\" pid:9006 exited_at:{seconds:1751847488 nanos:848171005}" Jul 7 00:18:11.841756 update_engine[2785]: I20250707 00:18:11.841691 2785 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:18:11.842533 update_engine[2785]: I20250707 00:18:11.842262 2785 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:18:11.842533 update_engine[2785]: I20250707 00:18:11.842497 2785 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 7 00:18:11.842860 update_engine[2785]: E20250707 00:18:11.842783 2785 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:18:11.842860 update_engine[2785]: I20250707 00:18:11.842839 2785 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 7 00:18:16.665164 containerd[2793]: time="2025-07-07T00:18:16.665119021Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"983374e764be6430d26295621f5e9b103dd9107210af5ce9680d03e95e38a09b\" pid:9044 exited_at:{seconds:1751847496 nanos:664871741}" Jul 7 00:18:21.842233 update_engine[2785]: I20250707 00:18:21.841683 2785 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:18:21.842233 update_engine[2785]: I20250707 00:18:21.841971 2785 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:18:21.842233 update_engine[2785]: I20250707 00:18:21.842191 2785 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 00:18:21.842700 update_engine[2785]: E20250707 00:18:21.842653 2785 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:18:21.842805 update_engine[2785]: I20250707 00:18:21.842787 2785 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 7 00:18:29.462576 containerd[2793]: time="2025-07-07T00:18:29.462538987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"c2fd22752161f1a0e6ad4e963c25d3684fa6d4b1832e4165433a6bbe52c8d42c\" pid:9109 exited_at:{seconds:1751847509 nanos:462357067}" Jul 7 00:18:30.288514 containerd[2793]: time="2025-07-07T00:18:30.288473443Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"59440c22d828e0c36597fe92da6592c939b25d44e33f53eb049ebb1bf4566196\" pid:9131 exited_at:{seconds:1751847510 nanos:288329362}" Jul 7 00:18:31.842092 update_engine[2785]: I20250707 00:18:31.841688 2785 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:18:31.842092 update_engine[2785]: I20250707 00:18:31.841949 2785 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:18:31.842399 update_engine[2785]: I20250707 00:18:31.842176 2785 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 00:18:31.842530 update_engine[2785]: E20250707 00:18:31.842509 2785 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:18:31.842641 update_engine[2785]: I20250707 00:18:31.842624 2785 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 7 00:18:31.842711 update_engine[2785]: I20250707 00:18:31.842696 2785 omaha_request_action.cc:617] Omaha request response: Jul 7 00:18:31.842842 update_engine[2785]: E20250707 00:18:31.842825 2785 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 7 00:18:31.842905 update_engine[2785]: I20250707 00:18:31.842892 2785 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 7 00:18:31.842953 update_engine[2785]: I20250707 00:18:31.842940 2785 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 00:18:31.842995 update_engine[2785]: I20250707 00:18:31.842983 2785 update_attempter.cc:306] Processing Done. 
Jul 7 00:18:31.843046 update_engine[2785]: E20250707 00:18:31.843034 2785 update_attempter.cc:619] Update failed. Jul 7 00:18:31.843088 update_engine[2785]: I20250707 00:18:31.843076 2785 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 7 00:18:31.843561 update_engine[2785]: I20250707 00:18:31.843117 2785 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 7 00:18:31.843561 update_engine[2785]: I20250707 00:18:31.843127 2785 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 7 00:18:31.843561 update_engine[2785]: I20250707 00:18:31.843186 2785 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 00:18:31.843561 update_engine[2785]: I20250707 00:18:31.843205 2785 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 00:18:31.843561 update_engine[2785]: I20250707 00:18:31.843212 2785 omaha_request_action.cc:272] Request: Jul 7 00:18:31.843561 update_engine[2785]: Jul 7 00:18:31.843561 update_engine[2785]: Jul 7 00:18:31.843561 update_engine[2785]: Jul 7 00:18:31.843561 update_engine[2785]: Jul 7 00:18:31.843561 update_engine[2785]: Jul 7 00:18:31.843561 update_engine[2785]: Jul 7 00:18:31.843561 update_engine[2785]: I20250707 00:18:31.843217 2785 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:18:31.843561 update_engine[2785]: I20250707 00:18:31.843339 2785 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:18:31.843561 update_engine[2785]: I20250707 00:18:31.843528 2785 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 00:18:31.843852 locksmithd[2822]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 7 00:18:31.844245 update_engine[2785]: E20250707 00:18:31.844116 2785 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:18:31.844245 update_engine[2785]: I20250707 00:18:31.844157 2785 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 7 00:18:31.844245 update_engine[2785]: I20250707 00:18:31.844163 2785 omaha_request_action.cc:617] Omaha request response: Jul 7 00:18:31.844245 update_engine[2785]: I20250707 00:18:31.844168 2785 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 00:18:31.844245 update_engine[2785]: I20250707 00:18:31.844172 2785 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 00:18:31.844245 update_engine[2785]: I20250707 00:18:31.844177 2785 update_attempter.cc:306] Processing Done. Jul 7 00:18:31.844245 update_engine[2785]: I20250707 00:18:31.844182 2785 update_attempter.cc:310] Error event sent. 
Jul 7 00:18:31.844245 update_engine[2785]: I20250707 00:18:31.844188 2785 update_check_scheduler.cc:74] Next update check in 44m31s Jul 7 00:18:31.844416 locksmithd[2822]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 7 00:18:35.644891 containerd[2793]: time="2025-07-07T00:18:35.644844863Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"ad06d19f96375063fd4c62b918f27b2c785813cd3363b73135c297ac8f767f93\" pid:9152 exited_at:{seconds:1751847515 nanos:644612942}" Jul 7 00:18:38.855884 containerd[2793]: time="2025-07-07T00:18:38.855840343Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"e538ad752ee4bd0f1a68894a03217e9ef2e0ef0d520d85c7ac591dd58d35fd49\" pid:9191 exited_at:{seconds:1751847518 nanos:855635382}" Jul 7 00:18:46.664373 containerd[2793]: time="2025-07-07T00:18:46.664293871Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"553f32d5208164f9bb399773be57dcb41f1a1517d128253189ec23b52e03590f\" pid:9233 exited_at:{seconds:1751847526 nanos:664025591}" Jul 7 00:18:59.460498 containerd[2793]: time="2025-07-07T00:18:59.460450947Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"6c2928a1835bde7e475cc7618e2df66446936b4a1888ed8324d015df3677b192\" pid:9273 exited_at:{seconds:1751847539 nanos:460264107}" Jul 7 00:19:08.843645 containerd[2793]: time="2025-07-07T00:19:08.843606133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"12c8f2cb01608931d35f3581a15e2f1e0dc94e67714041981d764f6ab97c3d90\" pid:9297 exited_at:{seconds:1751847548 nanos:843397932}" Jul 7 00:19:16.655180 containerd[2793]: time="2025-07-07T00:19:16.655132789Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"864df67539c59e7f9801338106029a979420149ae763281cb28c666d02b387b0\" pid:9358 exited_at:{seconds:1751847556 nanos:654871429}" Jul 7 00:19:29.462592 containerd[2793]: time="2025-07-07T00:19:29.462554391Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"90ab748da6d1bf559ae4ab45e9b32cb1c5b1be7dc1caf9eddc04f1936a7c2ec1\" pid:9409 exited_at:{seconds:1751847569 nanos:462384631}" Jul 7 00:19:30.300650 containerd[2793]: time="2025-07-07T00:19:30.300616361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"14a4ed015b4d31659f4fc3c4b45dc9583821636e59d1486e0bf2c9143640abee\" pid:9430 exited_at:{seconds:1751847570 nanos:300410080}" Jul 7 00:19:35.642839 containerd[2793]: time="2025-07-07T00:19:35.642795445Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"380c1585865222bfc9e89301f0e2f57966fc29c80bc33a0edc5891e2e7bdde29\" pid:9454 exited_at:{seconds:1751847575 nanos:642556445}" Jul 7 00:19:38.849222 containerd[2793]: time="2025-07-07T00:19:38.849168212Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"4e5942d3d490896a368fc4b731d2cf8a3f4cd97e77bf5f723e5e041dd6ccb0c1\" pid:9495 exited_at:{seconds:1751847578 nanos:848983772}" Jul 7 00:19:40.684521 containerd[2793]: time="2025-07-07T00:19:40.684398265Z" level=warning msg="container event discarded" container=0f097dbeeb414148b292cc4e6b6b56c945b89b7bb2b38a4636de8b7c8d9af294 type=CONTAINER_CREATED_EVENT Jul 7 00:19:40.684521 containerd[2793]: time="2025-07-07T00:19:40.684482545Z" level=warning msg="container event discarded" container=0f097dbeeb414148b292cc4e6b6b56c945b89b7bb2b38a4636de8b7c8d9af294 type=CONTAINER_STARTED_EVENT Jul 7 00:19:40.698695 containerd[2793]: time="2025-07-07T00:19:40.698657922Z" level=warning msg="container event discarded" container=1efeea5f619d824e4f1fa66beca1cc91338d07b34c82aee2ba8cbe7db7a01567 type=CONTAINER_CREATED_EVENT Jul 7 00:19:40.698825 containerd[2793]: time="2025-07-07T00:19:40.698794683Z" level=warning msg="container event discarded" container=1efeea5f619d824e4f1fa66beca1cc91338d07b34c82aee2ba8cbe7db7a01567 type=CONTAINER_STARTED_EVENT Jul 7 00:19:40.715029 containerd[2793]: time="2025-07-07T00:19:40.714969503Z" level=warning msg="container event discarded" container=92a7ad97749626c3c900615f38ecf9a7ee73fbbed003952fdd366c9fa86b3f34 type=CONTAINER_CREATED_EVENT Jul 7 00:19:40.715029 containerd[2793]: time="2025-07-07T00:19:40.714996583Z" level=warning msg="container event discarded" container=92a7ad97749626c3c900615f38ecf9a7ee73fbbed003952fdd366c9fa86b3f34 type=CONTAINER_STARTED_EVENT Jul 7 00:19:40.715029 containerd[2793]: time="2025-07-07T00:19:40.715003743Z" level=warning msg="container event discarded" container=5b23b610aae036b80864d8ee81114ac0071df1986864c16ee31f87d7275ff831 type=CONTAINER_CREATED_EVENT Jul 7 00:19:40.715029 containerd[2793]: time="2025-07-07T00:19:40.715010223Z" level=warning msg="container event discarded" container=5c0e66a6816b597cae3a506b55600996f5d4c1c367f420efbf43a7060ccfbc31 type=CONTAINER_CREATED_EVENT Jul 7 00:19:40.726206 containerd[2793]: time="2025-07-07T00:19:40.726173597Z" level=warning msg="container event discarded" container=04d8400a4413563455243978fccb219411f9d485f575256d561d3972fa834c10 type=CONTAINER_CREATED_EVENT Jul 7 00:19:40.770419 containerd[2793]: time="2025-07-07T00:19:40.770369652Z" level=warning msg="container event discarded" container=5b23b610aae036b80864d8ee81114ac0071df1986864c16ee31f87d7275ff831 type=CONTAINER_STARTED_EVENT Jul 7 00:19:40.770419 containerd[2793]: time="2025-07-07T00:19:40.770405892Z" level=warning msg="container event discarded" container=04d8400a4413563455243978fccb219411f9d485f575256d561d3972fa834c10 type=CONTAINER_STARTED_EVENT Jul 7 00:19:40.770419 containerd[2793]: time="2025-07-07T00:19:40.770416772Z" level=warning msg="container event discarded" container=5c0e66a6816b597cae3a506b55600996f5d4c1c367f420efbf43a7060ccfbc31 type=CONTAINER_STARTED_EVENT Jul 7 00:19:46.655674 containerd[2793]: time="2025-07-07T00:19:46.655627916Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"c3a99578d726ede7681571d075a72db7b7a43cdb91b117196ad5468449ff1d28\" pid:9537 exited_at:{seconds:1751847586 nanos:655360036}" Jul 7 00:19:50.831808 containerd[2793]: time="2025-07-07T00:19:50.831743602Z" level=warning msg="container event discarded" container=c848c7222a0766053698374084c4c330e30e78aec166970f08f05d7ea21cda01 type=CONTAINER_CREATED_EVENT Jul 7 00:19:50.831808 
containerd[2793]: time="2025-07-07T00:19:50.831789282Z" level=warning msg="container event discarded" container=c848c7222a0766053698374084c4c330e30e78aec166970f08f05d7ea21cda01 type=CONTAINER_STARTED_EVENT Jul 7 00:19:50.842984 containerd[2793]: time="2025-07-07T00:19:50.842946096Z" level=warning msg="container event discarded" container=90e6fd50bf0f5401914aac8e5f9fc64fcd61dc849b84861c19ddc68e6c474fc3 type=CONTAINER_CREATED_EVENT Jul 7 00:19:50.913239 containerd[2793]: time="2025-07-07T00:19:50.913214264Z" level=warning msg="container event discarded" container=90e6fd50bf0f5401914aac8e5f9fc64fcd61dc849b84861c19ddc68e6c474fc3 type=CONTAINER_STARTED_EVENT Jul 7 00:19:50.952445 containerd[2793]: time="2025-07-07T00:19:50.952424433Z" level=warning msg="container event discarded" container=cd471e93ee843663ed67e0d26504f26244999926a9e40d8e143bef94122d7613 type=CONTAINER_CREATED_EVENT Jul 7 00:19:50.952445 containerd[2793]: time="2025-07-07T00:19:50.952440793Z" level=warning msg="container event discarded" container=cd471e93ee843663ed67e0d26504f26244999926a9e40d8e143bef94122d7613 type=CONTAINER_STARTED_EVENT Jul 7 00:19:52.520053 containerd[2793]: time="2025-07-07T00:19:52.519980226Z" level=warning msg="container event discarded" container=ff73fd4c57b07432971fa3f3dc3661cfe112e99c2cb0a29b8c5b05f3c440c480 type=CONTAINER_CREATED_EVENT Jul 7 00:19:52.569165 containerd[2793]: time="2025-07-07T00:19:52.569133007Z" level=warning msg="container event discarded" container=ff73fd4c57b07432971fa3f3dc3661cfe112e99c2cb0a29b8c5b05f3c440c480 type=CONTAINER_STARTED_EVENT Jul 7 00:19:59.454472 containerd[2793]: time="2025-07-07T00:19:59.454435541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"c1aa221157ff6c5590d549bf025dec899dd07b2e568704cae76346d933369bad\" pid:9584 exited_at:{seconds:1751847599 nanos:454261461}" Jul 7 00:20:03.916569 containerd[2793]: time="2025-07-07T00:20:03.916463973Z" level=warning msg="container event discarded" container=6e0e5dd03c1107d8f54ffb7e1932efa4dbd78948ffd2f04ba55218bde6131950 type=CONTAINER_CREATED_EVENT Jul 7 00:20:03.916569 containerd[2793]: time="2025-07-07T00:20:03.916535453Z" level=warning msg="container event discarded" container=6e0e5dd03c1107d8f54ffb7e1932efa4dbd78948ffd2f04ba55218bde6131950 type=CONTAINER_STARTED_EVENT Jul 7 00:20:04.307637 containerd[2793]: time="2025-07-07T00:20:04.307522939Z" level=warning msg="container event discarded" container=41485e577c85e047f1b19bc2166a8dd8786df9e239c0d2dba9f72fae76b613da type=CONTAINER_CREATED_EVENT Jul 7 00:20:04.307637 containerd[2793]: time="2025-07-07T00:20:04.307568739Z" level=warning msg="container event discarded" container=41485e577c85e047f1b19bc2166a8dd8786df9e239c0d2dba9f72fae76b613da type=CONTAINER_STARTED_EVENT Jul 7 00:20:05.272873 containerd[2793]: time="2025-07-07T00:20:05.272831820Z" level=warning msg="container event discarded" container=b8148b3892f40af515df0d090fd06fe336c90d2c43b3e4101393491be21fea22 type=CONTAINER_CREATED_EVENT Jul 7 00:20:05.321044 containerd[2793]: time="2025-07-07T00:20:05.321011160Z" level=warning msg="container event discarded" container=b8148b3892f40af515df0d090fd06fe336c90d2c43b3e4101393491be21fea22 type=CONTAINER_STARTED_EVENT Jul 7 00:20:06.212359 containerd[2793]: time="2025-07-07T00:20:06.212315348Z" level=warning msg="container event discarded" container=ab9a0f16ec026fc53a6107fbcd718d15789676e3e88174119c716dc0c4435d7d type=CONTAINER_CREATED_EVENT Jul 7 00:20:06.267552 containerd[2793]: 
time="2025-07-07T00:20:06.267505217Z" level=warning msg="container event discarded" container=ab9a0f16ec026fc53a6107fbcd718d15789676e3e88174119c716dc0c4435d7d type=CONTAINER_STARTED_EVENT Jul 7 00:20:06.652193 containerd[2793]: time="2025-07-07T00:20:06.652148335Z" level=warning msg="container event discarded" container=ab9a0f16ec026fc53a6107fbcd718d15789676e3e88174119c716dc0c4435d7d type=CONTAINER_STOPPED_EVENT Jul 7 00:20:08.851993 containerd[2793]: time="2025-07-07T00:20:08.851951151Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"ddb5441747e4447b1080042db8a975d30de58e31bc75b1c4dc10d3fea2ab5efe\" pid:9626 exited_at:{seconds:1751847608 nanos:851692590}" Jul 7 00:20:09.424714 containerd[2793]: time="2025-07-07T00:20:09.424642943Z" level=warning msg="container event discarded" container=de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c type=CONTAINER_CREATED_EVENT Jul 7 00:20:09.481893 containerd[2793]: time="2025-07-07T00:20:09.481859174Z" level=warning msg="container event discarded" container=de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c type=CONTAINER_STARTED_EVENT Jul 7 00:20:09.997062 containerd[2793]: time="2025-07-07T00:20:09.997029414Z" level=warning msg="container event discarded" container=de8681d178b96249f5b354ad2d152658be57839f3cfeae8b275af372b6cb0f5c type=CONTAINER_STOPPED_EVENT Jul 7 00:20:14.173750 containerd[2793]: time="2025-07-07T00:20:14.173674766Z" level=warning msg="container event discarded" container=24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be type=CONTAINER_CREATED_EVENT Jul 7 00:20:14.241983 containerd[2793]: time="2025-07-07T00:20:14.241955931Z" level=warning msg="container event discarded" container=24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be type=CONTAINER_STARTED_EVENT Jul 7 00:20:15.115882 containerd[2793]: time="2025-07-07T00:20:15.115838297Z" level=warning msg="container event discarded" container=0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa type=CONTAINER_CREATED_EVENT Jul 7 00:20:15.115882 containerd[2793]: time="2025-07-07T00:20:15.115862937Z" level=warning msg="container event discarded" container=0a983f2e87172fa8bba818214a5de6046d18673ee162d875c332acf2015d0efa type=CONTAINER_STARTED_EVENT Jul 7 00:20:16.351717 containerd[2793]: time="2025-07-07T00:20:16.351674592Z" level=warning msg="container event discarded" container=ab9c40d3b257e9b2f3c00bff3c644e8832b4f9c2dcd66e6d0270389ffc3687f3 type=CONTAINER_CREATED_EVENT Jul 7 00:20:16.415983 containerd[2793]: time="2025-07-07T00:20:16.415949712Z" level=warning msg="container event discarded" container=ab9c40d3b257e9b2f3c00bff3c644e8832b4f9c2dcd66e6d0270389ffc3687f3 type=CONTAINER_STARTED_EVENT Jul 7 00:20:16.655429 containerd[2793]: time="2025-07-07T00:20:16.655347050Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"02ad248b18f8941de6f39b6d3c87113f249d5c74e30f43025a1efbfca0336d93\" pid:9663 exited_at:{seconds:1751847616 nanos:655157889}" Jul 7 00:20:17.786423 containerd[2793]: time="2025-07-07T00:20:17.786378375Z" level=warning msg="container event discarded" container=7a641b5c963eb4a51181171a7f117d8e5cf8bd0015d4799497fc63b153f3e38b type=CONTAINER_CREATED_EVENT Jul 7 00:20:17.847726 containerd[2793]: time="2025-07-07T00:20:17.847685771Z" level=warning msg="container event discarded" 
container=7a641b5c963eb4a51181171a7f117d8e5cf8bd0015d4799497fc63b153f3e38b type=CONTAINER_STARTED_EVENT Jul 7 00:20:21.720517 containerd[2793]: time="2025-07-07T00:20:21.720458582Z" level=warning msg="container event discarded" container=9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1 type=CONTAINER_CREATED_EVENT Jul 7 00:20:21.720517 containerd[2793]: time="2025-07-07T00:20:21.720495622Z" level=warning msg="container event discarded" container=9a9b24e31bf7d37df47a58e6131a5737e8d7811fb2f56b5ec0964d77e73082d1 type=CONTAINER_STARTED_EVENT Jul 7 00:20:21.731692 containerd[2793]: time="2025-07-07T00:20:21.731644956Z" level=warning msg="container event discarded" container=5e7aab653be7906efd77bf6168e1291a3fbc110dd5044bfc68888ff76d3dbdb6 type=CONTAINER_CREATED_EVENT Jul 7 00:20:21.785001 containerd[2793]: time="2025-07-07T00:20:21.784951502Z" level=warning msg="container event discarded" container=5e7aab653be7906efd77bf6168e1291a3fbc110dd5044bfc68888ff76d3dbdb6 type=CONTAINER_STARTED_EVENT Jul 7 00:20:21.785001 containerd[2793]: time="2025-07-07T00:20:21.784976662Z" level=warning msg="container event discarded" container=122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f type=CONTAINER_CREATED_EVENT Jul 7 00:20:21.785001 containerd[2793]: time="2025-07-07T00:20:21.784985582Z" level=warning msg="container event discarded" container=122ffb870ada758a24377b3ec901f963e6b00467649eca3b41e2d3b315b0175f type=CONTAINER_STARTED_EVENT Jul 7 00:20:22.683589 containerd[2793]: time="2025-07-07T00:20:22.683541378Z" level=warning msg="container event discarded" container=db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf type=CONTAINER_CREATED_EVENT Jul 7 00:20:22.683589 containerd[2793]: time="2025-07-07T00:20:22.683572938Z" level=warning msg="container event discarded" container=db3e4262ad67aeb5b0ff248e5d75c8760673522942f631a59d325a9aa58b82bf type=CONTAINER_STARTED_EVENT Jul 7 00:20:23.274136 containerd[2793]: time="2025-07-07T00:20:23.274077791Z" level=warning msg="container event discarded" container=d8a034471d3697b8494d987bdd539e2d8ba1a18f56e5dcc07296b88c1ea47335 type=CONTAINER_CREATED_EVENT Jul 7 00:20:23.329408 containerd[2793]: time="2025-07-07T00:20:23.329366620Z" level=warning msg="container event discarded" container=d8a034471d3697b8494d987bdd539e2d8ba1a18f56e5dcc07296b88c1ea47335 type=CONTAINER_STARTED_EVENT Jul 7 00:20:23.706986 containerd[2793]: time="2025-07-07T00:20:23.706938169Z" level=warning msg="container event discarded" container=843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b type=CONTAINER_CREATED_EVENT Jul 7 00:20:23.706986 containerd[2793]: time="2025-07-07T00:20:23.706969049Z" level=warning msg="container event discarded" container=843dc2645034457719a5bc701eb5b7efa9cafb4287de6388c6eaf646ae259b5b type=CONTAINER_STARTED_EVENT Jul 7 00:20:23.790311 containerd[2793]: time="2025-07-07T00:20:23.790264153Z" level=warning msg="container event discarded" container=8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf type=CONTAINER_CREATED_EVENT Jul 7 00:20:23.790311 containerd[2793]: time="2025-07-07T00:20:23.790286593Z" level=warning msg="container event discarded" container=8dd2a64c3e5d9ef9f2824ea21482e5db08b112ede311afecd74836210d319bbf type=CONTAINER_STARTED_EVENT Jul 7 00:20:23.790311 containerd[2793]: time="2025-07-07T00:20:23.790293513Z" level=warning msg="container event discarded" container=f0c56f50c490d3267bdece6490b4f9b6b31307c24057ea51419cffd0af8761ab type=CONTAINER_CREATED_EVENT Jul 7 00:20:23.853510 
containerd[2793]: time="2025-07-07T00:20:23.853471791Z" level=warning msg="container event discarded" container=f0c56f50c490d3267bdece6490b4f9b6b31307c24057ea51419cffd0af8761ab type=CONTAINER_STARTED_EVENT Jul 7 00:20:24.696874 containerd[2793]: time="2025-07-07T00:20:24.696821678Z" level=warning msg="container event discarded" container=4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89 type=CONTAINER_CREATED_EVENT Jul 7 00:20:24.696874 containerd[2793]: time="2025-07-07T00:20:24.696851038Z" level=warning msg="container event discarded" container=4de288ca2f9803788889727a0da7210d8357ffa380d822492004ccbef75bef89 type=CONTAINER_STARTED_EVENT Jul 7 00:20:24.719050 containerd[2793]: time="2025-07-07T00:20:24.719012826Z" level=warning msg="container event discarded" container=2a33ca64dcb4acc11e3f4450865ae5cf9085b4b0f5aa49c9756f3caa0eadb654 type=CONTAINER_CREATED_EVENT Jul 7 00:20:24.768336 containerd[2793]: time="2025-07-07T00:20:24.768294807Z" level=warning msg="container event discarded" container=2a33ca64dcb4acc11e3f4450865ae5cf9085b4b0f5aa49c9756f3caa0eadb654 type=CONTAINER_STARTED_EVENT Jul 7 00:20:24.918617 containerd[2793]: time="2025-07-07T00:20:24.918584874Z" level=warning msg="container event discarded" container=5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555 type=CONTAINER_CREATED_EVENT Jul 7 00:20:24.983960 containerd[2793]: time="2025-07-07T00:20:24.983883835Z" level=warning msg="container event discarded" container=5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555 type=CONTAINER_STARTED_EVENT Jul 7 00:20:25.682246 containerd[2793]: time="2025-07-07T00:20:25.682194302Z" level=warning msg="container event discarded" container=183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223 type=CONTAINER_CREATED_EVENT Jul 7 00:20:25.682246 containerd[2793]: time="2025-07-07T00:20:25.682226062Z" level=warning msg="container event discarded" container=183a8412053e2526e8c8b9e42b1f2d78cdff05ca983e8887db6eb6f0eef63223 type=CONTAINER_STARTED_EVENT Jul 7 00:20:26.543720 containerd[2793]: time="2025-07-07T00:20:26.543677692Z" level=warning msg="container event discarded" container=d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da type=CONTAINER_CREATED_EVENT Jul 7 00:20:26.600892 containerd[2793]: time="2025-07-07T00:20:26.600865443Z" level=warning msg="container event discarded" container=d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da type=CONTAINER_STARTED_EVENT Jul 7 00:20:27.516037 containerd[2793]: time="2025-07-07T00:20:27.516005019Z" level=warning msg="container event discarded" container=2b920c5b316f0deec257706a3c69e2bdc82985fb10725d9cff46d8d7f86a947b type=CONTAINER_CREATED_EVENT Jul 7 00:20:27.576217 containerd[2793]: time="2025-07-07T00:20:27.576186534Z" level=warning msg="container event discarded" container=2b920c5b316f0deec257706a3c69e2bdc82985fb10725d9cff46d8d7f86a947b type=CONTAINER_STARTED_EVENT Jul 7 00:20:28.634782 containerd[2793]: time="2025-07-07T00:20:28.634729168Z" level=warning msg="container event discarded" container=70af949661e46120c354c344dfdb67f6c17204b93c01b91da1b5da0d19488c74 type=CONTAINER_CREATED_EVENT Jul 7 00:20:28.698987 containerd[2793]: time="2025-07-07T00:20:28.698949088Z" level=warning msg="container event discarded" container=70af949661e46120c354c344dfdb67f6c17204b93c01b91da1b5da0d19488c74 type=CONTAINER_STARTED_EVENT Jul 7 00:20:29.454433 containerd[2793]: time="2025-07-07T00:20:29.454390706Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"54287fb57fd6d6ae22ade90f88661dd2aca346721081bb7576569edab1f3e4a9\" pid:9701 exited_at:{seconds:1751847629 nanos:454096945}" Jul 7 00:20:30.289305 containerd[2793]: time="2025-07-07T00:20:30.289276782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"3ae0c178261ebaf38ee92eca5a557fa4f988eb163974097f46193f3a9da346c7\" pid:9722 exited_at:{seconds:1751847630 nanos:289138302}" Jul 7 00:20:35.639536 containerd[2793]: time="2025-07-07T00:20:35.639498344Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"850221079df6cdad2d537e2b9b56069ac9217a865130c51145a99b732d4b692a\" pid:9744 exited_at:{seconds:1751847635 nanos:639262943}" Jul 7 00:20:38.849998 containerd[2793]: time="2025-07-07T00:20:38.849953128Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"1dee38c45d1317e9bb446f1a5aa44f008d394442d5d8913e8633894e5062e0c4\" pid:9789 exited_at:{seconds:1751847638 nanos:849730688}" Jul 7 00:20:46.660703 containerd[2793]: time="2025-07-07T00:20:46.660649660Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"6b4454e13a1e31b46dec718c587d084102e302a0479fd97b8869118130735daf\" pid:9830 exited_at:{seconds:1751847646 nanos:660360020}" Jul 7 00:20:59.461387 containerd[2793]: time="2025-07-07T00:20:59.461350580Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"a0831c7b377c8ad53023522660ee1c1427da10da6051844891b28fbdfb5d8fa2\" pid:9870 exited_at:{seconds:1751847659 nanos:461169459}" Jul 7 00:21:08.845413 containerd[2793]: time="2025-07-07T00:21:08.845370178Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"1b91092ed26e1bc602b945281ce12eafbc2c2c57c0d2209a417ca52324a68788\" pid:9893 exited_at:{seconds:1751847668 nanos:845174378}" Jul 7 00:21:16.658992 containerd[2793]: time="2025-07-07T00:21:16.658938948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"6eb8746173a9c195999a352d71faddf80af3a778093210ebdc669e9b14057656\" pid:9937 exited_at:{seconds:1751847676 nanos:658682427}" Jul 7 00:21:29.460368 containerd[2793]: time="2025-07-07T00:21:29.460321621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"204c74f8550571d7d62dc803c5785077b27b362de85b72859604d928790d5871\" pid:9989 exited_at:{seconds:1751847689 nanos:460170780}" Jul 7 00:21:30.285314 containerd[2793]: time="2025-07-07T00:21:30.285272683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"126baedd941c481bde80a13c66dbdcea38df2993b367c6605d8dff55ae11cc60\" pid:10011 exited_at:{seconds:1751847690 nanos:285111363}" Jul 7 00:21:35.644699 containerd[2793]: time="2025-07-07T00:21:35.644636208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" 
id:\"79da02bd87b2e647b4ed9af9948f7c172b92b2563a26dccc406f3831e39c738f\" pid:10033 exited_at:{seconds:1751847695 nanos:644425208}" Jul 7 00:21:38.849282 containerd[2793]: time="2025-07-07T00:21:38.849225741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"46460f8f868173cfc8396cc98094acd1423efde716acfce5d3c462a73007e96d\" pid:10071 exited_at:{seconds:1751847698 nanos:849038741}" Jul 7 00:21:46.659264 containerd[2793]: time="2025-07-07T00:21:46.659176544Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"e995149ccbc762a49249ff56b5054254d89877ead1b2cd34d3c271519e990b95\" pid:10132 exited_at:{seconds:1751847706 nanos:658910224}" Jul 7 00:21:59.463684 containerd[2793]: time="2025-07-07T00:21:59.463639178Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"a66e3e6175de5f9602ace12e81621d781eadfb8b2df906c276b8de81542cf367\" pid:10169 exited_at:{seconds:1751847719 nanos:463441218}" Jul 7 00:22:08.859712 containerd[2793]: time="2025-07-07T00:22:08.859670186Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"1fb9552f973073d2e564dc75775894a14e829172a397535e07f0470904eff4c4\" pid:10191 exited_at:{seconds:1751847728 nanos:859471706}" Jul 7 00:22:16.659009 containerd[2793]: time="2025-07-07T00:22:16.658966455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"83e64b6a47a894c3e5e234b42be05f9bee787a3c7bfb0e403d5baad1e8466a22\" pid:10231 exited_at:{seconds:1751847736 nanos:658738654}" Jul 7 00:22:29.466390 containerd[2793]: time="2025-07-07T00:22:29.466337436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"fffb0d07dc67047e41f9d9bb5759df7dd38bde863b4b72f830912ff7baed17de\" pid:10268 exited_at:{seconds:1751847749 nanos:466140797}" Jul 7 00:22:30.289545 containerd[2793]: time="2025-07-07T00:22:30.289495144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"8087b7292d590445bde4756b725d7fd277ec30df00d0d16e5a66c9a937e61410\" pid:10289 exited_at:{seconds:1751847750 nanos:289308464}" Jul 7 00:22:35.644633 containerd[2793]: time="2025-07-07T00:22:35.644587572Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"88f01dcc3608a28b21ab8de7009ce82feec1204516e1bd07472f978d10a6e98b\" pid:10310 exited_at:{seconds:1751847755 nanos:644341772}" Jul 7 00:22:38.850945 containerd[2793]: time="2025-07-07T00:22:38.850901489Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"64d10bf29b9571fe0d8d4a17029308ee524a868ddb9c61c64961400d35e8af57\" pid:10349 exited_at:{seconds:1751847758 nanos:850697650}" Jul 7 00:22:46.651686 containerd[2793]: time="2025-07-07T00:22:46.651610839Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"43979faf47224f7c27ea01548203650528eb88671dbfd68dea664d51ea409405\" pid:10389 exited_at:{seconds:1751847766 
nanos:651314519}" Jul 7 00:22:51.187564 systemd[1]: Started sshd@7-147.28.143.210:22-147.75.109.163:56392.service - OpenSSH per-connection server daemon (147.75.109.163:56392). Jul 7 00:22:51.451976 sshd[10419]: Accepted publickey for core from 147.75.109.163 port 56392 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 00:22:51.453125 sshd-session[10419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:22:51.456667 systemd-logind[2773]: New session 10 of user core. Jul 7 00:22:51.466793 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:22:51.715402 sshd[10421]: Connection closed by 147.75.109.163 port 56392 Jul 7 00:22:51.715652 sshd-session[10419]: pam_unix(sshd:session): session closed for user core Jul 7 00:22:51.718637 systemd[1]: sshd@7-147.28.143.210:22-147.75.109.163:56392.service: Deactivated successfully. Jul 7 00:22:51.720817 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:22:51.721427 systemd-logind[2773]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:22:51.722263 systemd-logind[2773]: Removed session 10. Jul 7 00:22:56.771442 systemd[1]: Started sshd@8-147.28.143.210:22-147.75.109.163:39626.service - OpenSSH per-connection server daemon (147.75.109.163:39626). Jul 7 00:22:57.063426 sshd[10463]: Accepted publickey for core from 147.75.109.163 port 39626 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 00:22:57.064732 sshd-session[10463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:22:57.068059 systemd-logind[2773]: New session 11 of user core. Jul 7 00:22:57.085864 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:22:57.335911 sshd[10465]: Connection closed by 147.75.109.163 port 39626 Jul 7 00:22:57.336219 sshd-session[10463]: pam_unix(sshd:session): session closed for user core Jul 7 00:22:57.339360 systemd[1]: sshd@8-147.28.143.210:22-147.75.109.163:39626.service: Deactivated successfully. Jul 7 00:22:57.340879 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:22:57.341463 systemd-logind[2773]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:22:57.342364 systemd-logind[2773]: Removed session 11. Jul 7 00:22:59.462637 containerd[2793]: time="2025-07-07T00:22:59.462600581Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d78d54853fb9a608aaa05fd402b454254f0b227ebf609756836307fc246e69da\" id:\"b6aa60d3a22561dcf44a6c21fabcbfc046af404cf5c673fc820e69c63d1f481f\" pid:10517 exited_at:{seconds:1751847779 nanos:462428541}" Jul 7 00:23:02.393409 systemd[1]: Started sshd@9-147.28.143.210:22-147.75.109.163:39638.service - OpenSSH per-connection server daemon (147.75.109.163:39638). Jul 7 00:23:02.700223 sshd[10528]: Accepted publickey for core from 147.75.109.163 port 39638 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 00:23:02.701334 sshd-session[10528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:23:02.704478 systemd-logind[2773]: New session 12 of user core. Jul 7 00:23:02.714808 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:23:02.970402 sshd[10531]: Connection closed by 147.75.109.163 port 39638 Jul 7 00:23:02.970641 sshd-session[10528]: pam_unix(sshd:session): session closed for user core Jul 7 00:23:02.973538 systemd[1]: sshd@9-147.28.143.210:22-147.75.109.163:39638.service: Deactivated successfully. 
Jul 7 00:23:02.975680 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 00:23:02.976246 systemd-logind[2773]: Session 12 logged out. Waiting for processes to exit.
Jul 7 00:23:02.977029 systemd-logind[2773]: Removed session 12.
Jul 7 00:23:03.028312 systemd[1]: Started sshd@10-147.28.143.210:22-147.75.109.163:39652.service - OpenSSH per-connection server daemon (147.75.109.163:39652).
Jul 7 00:23:03.291325 sshd[10569]: Accepted publickey for core from 147.75.109.163 port 39652 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 00:23:03.292410 sshd-session[10569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:23:03.295340 systemd-logind[2773]: New session 13 of user core.
Jul 7 00:23:03.309763 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 00:23:03.570237 sshd[10571]: Connection closed by 147.75.109.163 port 39652
Jul 7 00:23:03.570487 sshd-session[10569]: pam_unix(sshd:session): session closed for user core
Jul 7 00:23:03.573393 systemd[1]: sshd@10-147.28.143.210:22-147.75.109.163:39652.service: Deactivated successfully.
Jul 7 00:23:03.575014 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 00:23:03.575594 systemd-logind[2773]: Session 13 logged out. Waiting for processes to exit.
Jul 7 00:23:03.576378 systemd-logind[2773]: Removed session 13.
Jul 7 00:23:03.626324 systemd[1]: Started sshd@11-147.28.143.210:22-147.75.109.163:39654.service - OpenSSH per-connection server daemon (147.75.109.163:39654).
Jul 7 00:23:03.926838 sshd[10611]: Accepted publickey for core from 147.75.109.163 port 39654 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 00:23:03.927977 sshd-session[10611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:23:03.931099 systemd-logind[2773]: New session 14 of user core.
Jul 7 00:23:03.942783 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 00:23:04.199426 sshd[10613]: Connection closed by 147.75.109.163 port 39654
Jul 7 00:23:04.199714 sshd-session[10611]: pam_unix(sshd:session): session closed for user core
Jul 7 00:23:04.202570 systemd[1]: sshd@11-147.28.143.210:22-147.75.109.163:39654.service: Deactivated successfully.
Jul 7 00:23:04.204919 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 00:23:04.205581 systemd-logind[2773]: Session 14 logged out. Waiting for processes to exit.
Jul 7 00:23:04.206465 systemd-logind[2773]: Removed session 14.
Jul 7 00:23:08.850377 containerd[2793]: time="2025-07-07T00:23:08.850337431Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a53069ddda669de6a7eaf64f6e91800d46d86fb1cbb49408d13ac953860d555\" id:\"ab1d5816dc2f1b484c71196c03814ad8d6ec7c2ad3ffa0a376f7a5055136f90c\" pid:10662 exited_at:{seconds:1751847788 nanos:850084431}"
Jul 7 00:23:09.249462 systemd[1]: Started sshd@12-147.28.143.210:22-147.75.109.163:44240.service - OpenSSH per-connection server daemon (147.75.109.163:44240).
Jul 7 00:23:09.513932 sshd[10690]: Accepted publickey for core from 147.75.109.163 port 44240 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 00:23:09.515054 sshd-session[10690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:23:09.518232 systemd-logind[2773]: New session 15 of user core.
Jul 7 00:23:09.535775 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 00:23:09.770363 sshd[10692]: Connection closed by 147.75.109.163 port 44240
Jul 7 00:23:09.770683 sshd-session[10690]: pam_unix(sshd:session): session closed for user core
Jul 7 00:23:09.773684 systemd[1]: sshd@12-147.28.143.210:22-147.75.109.163:44240.service: Deactivated successfully.
Jul 7 00:23:09.776224 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 00:23:09.776832 systemd-logind[2773]: Session 15 logged out. Waiting for processes to exit.
Jul 7 00:23:09.777617 systemd-logind[2773]: Removed session 15.
Jul 7 00:23:09.828463 systemd[1]: Started sshd@13-147.28.143.210:22-147.75.109.163:44246.service - OpenSSH per-connection server daemon (147.75.109.163:44246).
Jul 7 00:23:10.091164 sshd[10730]: Accepted publickey for core from 147.75.109.163 port 44246 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 00:23:10.092363 sshd-session[10730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:23:10.095394 systemd-logind[2773]: New session 16 of user core.
Jul 7 00:23:10.113817 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 00:23:10.481678 sshd[10732]: Connection closed by 147.75.109.163 port 44246
Jul 7 00:23:10.482047 sshd-session[10730]: pam_unix(sshd:session): session closed for user core
Jul 7 00:23:10.485118 systemd[1]: sshd@13-147.28.143.210:22-147.75.109.163:44246.service: Deactivated successfully.
Jul 7 00:23:10.487168 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 00:23:10.487773 systemd-logind[2773]: Session 16 logged out. Waiting for processes to exit.
Jul 7 00:23:10.488564 systemd-logind[2773]: Removed session 16.
Jul 7 00:23:10.541336 systemd[1]: Started sshd@14-147.28.143.210:22-147.75.109.163:44250.service - OpenSSH per-connection server daemon (147.75.109.163:44250).
Jul 7 00:23:10.835212 sshd[10759]: Accepted publickey for core from 147.75.109.163 port 44250 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 00:23:10.836321 sshd-session[10759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:23:10.839391 systemd-logind[2773]: New session 17 of user core.
Jul 7 00:23:10.860765 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 00:23:11.808274 sshd[10761]: Connection closed by 147.75.109.163 port 44250
Jul 7 00:23:11.808650 sshd-session[10759]: pam_unix(sshd:session): session closed for user core
Jul 7 00:23:11.811629 systemd[1]: sshd@14-147.28.143.210:22-147.75.109.163:44250.service: Deactivated successfully.
Jul 7 00:23:11.814175 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 00:23:11.814774 systemd-logind[2773]: Session 17 logged out. Waiting for processes to exit.
Jul 7 00:23:11.815500 systemd-logind[2773]: Removed session 17.
Jul 7 00:23:11.866221 systemd[1]: Started sshd@15-147.28.143.210:22-147.75.109.163:44258.service - OpenSSH per-connection server daemon (147.75.109.163:44258).
Jul 7 00:23:12.133527 sshd[10819]: Accepted publickey for core from 147.75.109.163 port 44258 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 00:23:12.134708 sshd-session[10819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:23:12.137950 systemd-logind[2773]: New session 18 of user core.
Jul 7 00:23:12.162831 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 00:23:12.481569 sshd[10821]: Connection closed by 147.75.109.163 port 44258
Jul 7 00:23:12.481874 sshd-session[10819]: pam_unix(sshd:session): session closed for user core
Jul 7 00:23:12.484857 systemd[1]: sshd@15-147.28.143.210:22-147.75.109.163:44258.service: Deactivated successfully.
Jul 7 00:23:12.486429 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 00:23:12.487022 systemd-logind[2773]: Session 18 logged out. Waiting for processes to exit.
Jul 7 00:23:12.487813 systemd-logind[2773]: Removed session 18.
Jul 7 00:23:12.543221 systemd[1]: Started sshd@16-147.28.143.210:22-147.75.109.163:44270.service - OpenSSH per-connection server daemon (147.75.109.163:44270).
Jul 7 00:23:12.808340 sshd[10869]: Accepted publickey for core from 147.75.109.163 port 44270 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 00:23:12.809421 sshd-session[10869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:23:12.812431 systemd-logind[2773]: New session 19 of user core.
Jul 7 00:23:12.833813 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 00:23:13.063104 sshd[10872]: Connection closed by 147.75.109.163 port 44270
Jul 7 00:23:13.063388 sshd-session[10869]: pam_unix(sshd:session): session closed for user core
Jul 7 00:23:13.066324 systemd[1]: sshd@16-147.28.143.210:22-147.75.109.163:44270.service: Deactivated successfully.
Jul 7 00:23:13.067875 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 00:23:13.068446 systemd-logind[2773]: Session 19 logged out. Waiting for processes to exit.
Jul 7 00:23:13.069238 systemd-logind[2773]: Removed session 19.
Jul 7 00:23:16.658231 containerd[2793]: time="2025-07-07T00:23:16.658183106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24ceabf0d47ff5b2708d52fbf1f9a74a20e5f96adf03085c6da651b5c661d5be\" id:\"b6b5ac6b7bf7f671066485feb34d2f1cb3a84a9cac51276b7c20981f653152ab\" pid:10936 exited_at:{seconds:1751847796 nanos:657855906}"
Jul 7 00:23:18.120364 systemd[1]: Started sshd@17-147.28.143.210:22-147.75.109.163:56632.service - OpenSSH per-connection server daemon (147.75.109.163:56632).
Jul 7 00:23:18.411760 sshd[10960]: Accepted publickey for core from 147.75.109.163 port 56632 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 00:23:18.412886 sshd-session[10960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:23:18.416002 systemd-logind[2773]: New session 20 of user core.
Jul 7 00:23:18.438812 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 00:23:18.680430 sshd[10962]: Connection closed by 147.75.109.163 port 56632
Jul 7 00:23:18.680726 sshd-session[10960]: pam_unix(sshd:session): session closed for user core
Jul 7 00:23:18.683629 systemd[1]: sshd@17-147.28.143.210:22-147.75.109.163:56632.service: Deactivated successfully.
Jul 7 00:23:18.685798 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 00:23:18.686387 systemd-logind[2773]: Session 20 logged out. Waiting for processes to exit.
Jul 7 00:23:18.687195 systemd-logind[2773]: Removed session 20.
Jul 7 00:23:23.738508 systemd[1]: Started sshd@18-147.28.143.210:22-147.75.109.163:56642.service - OpenSSH per-connection server daemon (147.75.109.163:56642).
Jul 7 00:23:24.027731 sshd[10999]: Accepted publickey for core from 147.75.109.163 port 56642 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 00:23:24.028779 sshd-session[10999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:23:24.031883 systemd-logind[2773]: New session 21 of user core.
Jul 7 00:23:24.053775 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 00:23:24.297376 sshd[11002]: Connection closed by 147.75.109.163 port 56642
Jul 7 00:23:24.297671 sshd-session[10999]: pam_unix(sshd:session): session closed for user core
Jul 7 00:23:24.300553 systemd[1]: sshd@18-147.28.143.210:22-147.75.109.163:56642.service: Deactivated successfully.
Jul 7 00:23:24.302275 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 00:23:24.302857 systemd-logind[2773]: Session 21 logged out. Waiting for processes to exit.
Jul 7 00:23:24.303630 systemd-logind[2773]: Removed session 21.