Sep 11 00:32:54.705844 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 22:25:29 -00 2025
Sep 11 00:32:54.705861 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:32:54.705868 kernel: Disabled fast string operations
Sep 11 00:32:54.705872 kernel: BIOS-provided physical RAM map:
Sep 11 00:32:54.705876 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Sep 11 00:32:54.705881 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Sep 11 00:32:54.705886 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Sep 11 00:32:54.705891 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Sep 11 00:32:54.705895 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Sep 11 00:32:54.705899 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Sep 11 00:32:54.705903 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Sep 11 00:32:54.705908 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Sep 11 00:32:54.705912 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Sep 11 00:32:54.705916 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Sep 11 00:32:54.705923 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Sep 11 00:32:54.705928 kernel: NX (Execute Disable) protection: active
Sep 11 00:32:54.705932 kernel: APIC: Static calls initialized
Sep 11 00:32:54.705937 kernel: SMBIOS 2.7 present.
Sep 11 00:32:54.705942 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Sep 11 00:32:54.705947 kernel: DMI: Memory slots populated: 1/128
Sep 11 00:32:54.705953 kernel: vmware: hypercall mode: 0x00
Sep 11 00:32:54.705958 kernel: Hypervisor detected: VMware
Sep 11 00:32:54.705963 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Sep 11 00:32:54.705968 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Sep 11 00:32:54.705973 kernel: vmware: using clock offset of 4358952792 ns
Sep 11 00:32:54.705978 kernel: tsc: Detected 3408.000 MHz processor
Sep 11 00:32:54.705983 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 11 00:32:54.705989 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 11 00:32:54.705994 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Sep 11 00:32:54.705999 kernel: total RAM covered: 3072M
Sep 11 00:32:54.706005 kernel: Found optimal setting for mtrr clean up
Sep 11 00:32:54.706010 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Sep 11 00:32:54.706015 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Sep 11 00:32:54.706020 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 11 00:32:54.706025 kernel: Using GB pages for direct mapping
Sep 11 00:32:54.706030 kernel: ACPI: Early table checksum verification disabled
Sep 11 00:32:54.706035 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Sep 11 00:32:54.706040 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Sep 11 00:32:54.706045 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Sep 11 00:32:54.706051 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Sep 11 00:32:54.706058 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Sep 11 00:32:54.706063 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Sep 11 00:32:54.706068 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Sep 11 00:32:54.706073 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Sep 11 00:32:54.706080 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Sep 11 00:32:54.706085 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Sep 11 00:32:54.706091 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Sep 11 00:32:54.706096 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Sep 11 00:32:54.706102 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Sep 11 00:32:54.706107 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Sep 11 00:32:54.706112 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Sep 11 00:32:54.706117 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Sep 11 00:32:54.706122 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Sep 11 00:32:54.706129 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Sep 11 00:32:54.706134 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Sep 11 00:32:54.706139 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Sep 11 00:32:54.706144 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Sep 11 00:32:54.706149 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Sep 11 00:32:54.706155 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 11 00:32:54.706160 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 11 00:32:54.707753 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Sep 11 00:32:54.707766 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff]
Sep 11 00:32:54.707791 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff]
Sep 11 00:32:54.707800 kernel: Zone ranges:
Sep 11 00:32:54.707806 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 11 00:32:54.707811 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Sep 11 00:32:54.707817 kernel: Normal empty
Sep 11 00:32:54.707822 kernel: Device empty
Sep 11 00:32:54.707827 kernel: Movable zone start for each node
Sep 11 00:32:54.707832 kernel: Early memory node ranges
Sep 11 00:32:54.707838 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Sep 11 00:32:54.707843 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Sep 11 00:32:54.707850 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Sep 11 00:32:54.707856 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Sep 11 00:32:54.707861 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 11 00:32:54.707866 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Sep 11 00:32:54.707871 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Sep 11 00:32:54.707876 kernel: ACPI: PM-Timer IO Port: 0x1008
Sep 11 00:32:54.707881 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Sep 11 00:32:54.707887 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Sep 11 00:32:54.707892 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Sep 11 00:32:54.707897 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Sep 11 00:32:54.707904 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Sep 11 00:32:54.707909 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Sep 11 00:32:54.707914 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Sep 11 00:32:54.707919 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Sep 11 00:32:54.707924 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Sep 11 00:32:54.707929 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Sep 11 00:32:54.707934 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Sep 11 00:32:54.707939 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Sep 11 00:32:54.707944 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Sep 11 00:32:54.707961 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Sep 11 00:32:54.707966 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Sep 11 00:32:54.707972 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Sep 11 00:32:54.707977 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Sep 11 00:32:54.707982 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Sep 11 00:32:54.707987 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Sep 11 00:32:54.707992 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Sep 11 00:32:54.707997 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Sep 11 00:32:54.708002 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Sep 11 00:32:54.708008 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Sep 11 00:32:54.708014 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Sep 11 00:32:54.708019 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Sep 11 00:32:54.708024 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Sep 11 00:32:54.708029 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Sep 11 00:32:54.708035 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Sep 11 00:32:54.708040 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Sep 11 00:32:54.708045 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Sep 11 00:32:54.708050 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Sep 11 00:32:54.708056 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Sep 11 00:32:54.708061 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Sep 11 00:32:54.708067 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Sep 11 00:32:54.708072 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Sep 11 00:32:54.708078 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Sep 11 00:32:54.708083 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Sep 11 00:32:54.708088 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Sep 11 00:32:54.708093 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Sep 11 00:32:54.708099 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Sep 11 00:32:54.708108 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Sep 11 00:32:54.708113 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Sep 11 00:32:54.708119 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Sep 11 00:32:54.708124 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Sep 11 00:32:54.708131 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Sep 11 00:32:54.708136 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Sep 11 00:32:54.708142 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Sep 11 00:32:54.708147 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Sep 11 00:32:54.708153 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Sep 11 00:32:54.708158 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Sep 11 00:32:54.708201 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Sep 11 00:32:54.708209 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Sep 11 00:32:54.708215 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Sep 11 00:32:54.708221 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Sep 11 00:32:54.708226 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Sep 11 00:32:54.708232 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Sep 11 00:32:54.708237 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Sep 11 00:32:54.708243 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Sep 11 00:32:54.708248 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Sep 11 00:32:54.708254 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Sep 11 00:32:54.708259 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Sep 11 00:32:54.708266 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Sep 11 00:32:54.708271 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Sep 11 00:32:54.708277 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Sep 11 00:32:54.708282 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Sep 11 00:32:54.708287 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Sep 11 00:32:54.708293 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Sep 11 00:32:54.708298 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Sep 11 00:32:54.708304 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Sep 11 00:32:54.708309 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Sep 11 00:32:54.708315 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Sep 11 00:32:54.708321 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Sep 11 00:32:54.708326 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Sep 11 00:32:54.708332 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Sep 11 00:32:54.708337 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Sep 11 00:32:54.708343 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Sep 11 00:32:54.708348 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Sep 11 00:32:54.708354 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Sep 11 00:32:54.708359 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Sep 11 00:32:54.708364 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Sep 11 00:32:54.708371 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Sep 11 00:32:54.708376 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Sep 11 00:32:54.708382 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Sep 11 00:32:54.708387 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Sep 11 00:32:54.708392 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Sep 11 00:32:54.708398 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Sep 11 00:32:54.708403 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Sep 11 00:32:54.708409 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Sep 11 00:32:54.708414 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Sep 11 00:32:54.708420 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Sep 11 00:32:54.708426 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Sep 11 00:32:54.708432 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Sep 11 00:32:54.708437 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Sep 11 00:32:54.708442 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Sep 11 00:32:54.708448 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Sep 11 00:32:54.708453 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Sep 11 00:32:54.708459 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Sep 11 00:32:54.708464 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Sep 11 00:32:54.708470 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Sep 11 00:32:54.708475 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Sep 11 00:32:54.708482 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Sep 11 00:32:54.708487 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Sep 11 00:32:54.708492 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Sep 11 00:32:54.708498 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Sep 11 00:32:54.708503 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Sep 11 00:32:54.708509 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Sep 11 00:32:54.708514 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Sep 11 00:32:54.708520 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Sep 11 00:32:54.708525 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Sep 11 00:32:54.708531 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Sep 11 00:32:54.708537 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Sep 11 00:32:54.708543 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Sep 11 00:32:54.708548 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Sep 11 00:32:54.708553 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Sep 11 00:32:54.708559 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Sep 11 00:32:54.708564 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Sep 11 00:32:54.708569 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Sep 11 00:32:54.708575 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Sep 11 00:32:54.708581 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Sep 11 00:32:54.708587 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Sep 11 00:32:54.708593 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Sep 11 00:32:54.708598 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Sep 11 00:32:54.708604 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Sep 11 00:32:54.708609 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Sep 11 00:32:54.708614 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Sep 11 00:32:54.708620 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Sep 11 00:32:54.708625 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Sep 11 00:32:54.708631 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Sep 11 00:32:54.708636 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Sep 11 00:32:54.708643 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Sep 11 00:32:54.708649 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 11 00:32:54.708706 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Sep 11 00:32:54.708715 kernel: TSC deadline timer available
Sep 11 00:32:54.708720 kernel: CPU topo: Max. logical packages: 128
Sep 11 00:32:54.708726 kernel: CPU topo: Max. logical dies: 128
Sep 11 00:32:54.708732 kernel: CPU topo: Max. dies per package: 1
Sep 11 00:32:54.708737 kernel: CPU topo: Max. threads per core: 1
Sep 11 00:32:54.708743 kernel: CPU topo: Num. cores per package: 1
Sep 11 00:32:54.708748 kernel: CPU topo: Num. threads per package: 1
Sep 11 00:32:54.708756 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs
Sep 11 00:32:54.708761 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Sep 11 00:32:54.708767 kernel: Booting paravirtualized kernel on VMware hypervisor
Sep 11 00:32:54.708773 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 11 00:32:54.708779 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Sep 11 00:32:54.708784 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Sep 11 00:32:54.708790 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Sep 11 00:32:54.708796 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Sep 11 00:32:54.708801 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Sep 11 00:32:54.708808 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Sep 11 00:32:54.708813 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Sep 11 00:32:54.708819 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Sep 11 00:32:54.708824 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Sep 11 00:32:54.708829 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Sep 11 00:32:54.708835 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Sep 11 00:32:54.708840 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Sep 11 00:32:54.708845 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Sep 11 00:32:54.708851 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Sep 11 00:32:54.708858 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Sep 11 00:32:54.708863 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Sep 11 00:32:54.708869 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Sep 11 00:32:54.708874 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Sep 11 00:32:54.708880 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Sep 11 00:32:54.708886 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:32:54.708892 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 11 00:32:54.708899 kernel: random: crng init done
Sep 11 00:32:54.708905 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Sep 11 00:32:54.708910 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Sep 11 00:32:54.708915 kernel: printk: log_buf_len min size: 262144 bytes
Sep 11 00:32:54.708921 kernel: printk: log_buf_len: 1048576 bytes
Sep 11 00:32:54.708926 kernel: printk: early log buf free: 245576(93%)
Sep 11 00:32:54.708932 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 11 00:32:54.708938 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 11 00:32:54.708943 kernel: Fallback order for Node 0: 0
Sep 11 00:32:54.708949 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524157
Sep 11 00:32:54.708956 kernel: Policy zone: DMA32
Sep 11 00:32:54.708961 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 11 00:32:54.708967 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Sep 11 00:32:54.708972 kernel: ftrace: allocating 40103 entries in 157 pages
Sep 11 00:32:54.708978 kernel: ftrace: allocated 157 pages with 5 groups
Sep 11 00:32:54.708983 kernel: Dynamic Preempt: voluntary
Sep 11 00:32:54.708989 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 11 00:32:54.708995 kernel: rcu: RCU event tracing is enabled.
Sep 11 00:32:54.709000 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Sep 11 00:32:54.709007 kernel: Trampoline variant of Tasks RCU enabled.
Sep 11 00:32:54.709012 kernel: Rude variant of Tasks RCU enabled.
Sep 11 00:32:54.709018 kernel: Tracing variant of Tasks RCU enabled.
Sep 11 00:32:54.709023 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 11 00:32:54.709029 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Sep 11 00:32:54.709034 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 11 00:32:54.709040 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 11 00:32:54.709046 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 11 00:32:54.709051 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Sep 11 00:32:54.709058 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
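Editorial cross-check (an annotation, not part of the captured log): the "Total pages: 524157" figure above follows directly from the "Early memory node ranges" entries earlier in the log, counting 4 KiB pages over the inclusive [start, end] ranges the kernel printed. A minimal sketch:

```python
# Page counts from the "Early memory node ranges" log entries (inclusive ends).
PAGE_SIZE = 4096
node_ranges = [
    (0x0000000000001000, 0x000000000009dfff),
    (0x0000000000100000, 0x000000007fedffff),
    (0x000000007ff00000, 0x000000007fffffff),
]
pages = sum((end - start + 1) // PAGE_SIZE for start, end in node_ranges)
print(pages)  # 524157, matching "Built 1 zonelists ... Total pages: 524157"
```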
Sep 11 00:32:54.709063 kernel: Console: colour VGA+ 80x25
Sep 11 00:32:54.709069 kernel: printk: legacy console [tty0] enabled
Sep 11 00:32:54.709074 kernel: printk: legacy console [ttyS0] enabled
Sep 11 00:32:54.709080 kernel: ACPI: Core revision 20240827
Sep 11 00:32:54.709086 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Sep 11 00:32:54.709091 kernel: APIC: Switch to symmetric I/O mode setup
Sep 11 00:32:54.709097 kernel: x2apic enabled
Sep 11 00:32:54.709102 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 11 00:32:54.709108 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 11 00:32:54.709114 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Sep 11 00:32:54.709120 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Sep 11 00:32:54.709126 kernel: Disabled fast string operations
Sep 11 00:32:54.709131 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 11 00:32:54.709137 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 11 00:32:54.709142 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 11 00:32:54.709148 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Sep 11 00:32:54.709154 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 11 00:32:54.709160 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 11 00:32:54.710603 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 11 00:32:54.710610 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 11 00:32:54.710616 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 11 00:32:54.710622 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 11 00:32:54.710628 kernel: SRBDS: Unknown: Dependent on hypervisor status
Sep 11 00:32:54.710633 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 11 00:32:54.710639 kernel: active return thunk: its_return_thunk
Sep 11 00:32:54.710644 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 11 00:32:54.710652 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 11 00:32:54.710658 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 11 00:32:54.710663 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 11 00:32:54.710669 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 11 00:32:54.710675 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 11 00:32:54.710680 kernel: Freeing SMP alternatives memory: 32K
Sep 11 00:32:54.710686 kernel: pid_max: default: 131072 minimum: 1024
Sep 11 00:32:54.710692 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 11 00:32:54.710697 kernel: landlock: Up and running.
Sep 11 00:32:54.710704 kernel: SELinux: Initializing.
Sep 11 00:32:54.710710 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 11 00:32:54.710716 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 11 00:32:54.710721 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 11 00:32:54.710727 kernel: Performance Events: Skylake events, core PMU driver.
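Editorial aside (not part of the captured log): the "6816.00 BogoMIPS (lpj=3408000)" calibration line above is consistent with the standard relation BogoMIPS = lpj × HZ / 500000, assuming CONFIG_HZ=1000 (an assumption; the log does not print HZ, but the figures only agree for that value). It also equals twice the 3408.000 MHz TSC frequency the hypervisor reported earlier:

```python
lpj = 3_408_000  # loops_per_jiffy from the "Calibrating delay loop" line
HZ = 1000        # assumed CONFIG_HZ; not stated anywhere in the log
bogomips = lpj * HZ / 500_000
print(bogomips)  # 6816.0 per CPU; two CPUs give the 13632.00 total reported later
```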
Sep 11 00:32:54.710733 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Sep 11 00:32:54.710739 kernel: core: CPUID marked event: 'instructions' unavailable
Sep 11 00:32:54.710744 kernel: core: CPUID marked event: 'bus cycles' unavailable
Sep 11 00:32:54.710749 kernel: core: CPUID marked event: 'cache references' unavailable
Sep 11 00:32:54.710756 kernel: core: CPUID marked event: 'cache misses' unavailable
Sep 11 00:32:54.710761 kernel: core: CPUID marked event: 'branch instructions' unavailable
Sep 11 00:32:54.710767 kernel: core: CPUID marked event: 'branch misses' unavailable
Sep 11 00:32:54.710772 kernel: ... version: 1
Sep 11 00:32:54.710778 kernel: ... bit width: 48
Sep 11 00:32:54.710783 kernel: ... generic registers: 4
Sep 11 00:32:54.710789 kernel: ... value mask: 0000ffffffffffff
Sep 11 00:32:54.710794 kernel: ... max period: 000000007fffffff
Sep 11 00:32:54.710800 kernel: ... fixed-purpose events: 0
Sep 11 00:32:54.710806 kernel: ... event mask: 000000000000000f
Sep 11 00:32:54.710812 kernel: signal: max sigframe size: 1776
Sep 11 00:32:54.710817 kernel: rcu: Hierarchical SRCU implementation.
Sep 11 00:32:54.710824 kernel: rcu: Max phase no-delay instances is 400.
Sep 11 00:32:54.710829 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level
Sep 11 00:32:54.710835 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 11 00:32:54.710840 kernel: smp: Bringing up secondary CPUs ...
Sep 11 00:32:54.710846 kernel: smpboot: x86: Booting SMP configuration:
Sep 11 00:32:54.710851 kernel: .... node #0, CPUs: #1
Sep 11 00:32:54.710858 kernel: Disabled fast string operations
Sep 11 00:32:54.710863 kernel: smp: Brought up 1 node, 2 CPUs
Sep 11 00:32:54.710869 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Sep 11 00:32:54.710875 kernel: Memory: 1926272K/2096628K available (14336K kernel code, 2429K rwdata, 9960K rodata, 53832K init, 1088K bss, 158984K reserved, 0K cma-reserved)
Sep 11 00:32:54.710881 kernel: devtmpfs: initialized
Sep 11 00:32:54.710886 kernel: x86/mm: Memory block size: 128MB
Sep 11 00:32:54.710892 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Sep 11 00:32:54.710897 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 11 00:32:54.710903 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Sep 11 00:32:54.710910 kernel: pinctrl core: initialized pinctrl subsystem
Sep 11 00:32:54.710915 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 11 00:32:54.710921 kernel: audit: initializing netlink subsys (disabled)
Sep 11 00:32:54.710926 kernel: audit: type=2000 audit(1757550771.293:1): state=initialized audit_enabled=0 res=1
Sep 11 00:32:54.710932 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 11 00:32:54.710937 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 11 00:32:54.710943 kernel: cpuidle: using governor menu
Sep 11 00:32:54.710948 kernel: Simple Boot Flag at 0x36 set to 0x80
Sep 11 00:32:54.710954 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 11 00:32:54.710961 kernel: dca service started, version 1.12.1
Sep 11 00:32:54.710973 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f]
Sep 11 00:32:54.710980 kernel: PCI: Using configuration type 1 for base access
Sep 11 00:32:54.710986 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
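Editorial aside (not part of the captured log): the ECAM window size above is exactly what PCIe ECAM prescribes for buses 00-7f: 4 KiB of extended configuration space per function, 8 functions per device, 32 devices per bus, i.e. 1 MiB per bus. A quick check:

```python
ecam_start, ecam_end = 0xF0000000, 0xF7FFFFFF  # from the "PCI: ECAM" line above
ecam_size = ecam_end - ecam_start + 1
buses = 0x7F + 1                     # [bus 00-7f]
per_bus = 32 * 8 * 4096              # devices x functions x config-space bytes
print(ecam_size == buses * per_bus)  # True: 128 MiB total, 1 MiB per bus
```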
Sep 11 00:32:54.710992 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 11 00:32:54.710998 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 11 00:32:54.711004 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 11 00:32:54.711009 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 11 00:32:54.711015 kernel: ACPI: Added _OSI(Module Device)
Sep 11 00:32:54.711022 kernel: ACPI: Added _OSI(Processor Device)
Sep 11 00:32:54.711028 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 11 00:32:54.711034 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 11 00:32:54.711040 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Sep 11 00:32:54.711046 kernel: ACPI: Interpreter enabled
Sep 11 00:32:54.711052 kernel: ACPI: PM: (supports S0 S1 S5)
Sep 11 00:32:54.711057 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 11 00:32:54.711063 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 11 00:32:54.711069 kernel: PCI: Using E820 reservations for host bridge windows
Sep 11 00:32:54.711076 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Sep 11 00:32:54.711082 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Sep 11 00:32:54.711192 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 11 00:32:54.711253 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Sep 11 00:32:54.711303 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Sep 11 00:32:54.711312 kernel: PCI host bridge to bus 0000:00
Sep 11 00:32:54.711364 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 11 00:32:54.711413 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Sep 11 00:32:54.711457 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 11 00:32:54.711500 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 11 00:32:54.711544 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Sep 11 00:32:54.711587 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Sep 11 00:32:54.711646 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint
Sep 11 00:32:54.712624 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge
Sep 11 00:32:54.712682 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 11 00:32:54.712742 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Sep 11 00:32:54.712798 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint
Sep 11 00:32:54.712851 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f]
Sep 11 00:32:54.712902 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Sep 11 00:32:54.712952 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Sep 11 00:32:54.713002 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Sep 11 00:32:54.713051 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk
Sep 11 00:32:54.713105 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Sep 11 00:32:54.713156 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Sep 11 00:32:54.713218 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Sep 11 00:32:54.713273 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint
Sep 11 00:32:54.713323 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf]
Sep 11 00:32:54.713372 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit]
Sep 11 00:32:54.713429 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint
Sep 11 00:32:54.713479 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f]
Sep 11 00:32:54.713531 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref]
Sep 11 00:32:54.713581 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff]
Sep 11 00:32:54.713630 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref]
Sep 11 00:32:54.713679 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 11 00:32:54.713734 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge
Sep 11 00:32:54.713785 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Sep 11 00:32:54.713833 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Sep 11 00:32:54.713887 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Sep 11 00:32:54.713937 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Sep 11 00:32:54.713993 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 11 00:32:54.714044 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Sep 11 00:32:54.714094 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Sep 11 00:32:54.714145 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Sep 11 00:32:54.716236 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Sep 11 00:32:54.716311 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 11 00:32:54.716367 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Sep 11 00:32:54.716419 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Sep 11 00:32:54.716470 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Sep 11 00:32:54.716521 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Sep 11 00:32:54.716572 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Sep 11 00:32:54.716627 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 11 00:32:54.716682 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Sep 11 00:32:54.716733 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Sep 11 00:32:54.716784 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 11 00:32:54.716833 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 11 00:32:54.716884 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.716939 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.716993 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 11 00:32:54.717044 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 11 00:32:54.717094 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 11 00:32:54.717144 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.718263 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.718326 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 11 00:32:54.718386 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 11 00:32:54.718441 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 11 00:32:54.718492 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.718548 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.718600 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 11 00:32:54.718651 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 11 00:32:54.718701 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 11 00:32:54.718751 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.718806 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.718860 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 11 00:32:54.718910 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 11 00:32:54.718960 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 11 
00:32:54.719010 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.719066 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.719117 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 11 00:32:54.720190 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 11 00:32:54.720255 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 11 00:32:54.720311 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.720370 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.720423 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 11 00:32:54.720475 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 11 00:32:54.720526 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 11 00:32:54.720577 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.720636 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.720687 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 11 00:32:54.720738 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 11 00:32:54.720789 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 11 00:32:54.720839 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 11 00:32:54.720889 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.720944 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.720998 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 11 00:32:54.721048 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 11 00:32:54.721099 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 11 00:32:54.721153 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 11 00:32:54.721377 kernel: pci 0000:00:16.2: PME# supported 
from D0 D3hot D3cold Sep 11 00:32:54.721437 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.721489 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 11 00:32:54.721542 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 11 00:32:54.721592 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 11 00:32:54.721642 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.721697 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.721747 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 11 00:32:54.721797 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 11 00:32:54.721847 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 11 00:32:54.721899 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.721954 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.722005 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 11 00:32:54.722055 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 11 00:32:54.722104 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 11 00:32:54.722154 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.722298 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.722358 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 11 00:32:54.722408 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 11 00:32:54.722458 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 11 00:32:54.722508 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.722566 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.722616 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 11 
00:32:54.722667 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 11 00:32:54.722719 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 11 00:32:54.722770 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.722824 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.722890 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 11 00:32:54.722950 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 11 00:32:54.723010 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 11 00:32:54.723070 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 11 00:32:54.723137 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.723210 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.723262 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 11 00:32:54.723312 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 11 00:32:54.723365 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 11 00:32:54.723415 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 11 00:32:54.723465 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.723520 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.723571 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 11 00:32:54.723621 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 11 00:32:54.723671 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 11 00:32:54.723723 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 11 00:32:54.723776 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.723832 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.723883 kernel: pci 
0000:00:17.3: PCI bridge to [bus 16] Sep 11 00:32:54.723933 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 11 00:32:54.723984 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 11 00:32:54.724034 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.724094 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.724148 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 11 00:32:54.724214 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 11 00:32:54.724267 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 11 00:32:54.724318 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.724372 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.724424 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 11 00:32:54.724477 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 11 00:32:54.724527 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 11 00:32:54.724577 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.724632 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.724684 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 11 00:32:54.724734 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 11 00:32:54.724784 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 11 00:32:54.724834 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.724891 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.724943 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 11 00:32:54.724996 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 11 00:32:54.725047 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Sep 11 00:32:54.725097 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.725153 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.725215 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 11 00:32:54.725269 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 11 00:32:54.725319 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 11 00:32:54.725373 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 11 00:32:54.725423 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.725480 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.725531 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 11 00:32:54.725590 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 11 00:32:54.725644 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 11 00:32:54.725693 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 11 00:32:54.725756 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.725820 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.725872 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 11 00:32:54.725922 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 11 00:32:54.725972 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 11 00:32:54.726026 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.726080 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.726131 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 11 00:32:54.726191 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 11 00:32:54.726242 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 
11 00:32:54.726292 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.726372 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.727249 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 11 00:32:54.727315 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 11 00:32:54.727373 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 11 00:32:54.727428 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.727488 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.727543 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 11 00:32:54.727596 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 11 00:32:54.727653 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 11 00:32:54.727706 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.727764 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.727818 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 11 00:32:54.727871 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 11 00:32:54.727924 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 11 00:32:54.727976 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.728034 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:32:54.728092 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 11 00:32:54.728145 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 11 00:32:54.728215 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 11 00:32:54.728269 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.728328 kernel: pci_bus 0000:01: extended config space not accessible Sep 11 00:32:54.728383 kernel: pci 
0000:00:01.0: PCI bridge to [bus 01] Sep 11 00:32:54.728438 kernel: pci_bus 0000:02: extended config space not accessible Sep 11 00:32:54.728450 kernel: acpiphp: Slot [32] registered Sep 11 00:32:54.728457 kernel: acpiphp: Slot [33] registered Sep 11 00:32:54.728463 kernel: acpiphp: Slot [34] registered Sep 11 00:32:54.728468 kernel: acpiphp: Slot [35] registered Sep 11 00:32:54.728474 kernel: acpiphp: Slot [36] registered Sep 11 00:32:54.728480 kernel: acpiphp: Slot [37] registered Sep 11 00:32:54.728486 kernel: acpiphp: Slot [38] registered Sep 11 00:32:54.728492 kernel: acpiphp: Slot [39] registered Sep 11 00:32:54.728498 kernel: acpiphp: Slot [40] registered Sep 11 00:32:54.728505 kernel: acpiphp: Slot [41] registered Sep 11 00:32:54.728511 kernel: acpiphp: Slot [42] registered Sep 11 00:32:54.728517 kernel: acpiphp: Slot [43] registered Sep 11 00:32:54.728523 kernel: acpiphp: Slot [44] registered Sep 11 00:32:54.728529 kernel: acpiphp: Slot [45] registered Sep 11 00:32:54.728535 kernel: acpiphp: Slot [46] registered Sep 11 00:32:54.728541 kernel: acpiphp: Slot [47] registered Sep 11 00:32:54.728546 kernel: acpiphp: Slot [48] registered Sep 11 00:32:54.728552 kernel: acpiphp: Slot [49] registered Sep 11 00:32:54.728559 kernel: acpiphp: Slot [50] registered Sep 11 00:32:54.728565 kernel: acpiphp: Slot [51] registered Sep 11 00:32:54.728571 kernel: acpiphp: Slot [52] registered Sep 11 00:32:54.728577 kernel: acpiphp: Slot [53] registered Sep 11 00:32:54.728582 kernel: acpiphp: Slot [54] registered Sep 11 00:32:54.728588 kernel: acpiphp: Slot [55] registered Sep 11 00:32:54.728594 kernel: acpiphp: Slot [56] registered Sep 11 00:32:54.728600 kernel: acpiphp: Slot [57] registered Sep 11 00:32:54.728606 kernel: acpiphp: Slot [58] registered Sep 11 00:32:54.728612 kernel: acpiphp: Slot [59] registered Sep 11 00:32:54.728619 kernel: acpiphp: Slot [60] registered Sep 11 00:32:54.728624 kernel: acpiphp: Slot [61] registered Sep 11 00:32:54.728630 kernel: acpiphp: Slot 
[62] registered Sep 11 00:32:54.728636 kernel: acpiphp: Slot [63] registered Sep 11 00:32:54.728690 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Sep 11 00:32:54.728743 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Sep 11 00:32:54.728796 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Sep 11 00:32:54.728849 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Sep 11 00:32:54.728904 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Sep 11 00:32:54.728956 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Sep 11 00:32:54.729016 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Sep 11 00:32:54.729071 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Sep 11 00:32:54.729125 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Sep 11 00:32:54.730228 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Sep 11 00:32:54.730393 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 11 00:32:54.730455 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 11 00:32:54.730528 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 11 00:32:54.730591 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 11 00:32:54.730645 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 11 00:32:54.730698 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 11 00:32:54.730751 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 11 00:32:54.730805 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 11 00:32:54.730859 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 11 00:32:54.730915 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 11 00:32:54.730974 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Sep 11 00:32:54.731026 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Sep 11 00:32:54.731077 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Sep 11 00:32:54.731133 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Sep 11 00:32:54.731208 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Sep 11 00:32:54.731280 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Sep 11 00:32:54.731337 kernel: pci 0000:0b:00.0: supports D1 D2 Sep 11 00:32:54.731443 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 11 00:32:54.731501 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 11 00:32:54.731555 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 11 00:32:54.731609 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 11 00:32:54.731662 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 11 00:32:54.731714 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 11 00:32:54.731772 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 11 00:32:54.731838 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 11 00:32:54.731892 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 11 00:32:54.731944 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 11 00:32:54.731997 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 11 00:32:54.732050 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 11 00:32:54.732101 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 11 00:32:54.732154 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 11 00:32:54.732484 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 11 00:32:54.732590 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 11 00:32:54.732701 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 11 00:32:54.732796 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 11 00:32:54.732896 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 11 00:32:54.733031 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 11 00:32:54.733129 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 11 00:32:54.734572 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 11 00:32:54.734639 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 11 00:32:54.734695 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 11 00:32:54.734759 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 11 00:32:54.734815 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 11 00:32:54.734825 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Sep 11 00:32:54.734832 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Sep 11 00:32:54.734841 kernel: ACPI: PCI: Interrupt link LNKB 
disabled Sep 11 00:32:54.734847 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 11 00:32:54.734853 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Sep 11 00:32:54.734859 kernel: iommu: Default domain type: Translated Sep 11 00:32:54.734865 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 11 00:32:54.734871 kernel: PCI: Using ACPI for IRQ routing Sep 11 00:32:54.734877 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 11 00:32:54.734883 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Sep 11 00:32:54.734889 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Sep 11 00:32:54.734962 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Sep 11 00:32:54.735019 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Sep 11 00:32:54.735070 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 11 00:32:54.735079 kernel: vgaarb: loaded Sep 11 00:32:54.735086 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Sep 11 00:32:54.735092 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Sep 11 00:32:54.735098 kernel: clocksource: Switched to clocksource tsc-early Sep 11 00:32:54.735104 kernel: VFS: Disk quotas dquot_6.6.0 Sep 11 00:32:54.735110 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 11 00:32:54.735118 kernel: pnp: PnP ACPI init Sep 11 00:32:54.735862 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Sep 11 00:32:54.735923 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Sep 11 00:32:54.735973 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Sep 11 00:32:54.736024 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Sep 11 00:32:54.736074 kernel: pnp 00:06: [dma 2] Sep 11 00:32:54.736127 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Sep 11 00:32:54.736207 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Sep 11 
00:32:54.736254 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Sep 11 00:32:54.736263 kernel: pnp: PnP ACPI: found 8 devices Sep 11 00:32:54.736270 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 11 00:32:54.736276 kernel: NET: Registered PF_INET protocol family Sep 11 00:32:54.736283 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 11 00:32:54.736289 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 11 00:32:54.736295 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 11 00:32:54.736303 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 11 00:32:54.736309 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 11 00:32:54.736315 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 11 00:32:54.736321 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 11 00:32:54.736327 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 11 00:32:54.736333 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 11 00:32:54.736339 kernel: NET: Registered PF_XDP protocol family Sep 11 00:32:54.736393 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Sep 11 00:32:54.736452 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 11 00:32:54.736563 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 11 00:32:54.736618 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 11 00:32:54.736671 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 11 00:32:54.736724 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Sep 11 00:32:54.736776 
kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Sep 11 00:32:54.736828 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Sep 11 00:32:54.736881 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Sep 11 00:32:54.736937 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Sep 11 00:32:54.736990 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Sep 11 00:32:54.737042 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Sep 11 00:32:54.737095 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Sep 11 00:32:54.737146 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Sep 11 00:32:54.737256 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Sep 11 00:32:54.737310 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Sep 11 00:32:54.737361 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Sep 11 00:32:54.737417 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Sep 11 00:32:54.737469 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Sep 11 00:32:54.737521 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Sep 11 00:32:54.737646 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Sep 11 00:32:54.737712 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Sep 11 00:32:54.737769 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Sep 11 00:32:54.737824 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Sep 11 00:32:54.737877 kernel: pci 
0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Sep 11 00:32:54.737933 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:32:54.737984 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Sep 11 00:32:54.738035 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:32:54.738086 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Sep 11 00:32:54.738137 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:32:54.738198 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Sep 11 00:32:54.738251 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:32:54.738305 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Sep 11 00:32:54.738358 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:32:54.738409 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Sep 11 00:32:54.738460 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:32:54.738510 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Sep 11 00:32:54.738561 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:32:54.738612 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Sep 11 00:32:54.738663 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:32:54.738716 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Sep 11 00:32:54.738767 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:32:54.738818 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Sep 11 00:32:54.738869 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space 
Sep 11 00:32:54.738919 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.738971 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.739022 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.739073 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.739125 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.739197 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.740534 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.740590 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.740642 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.740694 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.740745 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.740797 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.740851 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.740902 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.740953 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.741004 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.741055 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.741106 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.741158 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.741228 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.741283 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.741335 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.741386 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.741437 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.741486 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.741536 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.741587 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.741636 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.741688 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.741739 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.741789 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.741838 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.741888 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.741938 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.741988 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.742038 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.742088 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.742138 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.742212 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.742265 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.742315 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.742366 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.742421 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.742472 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.742522 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.742573 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.742623 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.742675 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.742726 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.742777 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.742827 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.742878 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.742928 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.742978 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.743029 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.743084 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.743135 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.743383 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.743437 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.743490 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.743542 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.743594 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.743645 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.743700 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space
Sep 11 00:32:54.743751 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign
Sep 11 00:32:54.743803 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 11 00:32:54.743855 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Sep 11 00:32:54.743911 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Sep 11 00:32:54.743961 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Sep 11 00:32:54.744011 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Sep 11 00:32:54.744065 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned
Sep 11 00:32:54.744121 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Sep 11 00:32:54.745611 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Sep 11 00:32:54.745670 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Sep 11 00:32:54.745722 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Sep 11 00:32:54.745775 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Sep 11 00:32:54.745826 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Sep 11 00:32:54.745876 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Sep 11 00:32:54.745927 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Sep 11 00:32:54.745980 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Sep 11 00:32:54.746031 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Sep 11 00:32:54.746085 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Sep 11 00:32:54.746135 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Sep 11 00:32:54.746206 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Sep 11 00:32:54.746258 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Sep 11 00:32:54.746308 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Sep 11 00:32:54.746358 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Sep 11 00:32:54.746408 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Sep 11 00:32:54.746460 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Sep 11 00:32:54.746514 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Sep 11 00:32:54.746564 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Sep 11 00:32:54.746614 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Sep 11 00:32:54.746666 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Sep 11 00:32:54.746716 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Sep 11 00:32:54.746768 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Sep 11 00:32:54.746820 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Sep 11 00:32:54.746873 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Sep 11 00:32:54.746924 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Sep 11 00:32:54.746978 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned
Sep 11 00:32:54.747030 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Sep 11 00:32:54.747080 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Sep 11 00:32:54.747130 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Sep 11 00:32:54.747195 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Sep 11 00:32:54.747248 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Sep 11 00:32:54.747302 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Sep 11 00:32:54.747353 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Sep 11 00:32:54.747402 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Sep 11 00:32:54.747456 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Sep 11 00:32:54.747506 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Sep 11 00:32:54.747559 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Sep 11 00:32:54.747628 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Sep 11 00:32:54.747688 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Sep 11 00:32:54.747754 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Sep 11 00:32:54.747817 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Sep 11 00:32:54.747891 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Sep 11 00:32:54.747957 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Sep 11 00:32:54.748019 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Sep 11 00:32:54.748090 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Sep 11 00:32:54.748153 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Sep 11 00:32:54.748231 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Sep 11 00:32:54.748286 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Sep 11 00:32:54.748343 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Sep 11 00:32:54.748404 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Sep 11 00:32:54.748455 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Sep 11 00:32:54.748506 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Sep 11 00:32:54.748559 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Sep 11 00:32:54.748610 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Sep 11 00:32:54.748660 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Sep 11 00:32:54.748713 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Sep 11 00:32:54.748764 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Sep 11 00:32:54.748816 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Sep 11 00:32:54.748866 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Sep 11 00:32:54.748916 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Sep 11 00:32:54.748967 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Sep 11 00:32:54.749018 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Sep 11 00:32:54.749068 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Sep 11 00:32:54.749127 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Sep 11 00:32:54.749537 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Sep 11 00:32:54.749597 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Sep 11 00:32:54.749649 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Sep 11 00:32:54.749699 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Sep 11 00:32:54.749751 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Sep 11 00:32:54.749801 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Sep 11 00:32:54.749852 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Sep 11 00:32:54.749904 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Sep 11 00:32:54.749954 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Sep 11 00:32:54.750006 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Sep 11 00:32:54.750057 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Sep 11 00:32:54.750106 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Sep 11 00:32:54.750156 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Sep 11 00:32:54.750245 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Sep 11 00:32:54.750296 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Sep 11 00:32:54.750348 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Sep 11 00:32:54.750404 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Sep 11 00:32:54.750456 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Sep 11 00:32:54.750506 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Sep 11 00:32:54.750556 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Sep 11 00:32:54.750609 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Sep 11 00:32:54.750659 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Sep 11 00:32:54.750709 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Sep 11 00:32:54.750760 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Sep 11 00:32:54.750811 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Sep 11 00:32:54.750864 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Sep 11 00:32:54.750914 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Sep 11 00:32:54.750969 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Sep 11 00:32:54.751020 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Sep 11 00:32:54.751070 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Sep 11 00:32:54.751123 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Sep 11 00:32:54.751185 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Sep 11 00:32:54.751239 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Sep 11 00:32:54.751294 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Sep 11 00:32:54.751349 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Sep 11 00:32:54.751400 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Sep 11 00:32:54.751452 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Sep 11 00:32:54.751661 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Sep 11 00:32:54.751843 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Sep 11 00:32:54.751913 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Sep 11 00:32:54.751992 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Sep 11 00:32:54.752045 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Sep 11 00:32:54.752095 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window]
Sep 11 00:32:54.752140 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window]
Sep 11 00:32:54.752201 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window]
Sep 11 00:32:54.752246 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window]
Sep 11 00:32:54.752290 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window]
Sep 11 00:32:54.752346 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff]
Sep 11 00:32:54.752393 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]
Sep 11 00:32:54.752439 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref]
Sep 11 00:32:54.752486 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window]
Sep 11 00:32:54.752532 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window]
Sep 11 00:32:54.752581 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window]
Sep 11 00:32:54.752628 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window]
Sep 11 00:32:54.752677 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window]
Sep 11 00:32:54.752729 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff]
Sep 11 00:32:54.752777 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff]
Sep 11 00:32:54.752823 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]
Sep 11 00:32:54.752874 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff]
Sep 11 00:32:54.752922 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff]
Sep 11 00:32:54.752967 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref]
Sep 11 00:32:54.753020 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff]
Sep 11 00:32:54.753067 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff]
Sep 11 00:32:54.753112 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref]
Sep 11 00:32:54.753175 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff]
Sep 11 00:32:54.753230 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref]
Sep 11 00:32:54.753281 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff]
Sep 11 00:32:54.753330 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref]
Sep 11 00:32:54.753382 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff]
Sep 11 00:32:54.753428 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref]
Sep 11 00:32:54.753478 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff]
Sep 11 00:32:54.753534 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref]
Sep 11 00:32:54.753594 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff]
Sep 11 00:32:54.753651 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref]
Sep 11 00:32:54.753707 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff]
Sep 11 00:32:54.753754 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff]
Sep 11 00:32:54.753800 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref]
Sep 11 00:32:54.753851 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff]
Sep 11 00:32:54.753897 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff]
Sep 11 00:32:54.753945 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref]
Sep 11 00:32:54.753995 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff]
Sep 11 00:32:54.754041 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff]
Sep 11 00:32:54.754086 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref]
Sep 11 00:32:54.754139 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff]
Sep 11 00:32:54.754197 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref]
Sep 11 00:32:54.754248 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff]
Sep 11 00:32:54.754296 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref]
Sep 11 00:32:54.754350 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff]
Sep 11 00:32:54.754398 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref]
Sep 11 00:32:54.754447 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff]
Sep 11 00:32:54.754494 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref]
Sep 11 00:32:54.754543 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff]
Sep 11 00:32:54.754592 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref]
Sep 11 00:32:54.754643 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff]
Sep 11 00:32:54.754689 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff]
Sep 11 00:32:54.754735 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref]
Sep 11 00:32:54.754791 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff]
Sep 11 00:32:54.754844 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff]
Sep 11 00:32:54.754905 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref]
Sep 11 00:32:54.754968 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff]
Sep 11 00:32:54.755025 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff]
Sep 11 00:32:54.755080 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref]
Sep 11 00:32:54.755143 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff]
Sep 11 00:32:54.755208 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref]
Sep 11 00:32:54.755263 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff]
Sep 11 00:32:54.755319 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref]
Sep 11 00:32:54.755379 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff]
Sep 11 00:32:54.755433 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref]
Sep 11 00:32:54.755743 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff]
Sep 11 00:32:54.755797 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref]
Sep 11 00:32:54.755849 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff]
Sep 11 00:32:54.755899 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref]
Sep 11 00:32:54.755949 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]
Sep 11 00:32:54.755996 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff]
Sep 11 00:32:54.756042 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref]
Sep 11 00:32:54.756091 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff]
Sep 11 00:32:54.757447 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff]
Sep 11 00:32:54.757495 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref]
Sep 11 00:32:54.757552 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff]
Sep 11 00:32:54.757599 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref]
Sep 11 00:32:54.757652 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff]
Sep 11 00:32:54.757698 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref]
Sep 11 00:32:54.757748 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff]
Sep 11 00:32:54.757794 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref]
Sep 11 00:32:54.757847 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff]
Sep 11 00:32:54.757894 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref]
Sep 11 00:32:54.757945 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff]
Sep 11 00:32:54.757991 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref]
Sep 11 00:32:54.758042 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff]
Sep 11 00:32:54.758088 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref]
Sep 11 00:32:54.758145 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 11 00:32:54.758157 kernel: PCI: CLS 32 bytes, default 64
Sep 11 00:32:54.758212 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 11 00:32:54.758220 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Sep 11 00:32:54.758227 kernel: clocksource: Switched to clocksource tsc
Sep 11 00:32:54.758233 kernel: Initialise system trusted keyrings
Sep 11 00:32:54.758239 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 11 00:32:54.758245 kernel: Key type asymmetric registered
Sep 11 00:32:54.758250 kernel: Asymmetric key parser 'x509' registered
Sep 11 00:32:54.758259 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 11 00:32:54.758265 kernel: io scheduler mq-deadline registered
Sep 11 00:32:54.758270 kernel: io scheduler kyber registered
Sep 11 00:32:54.758276 kernel: io scheduler bfq registered
Sep 11 00:32:54.758336 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24
Sep 11 00:32:54.758390 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.758442 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Sep 11 00:32:54.758493 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.758549 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Sep 11 00:32:54.758600 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.758653 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Sep 11 00:32:54.758705 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.758758 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Sep 11 00:32:54.758810 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.758862 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Sep 11 00:32:54.758915 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.758968 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Sep 11 00:32:54.759018 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.759070 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Sep 11 00:32:54.759121 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.759186 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Sep 11 00:32:54.759242 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.759304 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Sep 11 00:32:54.759364 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.759416 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Sep 11 00:32:54.759466 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.759518 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Sep 11 00:32:54.759580 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.759634 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Sep 11 00:32:54.759686 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.759740 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Sep 11 00:32:54.759791 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.759843 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Sep 11 00:32:54.759895 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.759947 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Sep 11 00:32:54.759997 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.760049 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Sep 11 00:32:54.760102 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.760154 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Sep 11 00:32:54.760439 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.760495 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Sep 11 00:32:54.760547 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.760599 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Sep 11 00:32:54.760651 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.760706 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Sep 11 00:32:54.760757 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.760809 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Sep 11 00:32:54.760859 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.760911 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Sep 11 00:32:54.760962 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.761014 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Sep 11 00:32:54.761065 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.761121 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Sep 11 00:32:54.761220 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.761278 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Sep 11 00:32:54.761330 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.761382 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Sep 11 00:32:54.761434 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.761486 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Sep 11 00:32:54.761540 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.761593 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Sep 11 00:32:54.761644 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.761696 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Sep 11 00:32:54.761747 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.761798 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Sep 11 00:32:54.761848 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.761900 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Sep 11 00:32:54.761953 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:32:54.761965 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 11 00:32:54.761972 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 11 00:32:54.761978 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 11 00:32:54.761985 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Sep 11 00:32:54.761991 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 11 00:32:54.761998 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 11 00:32:54.762051 kernel: rtc_cmos 00:01: registered as rtc0
Sep 11 00:32:54.762099 kernel: rtc_cmos 00:01: setting system clock to 2025-09-11T00:32:54 UTC (1757550774)
Sep 11 00:32:54.762108 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 11 00:32:54.762153 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Sep 11 00:32:54.762180 kernel: intel_pstate: CPU model not supported
Sep 11 00:32:54.762190 kernel: NET: Registered PF_INET6 protocol family
Sep 11 00:32:54.762196 kernel: Segment Routing with IPv6
Sep 11 00:32:54.762206 kernel: In-situ OAM (IOAM) with IPv6
Sep 11 00:32:54.762212 kernel: NET: Registered PF_PACKET protocol family
Sep 11 00:32:54.762220 kernel: Key type dns_resolver registered
Sep 11 00:32:54.762226 kernel: IPI shorthand broadcast: enabled
Sep 11 00:32:54.762233 kernel: sched_clock: Marking stable (2711003689, 171549336)->(2897110174, -14557149)
Sep 11 00:32:54.762244 kernel: registered taskstats version 1
Sep 11 00:32:54.762252 kernel: Loading compiled-in X.509 certificates
Sep 11 00:32:54.762258 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 8138ce5002a1b572fd22b23ac238f29bab3f249f'
Sep 11 00:32:54.762265 kernel: Demotion targets for Node 0: null
Sep 11 00:32:54.762271 kernel: Key type .fscrypt registered
Sep 11 00:32:54.762280 kernel: Key type fscrypt-provisioning registered
Sep 11 00:32:54.762289 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 11 00:32:54.762299 kernel: ima: Allocated hash algorithm: sha1 Sep 11 00:32:54.762305 kernel: ima: No architecture policies found Sep 11 00:32:54.762311 kernel: clk: Disabling unused clocks Sep 11 00:32:54.762318 kernel: Warning: unable to open an initial console. Sep 11 00:32:54.762324 kernel: Freeing unused kernel image (initmem) memory: 53832K Sep 11 00:32:54.762331 kernel: Write protecting the kernel read-only data: 24576k Sep 11 00:32:54.762342 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 11 00:32:54.762349 kernel: Run /init as init process Sep 11 00:32:54.762355 kernel: with arguments: Sep 11 00:32:54.762362 kernel: /init Sep 11 00:32:54.762368 kernel: with environment: Sep 11 00:32:54.762374 kernel: HOME=/ Sep 11 00:32:54.762380 kernel: TERM=linux Sep 11 00:32:54.762387 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 11 00:32:54.762394 systemd[1]: Successfully made /usr/ read-only. Sep 11 00:32:54.762403 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 11 00:32:54.762411 systemd[1]: Detected virtualization vmware. Sep 11 00:32:54.762418 systemd[1]: Detected architecture x86-64. Sep 11 00:32:54.762424 systemd[1]: Running in initrd. Sep 11 00:32:54.762430 systemd[1]: No hostname configured, using default hostname. Sep 11 00:32:54.762436 systemd[1]: Hostname set to <localhost>. Sep 11 00:32:54.762443 systemd[1]: Initializing machine ID from random generator. Sep 11 00:32:54.762449 systemd[1]: Queued start job for default target initrd.target. Sep 11 00:32:54.762457 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:32:54.762463 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:32:54.762470 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 11 00:32:54.762477 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 11 00:32:54.762483 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 11 00:32:54.762490 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 11 00:32:54.762497 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 11 00:32:54.762505 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 11 00:32:54.762512 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:32:54.762518 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:32:54.762524 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:32:54.762531 systemd[1]: Reached target slices.target - Slice Units. Sep 11 00:32:54.762537 systemd[1]: Reached target swap.target - Swaps. Sep 11 00:32:54.762544 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:32:54.762550 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 11 00:32:54.762557 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 11 00:32:54.762564 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 11 00:32:54.762570 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 11 00:32:54.762577 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 11 00:32:54.762583 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 11 00:32:54.762590 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:32:54.762596 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 00:32:54.762603 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 11 00:32:54.762609 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 11 00:32:54.762618 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 11 00:32:54.762625 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 11 00:32:54.762631 systemd[1]: Starting systemd-fsck-usr.service... Sep 11 00:32:54.762638 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 11 00:32:54.762644 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 11 00:32:54.762650 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:32:54.762671 systemd-journald[243]: Collecting audit messages is disabled. Sep 11 00:32:54.762690 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 11 00:32:54.762697 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:32:54.762705 systemd[1]: Finished systemd-fsck-usr.service. Sep 11 00:32:54.762712 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 11 00:32:54.762718 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:32:54.762725 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 11 00:32:54.762732 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Sep 11 00:32:54.762738 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 11 00:32:54.762745 kernel: Bridge firewalling registered Sep 11 00:32:54.762752 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 11 00:32:54.762761 systemd-journald[243]: Journal started Sep 11 00:32:54.762776 systemd-journald[243]: Runtime Journal (/run/log/journal/96a890cd711c4f6086bd41855aa78e5b) is 4.8M, max 38.9M, 34M free. Sep 11 00:32:54.729349 systemd-modules-load[244]: Inserted module 'overlay' Sep 11 00:32:54.756200 systemd-modules-load[244]: Inserted module 'br_netfilter' Sep 11 00:32:54.765234 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 11 00:32:54.765260 systemd[1]: Started systemd-journald.service - Journal Service. Sep 11 00:32:54.769247 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:32:54.770235 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:32:54.770599 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:32:54.771443 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 11 00:32:54.775727 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 11 00:32:54.780410 systemd-tmpfiles[274]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 11 00:32:54.782586 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:32:54.784419 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:32:54.786232 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 11 00:32:54.787186 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a Sep 11 00:32:54.814829 systemd-resolved[293]: Positive Trust Anchors: Sep 11 00:32:54.815040 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:32:54.815063 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:32:54.817709 systemd-resolved[293]: Defaulting to hostname 'linux'. Sep 11 00:32:54.818323 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:32:54.818481 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:32:54.839183 kernel: SCSI subsystem initialized Sep 11 00:32:54.856186 kernel: Loading iSCSI transport class v2.0-870. 
Sep 11 00:32:54.864182 kernel: iscsi: registered transport (tcp) Sep 11 00:32:54.887209 kernel: iscsi: registered transport (qla4xxx) Sep 11 00:32:54.887254 kernel: QLogic iSCSI HBA Driver Sep 11 00:32:54.899669 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 11 00:32:54.918463 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:32:54.919644 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 11 00:32:54.942845 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 11 00:32:54.943937 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 11 00:32:54.982184 kernel: raid6: avx2x4 gen() 46331 MB/s Sep 11 00:32:54.999187 kernel: raid6: avx2x2 gen() 52352 MB/s Sep 11 00:32:55.016396 kernel: raid6: avx2x1 gen() 43429 MB/s Sep 11 00:32:55.016440 kernel: raid6: using algorithm avx2x2 gen() 52352 MB/s Sep 11 00:32:55.034403 kernel: raid6: .... xor() 31757 MB/s, rmw enabled Sep 11 00:32:55.034452 kernel: raid6: using avx2x2 recovery algorithm Sep 11 00:32:55.049192 kernel: xor: automatically using best checksumming function avx Sep 11 00:32:55.159193 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 11 00:32:55.162551 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 11 00:32:55.163625 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:32:55.182597 systemd-udevd[493]: Using default interface naming scheme 'v255'. Sep 11 00:32:55.186275 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:32:55.186926 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 11 00:32:55.198485 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation Sep 11 00:32:55.212603 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 11 00:32:55.213415 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 11 00:32:55.292629 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:32:55.293891 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 11 00:32:55.358189 kernel: VMware PVSCSI driver - version 1.0.7.0-k Sep 11 00:32:55.358232 kernel: vmw_pvscsi: using 64bit dma Sep 11 00:32:55.364191 kernel: vmw_pvscsi: max_id: 16 Sep 11 00:32:55.364222 kernel: vmw_pvscsi: setting ring_pages to 8 Sep 11 00:32:55.375216 kernel: vmw_pvscsi: enabling reqCallThreshold Sep 11 00:32:55.375254 kernel: vmw_pvscsi: driver-based request coalescing enabled Sep 11 00:32:55.375263 kernel: vmw_pvscsi: using MSI-X Sep 11 00:32:55.375271 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Sep 11 00:32:55.395231 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Sep 11 00:32:55.395307 (udev-worker)[545]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Sep 11 00:32:55.397576 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Sep 11 00:32:55.397700 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Sep 11 00:32:55.397776 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Sep 11 00:32:55.403032 kernel: cryptd: max_cpu_qlen set to 1000 Sep 11 00:32:55.404158 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:32:55.404280 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:32:55.405548 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:32:55.410037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:32:55.412267 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Sep 11 00:32:55.412383 kernel: libata version 3.00 loaded. 
Sep 11 00:32:55.416183 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Sep 11 00:32:55.420186 kernel: ata_piix 0000:00:07.1: version 2.13 Sep 11 00:32:55.422413 kernel: scsi host1: ata_piix Sep 11 00:32:55.422451 kernel: AES CTR mode by8 optimization enabled Sep 11 00:32:55.426308 kernel: scsi host2: ata_piix Sep 11 00:32:55.426417 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Sep 11 00:32:55.426426 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Sep 11 00:32:55.426503 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Sep 11 00:32:55.429181 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Sep 11 00:32:55.429299 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 11 00:32:55.429374 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Sep 11 00:32:55.429437 kernel: sd 0:0:0:0: [sda] Cache data unavailable Sep 11 00:32:55.429500 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Sep 11 00:32:55.450078 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:32:55.490191 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 11 00:32:55.492183 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 11 00:32:55.595186 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Sep 11 00:32:55.599192 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Sep 11 00:32:55.629479 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Sep 11 00:32:55.629654 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 11 00:32:55.654185 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 11 00:32:55.662036 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Sep 11 00:32:55.671184 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. 
Sep 11 00:32:55.679971 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Sep 11 00:32:55.685384 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Sep 11 00:32:55.685559 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Sep 11 00:32:55.687249 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 11 00:32:55.723803 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 11 00:32:55.734228 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 11 00:32:55.961097 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 11 00:32:55.961510 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 00:32:55.961665 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:32:55.961901 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 11 00:32:55.962645 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 11 00:32:55.986113 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 11 00:32:56.739055 disk-uuid[647]: The operation has completed successfully. Sep 11 00:32:56.739331 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 11 00:32:56.778095 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 11 00:32:56.778161 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 11 00:32:56.788870 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 11 00:32:56.798989 sh[677]: Success Sep 11 00:32:56.814107 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 11 00:32:56.814142 kernel: device-mapper: uevent: version 1.0.3 Sep 11 00:32:56.814151 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 11 00:32:56.821237 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 11 00:32:56.867188 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 11 00:32:56.867734 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 11 00:32:56.877209 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 11 00:32:56.891054 kernel: BTRFS: device fsid f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (689) Sep 11 00:32:56.891079 kernel: BTRFS info (device dm-0): first mount of filesystem f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 Sep 11 00:32:56.891092 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:32:56.900186 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 11 00:32:56.900205 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 11 00:32:56.900213 kernel: BTRFS info (device dm-0): enabling free space tree Sep 11 00:32:56.902524 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 11 00:32:56.902831 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 11 00:32:56.903389 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Sep 11 00:32:56.904673 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Sep 11 00:32:56.928179 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (712) Sep 11 00:32:56.930357 kernel: BTRFS info (device sda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:32:56.930380 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:32:56.936284 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 11 00:32:56.936466 kernel: BTRFS info (device sda6): enabling free space tree Sep 11 00:32:56.940179 kernel: BTRFS info (device sda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:32:56.944543 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 11 00:32:56.946423 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 11 00:32:56.984089 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Sep 11 00:32:56.984751 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 11 00:32:57.065793 ignition[731]: Ignition 2.21.0 Sep 11 00:32:57.066272 ignition[731]: Stage: fetch-offline Sep 11 00:32:57.066406 ignition[731]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:32:57.066532 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 11 00:32:57.068206 ignition[731]: parsed url from cmdline: "" Sep 11 00:32:57.068212 ignition[731]: no config URL provided Sep 11 00:32:57.068216 ignition[731]: reading system config file "/usr/lib/ignition/user.ign" Sep 11 00:32:57.068223 ignition[731]: no config at "/usr/lib/ignition/user.ign" Sep 11 00:32:57.068628 ignition[731]: config successfully fetched Sep 11 00:32:57.068653 ignition[731]: parsing config with SHA512: 5f5c3f597e6db14197d34f29f36043cab9a2470cc0edb124f651c16a2a12891d9935b2a4bb70034d73e7d30950bb67eb6c0bbc77c74c62c8783eeee691d91bb8 Sep 11 00:32:57.074195 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 00:32:57.074542 unknown[731]: fetched base config from "system" Sep 11 00:32:57.074548 unknown[731]: fetched user config from "vmware" Sep 11 00:32:57.074798 ignition[731]: fetch-offline: fetch-offline passed Sep 11 00:32:57.074840 ignition[731]: Ignition finished successfully Sep 11 00:32:57.076873 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:32:57.077343 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 00:32:57.105396 systemd-networkd[871]: lo: Link UP Sep 11 00:32:57.105404 systemd-networkd[871]: lo: Gained carrier Sep 11 00:32:57.106223 systemd-networkd[871]: Enumeration completed Sep 11 00:32:57.106394 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:32:57.106553 systemd[1]: Reached target network.target - Network. Sep 11 00:32:57.106645 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Sep 11 00:32:57.106738 systemd-networkd[871]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Sep 11 00:32:57.107943 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 11 00:32:57.109422 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 11 00:32:57.109568 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 11 00:32:57.110633 systemd-networkd[871]: ens192: Link UP Sep 11 00:32:57.110637 systemd-networkd[871]: ens192: Gained carrier Sep 11 00:32:57.130227 ignition[874]: Ignition 2.21.0 Sep 11 00:32:57.130236 ignition[874]: Stage: kargs Sep 11 00:32:57.130356 ignition[874]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:32:57.130362 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 11 00:32:57.131479 ignition[874]: kargs: kargs passed Sep 11 00:32:57.131729 ignition[874]: Ignition finished successfully Sep 11 00:32:57.133510 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 11 00:32:57.134500 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 11 00:32:57.149399 ignition[881]: Ignition 2.21.0 Sep 11 00:32:57.149656 ignition[881]: Stage: disks Sep 11 00:32:57.149751 ignition[881]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:32:57.149757 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 11 00:32:57.151031 ignition[881]: disks: disks passed Sep 11 00:32:57.151091 ignition[881]: Ignition finished successfully Sep 11 00:32:57.152631 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 11 00:32:57.153001 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 11 00:32:57.153264 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 11 00:32:57.153530 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:32:57.153757 systemd[1]: Reached target sysinit.target - System Initialization. 
Sep 11 00:32:57.153991 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:32:57.154770 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 11 00:32:57.182013 systemd-fsck[890]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Sep 11 00:32:57.182945 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 11 00:32:57.184028 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 11 00:32:58.050208 kernel: EXT4-fs (sda9): mounted filesystem 6a9ce0af-81d0-4628-9791-e47488ed2744 r/w with ordered data mode. Quota mode: none. Sep 11 00:32:58.050689 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 11 00:32:58.051027 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 11 00:32:58.064458 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 00:32:58.076500 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 11 00:32:58.076794 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 11 00:32:58.076825 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 11 00:32:58.076840 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 00:32:58.080098 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 11 00:32:58.080903 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 11 00:32:58.160187 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (898) Sep 11 00:32:58.169975 kernel: BTRFS info (device sda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:32:58.170010 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:32:58.210775 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 11 00:32:58.210819 kernel: BTRFS info (device sda6): enabling free space tree Sep 11 00:32:58.211921 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 11 00:32:58.226349 initrd-setup-root[922]: cut: /sysroot/etc/passwd: No such file or directory Sep 11 00:32:58.229376 initrd-setup-root[929]: cut: /sysroot/etc/group: No such file or directory Sep 11 00:32:58.231927 initrd-setup-root[936]: cut: /sysroot/etc/shadow: No such file or directory Sep 11 00:32:58.233870 initrd-setup-root[943]: cut: /sysroot/etc/gshadow: No such file or directory Sep 11 00:32:58.330613 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 11 00:32:58.331286 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 11 00:32:58.332234 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 11 00:32:58.346035 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 11 00:32:58.347173 kernel: BTRFS info (device sda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:32:58.368463 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 11 00:32:58.369057 ignition[1011]: INFO : Ignition 2.21.0 Sep 11 00:32:58.369932 ignition[1011]: INFO : Stage: mount Sep 11 00:32:58.369932 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:32:58.369932 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 11 00:32:58.369932 ignition[1011]: INFO : mount: mount passed Sep 11 00:32:58.369932 ignition[1011]: INFO : Ignition finished successfully Sep 11 00:32:58.371266 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 11 00:32:58.371902 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 11 00:32:59.051764 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 00:32:59.072821 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1023) Sep 11 00:32:59.072858 kernel: BTRFS info (device sda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:32:59.072867 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:32:59.077788 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 11 00:32:59.077824 kernel: BTRFS info (device sda6): enabling free space tree Sep 11 00:32:59.079035 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 11 00:32:59.092261 systemd-networkd[871]: ens192: Gained IPv6LL
Sep 11 00:32:59.097715 ignition[1039]: INFO : Ignition 2.21.0
Sep 11 00:32:59.097715 ignition[1039]: INFO : Stage: files
Sep 11 00:32:59.098358 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:32:59.098358 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 11 00:32:59.099088 ignition[1039]: DEBUG : files: compiled without relabeling support, skipping
Sep 11 00:32:59.112473 ignition[1039]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 11 00:32:59.112473 ignition[1039]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 11 00:32:59.122116 ignition[1039]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 11 00:32:59.122398 ignition[1039]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 11 00:32:59.122594 unknown[1039]: wrote ssh authorized keys file for user: core
Sep 11 00:32:59.122860 ignition[1039]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 11 00:32:59.124655 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 11 00:32:59.124655 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 11 00:32:59.188406 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 11 00:33:00.167289 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 11 00:33:00.167289 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 11 00:33:00.167891 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 11 00:33:00.167891 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:33:00.167891 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:33:00.167891 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:33:00.167891 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:33:00.167891 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:33:00.167891 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:33:00.183921 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:33:00.184159 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:33:00.184159 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 11 00:33:00.186347 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 11 00:33:00.186604 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 11 00:33:00.186604 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 11 00:33:00.785758 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 11 00:33:01.164621 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 11 00:33:01.164621 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 11 00:33:01.176698 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 11 00:33:01.176698 ignition[1039]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 11 00:33:01.189963 ignition[1039]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:33:01.194944 ignition[1039]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:33:01.194944 ignition[1039]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 11 00:33:01.194944 ignition[1039]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 11 00:33:01.195423 ignition[1039]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:33:01.195423 ignition[1039]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:33:01.195423 ignition[1039]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 11 00:33:01.195423 ignition[1039]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 11 00:33:01.559277 ignition[1039]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 00:33:01.562099 ignition[1039]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 00:33:01.562373 ignition[1039]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 11 00:33:01.562373 ignition[1039]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 11 00:33:01.562373 ignition[1039]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 11 00:33:01.562373 ignition[1039]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:33:01.563925 ignition[1039]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:33:01.563925 ignition[1039]: INFO : files: files passed
Sep 11 00:33:01.563925 ignition[1039]: INFO : Ignition finished successfully
Sep 11 00:33:01.563381 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 11 00:33:01.564858 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 11 00:33:01.566314 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 11 00:33:01.572911 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 11 00:33:01.572995 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 11 00:33:01.576089 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:33:01.576089 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:33:01.577204 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:33:01.578102 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:33:01.578454 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 11 00:33:01.578990 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 11 00:33:01.607642 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 11 00:33:01.607715 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 11 00:33:01.608202 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 11 00:33:01.608322 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 11 00:33:01.608525 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 11 00:33:01.609040 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 11 00:33:01.625574 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:33:01.626446 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 11 00:33:01.637245 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:33:01.637563 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:33:01.637914 systemd[1]: Stopped target timers.target - Timer Units.
Sep 11 00:33:01.638206 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 11 00:33:01.638401 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:33:01.638806 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 11 00:33:01.639050 systemd[1]: Stopped target basic.target - Basic System.
Sep 11 00:33:01.639332 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 11 00:33:01.639612 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 00:33:01.639865 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 11 00:33:01.640161 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:33:01.640454 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 11 00:33:01.640784 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:33:01.641142 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 11 00:33:01.641384 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 11 00:33:01.641528 systemd[1]: Stopped target swap.target - Swaps.
Sep 11 00:33:01.641636 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 11 00:33:01.641736 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:33:01.641992 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:33:01.642138 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:33:01.642267 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 11 00:33:01.642318 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:33:01.642470 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 11 00:33:01.642535 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:33:01.642823 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 11 00:33:01.642888 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 00:33:01.643184 systemd[1]: Stopped target paths.target - Path Units.
Sep 11 00:33:01.643312 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 11 00:33:01.647201 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:33:01.647414 systemd[1]: Stopped target slices.target - Slice Units.
Sep 11 00:33:01.647624 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 11 00:33:01.647818 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 11 00:33:01.647883 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:33:01.648111 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 11 00:33:01.648160 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:33:01.648346 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 11 00:33:01.648423 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:33:01.648690 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 11 00:33:01.648749 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 11 00:33:01.649676 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 11 00:33:01.649795 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 11 00:33:01.649892 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:33:01.650647 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 11 00:33:01.652253 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 11 00:33:01.652407 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:33:01.652972 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 11 00:33:01.653081 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:33:01.656953 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 11 00:33:01.657448 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 11 00:33:01.665981 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 11 00:33:01.669789 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 11 00:33:01.669858 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 11 00:33:01.672782 ignition[1096]: INFO : Ignition 2.21.0
Sep 11 00:33:01.672782 ignition[1096]: INFO : Stage: umount
Sep 11 00:33:01.674215 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:33:01.674215 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 11 00:33:01.674215 ignition[1096]: INFO : umount: umount passed
Sep 11 00:33:01.674573 ignition[1096]: INFO : Ignition finished successfully
Sep 11 00:33:01.675589 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 11 00:33:01.675668 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 11 00:33:01.675928 systemd[1]: Stopped target network.target - Network.
Sep 11 00:33:01.676034 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 11 00:33:01.676064 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 11 00:33:01.676219 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 11 00:33:01.676242 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 11 00:33:01.676394 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 11 00:33:01.676416 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 11 00:33:01.676570 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 11 00:33:01.676592 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 11 00:33:01.676738 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 11 00:33:01.676764 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 11 00:33:01.676966 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 11 00:33:01.677287 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 11 00:33:01.682425 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 11 00:33:01.682584 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 11 00:33:01.684597 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 11 00:33:01.684808 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 11 00:33:01.684893 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 11 00:33:01.686229 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 11 00:33:01.687007 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 11 00:33:01.687251 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 11 00:33:01.687287 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:33:01.688246 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 11 00:33:01.688370 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 11 00:33:01.688415 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 00:33:01.688583 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Sep 11 00:33:01.688619 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Sep 11 00:33:01.688766 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 11 00:33:01.688798 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:33:01.689001 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 11 00:33:01.689031 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:33:01.689160 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 11 00:33:01.689204 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:33:01.689446 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:33:01.690986 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 11 00:33:01.691052 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:33:01.703411 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 11 00:33:01.703501 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:33:01.704023 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 11 00:33:01.704065 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:33:01.704321 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 11 00:33:01.704338 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:33:01.704500 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 11 00:33:01.704527 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:33:01.704799 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 11 00:33:01.704826 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:33:01.705101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 11 00:33:01.705125 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:33:01.705888 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 11 00:33:01.705991 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 11 00:33:01.706019 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:33:01.706198 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 11 00:33:01.706221 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:33:01.706383 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:33:01.706407 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:33:01.707353 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 11 00:33:01.707391 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 11 00:33:01.707425 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:33:01.707618 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 11 00:33:01.707670 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 11 00:33:01.722001 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 11 00:33:01.722117 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 11 00:33:01.722529 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 11 00:33:01.723277 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 11 00:33:01.736467 systemd[1]: Switching root.
Sep 11 00:33:01.762006 systemd-journald[243]: Journal stopped
Sep 11 00:33:02.866892 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
Sep 11 00:33:02.866924 kernel: SELinux: policy capability network_peer_controls=1
Sep 11 00:33:02.866933 kernel: SELinux: policy capability open_perms=1
Sep 11 00:33:02.866939 kernel: SELinux: policy capability extended_socket_class=1
Sep 11 00:33:02.866945 kernel: SELinux: policy capability always_check_network=0
Sep 11 00:33:02.866952 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 11 00:33:02.866963 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 11 00:33:02.866973 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 11 00:33:02.866980 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 11 00:33:02.866986 kernel: SELinux: policy capability userspace_initial_context=0
Sep 11 00:33:02.866992 kernel: audit: type=1403 audit(1757550782.145:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 11 00:33:02.866998 systemd[1]: Successfully loaded SELinux policy in 41.297ms.
Sep 11 00:33:02.867007 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.923ms.
Sep 11 00:33:02.867015 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:33:02.867022 systemd[1]: Detected virtualization vmware.
Sep 11 00:33:02.867028 systemd[1]: Detected architecture x86-64.
Sep 11 00:33:02.867036 systemd[1]: Detected first boot.
Sep 11 00:33:02.867043 systemd[1]: Initializing machine ID from random generator.
Sep 11 00:33:02.867050 zram_generator::config[1139]: No configuration found.
Sep 11 00:33:02.867148 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Sep 11 00:33:02.867159 kernel: Guest personality initialized and is active
Sep 11 00:33:02.867195 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 11 00:33:02.867203 kernel: Initialized host personality
Sep 11 00:33:02.867212 kernel: NET: Registered PF_VSOCK protocol family
Sep 11 00:33:02.867219 systemd[1]: Populated /etc with preset unit settings.
Sep 11 00:33:02.867226 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 11 00:33:02.867234 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Sep 11 00:33:02.867240 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 11 00:33:02.867247 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 11 00:33:02.867253 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 11 00:33:02.867261 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 11 00:33:02.867268 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 11 00:33:02.867275 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 11 00:33:02.867282 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 11 00:33:02.867288 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 11 00:33:02.867295 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 11 00:33:02.867302 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 11 00:33:02.867310 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 11 00:33:02.867317 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 11 00:33:02.867324 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:33:02.867333 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:33:02.867348 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 11 00:33:02.867356 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 11 00:33:02.867363 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 11 00:33:02.867370 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:33:02.867379 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 11 00:33:02.867390 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:33:02.867401 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:33:02.867410 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 11 00:33:02.867417 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 11 00:33:02.867424 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:33:02.867431 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 11 00:33:02.867437 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:33:02.867446 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:33:02.867453 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:33:02.867460 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:33:02.867467 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 11 00:33:02.867474 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 11 00:33:02.867482 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 11 00:33:02.867490 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:33:02.867496 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:33:02.867503 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:33:02.867510 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 11 00:33:02.867517 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 11 00:33:02.867524 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 11 00:33:02.867531 systemd[1]: Mounting media.mount - External Media Directory...
Sep 11 00:33:02.867539 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:33:02.867546 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 11 00:33:02.867553 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 11 00:33:02.867562 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 11 00:33:02.867575 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 11 00:33:02.867583 systemd[1]: Reached target machines.target - Containers.
Sep 11 00:33:02.867590 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 11 00:33:02.867597 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Sep 11 00:33:02.867606 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:33:02.867613 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 11 00:33:02.867620 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:33:02.867627 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 00:33:02.867633 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:33:02.867640 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 11 00:33:02.867647 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:33:02.867654 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 11 00:33:02.867663 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 11 00:33:02.867670 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 11 00:33:02.867677 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 11 00:33:02.867684 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 11 00:33:02.867693 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:33:02.867700 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:33:02.867707 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:33:02.867714 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:33:02.867722 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 11 00:33:02.867729 kernel: fuse: init (API version 7.41)
Sep 11 00:33:02.867735 kernel: loop: module loaded
Sep 11 00:33:02.867742 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 11 00:33:02.867748 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:33:02.867755 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 11 00:33:02.867762 systemd[1]: Stopped verity-setup.service.
Sep 11 00:33:02.867770 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:33:02.867776 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 11 00:33:02.867801 systemd-journald[1239]: Collecting audit messages is disabled.
Sep 11 00:33:02.867817 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 11 00:33:02.867825 systemd[1]: Mounted media.mount - External Media Directory.
Sep 11 00:33:02.867832 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 11 00:33:02.867840 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 11 00:33:02.867847 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 11 00:33:02.867854 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 11 00:33:02.867861 systemd-journald[1239]: Journal started
Sep 11 00:33:02.867877 systemd-journald[1239]: Runtime Journal (/run/log/journal/70d98d49baa846e998639f382d9f6b05) is 4.8M, max 38.9M, 34M free.
Sep 11 00:33:02.686798 systemd[1]: Queued start job for default target multi-user.target.
Sep 11 00:33:02.693402 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 11 00:33:02.693656 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 11 00:33:02.868244 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:33:02.868442 jq[1209]: true
Sep 11 00:33:02.871855 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:33:02.872120 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 11 00:33:02.873219 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 11 00:33:02.873513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:33:02.873655 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:33:02.873921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:33:02.874044 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:33:02.874308 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 11 00:33:02.874412 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 11 00:33:02.874622 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:33:02.874723 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:33:02.874975 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:33:02.875266 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 11 00:33:02.885237 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:33:02.891549 jq[1254]: true
Sep 11 00:33:02.888904 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:33:02.893340 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 11 00:33:02.897259 kernel: ACPI: bus type drm_connector registered
Sep 11 00:33:02.899580 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 11 00:33:02.899715 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 11 00:33:02.899738 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 00:33:02.900496 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 11 00:33:02.905414 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 11 00:33:02.905610 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:33:02.909416 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 11 00:33:02.912271 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 11 00:33:02.912444 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 00:33:02.914781 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 11 00:33:02.914923 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 00:33:02.921411 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:33:02.923778 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 11 00:33:02.926360 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 11 00:33:02.928595 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 00:33:02.928752 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 00:33:02.929060 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 11 00:33:02.931463 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 11 00:33:02.931842 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 11 00:33:02.934277 systemd-journald[1239]: Time spent on flushing to /var/log/journal/70d98d49baa846e998639f382d9f6b05 is 93.604ms for 1763 entries. Sep 11 00:33:02.934277 systemd-journald[1239]: System Journal (/var/log/journal/70d98d49baa846e998639f382d9f6b05) is 8M, max 584.8M, 576.8M free. Sep 11 00:33:03.038416 systemd-journald[1239]: Received client request to flush runtime journal. Sep 11 00:33:03.038445 kernel: loop0: detected capacity change from 0 to 146240 Sep 11 00:33:03.023639 ignition[1282]: Ignition 2.21.0 Sep 11 00:33:02.945073 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 11 00:33:03.023868 ignition[1282]: deleting config from guestinfo properties Sep 11 00:33:02.945382 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 11 00:33:03.036104 ignition[1282]: Successfully deleted config Sep 11 00:33:02.946939 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 11 00:33:02.977129 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:33:03.017577 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 11 00:33:03.035580 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:33:03.041100 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 11 00:33:03.045636 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Sep 11 00:33:03.052866 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 11 00:33:03.055456 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 11 00:33:03.058407 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 11 00:33:03.075349 kernel: loop1: detected capacity change from 0 to 221472 Sep 11 00:33:03.093097 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. 
Sep 11 00:33:03.093109 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. Sep 11 00:33:03.098353 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:33:03.112188 kernel: loop2: detected capacity change from 0 to 2960 Sep 11 00:33:03.138186 kernel: loop3: detected capacity change from 0 to 113872 Sep 11 00:33:03.249288 kernel: loop4: detected capacity change from 0 to 146240 Sep 11 00:33:03.288185 kernel: loop5: detected capacity change from 0 to 221472 Sep 11 00:33:03.345187 kernel: loop6: detected capacity change from 0 to 2960 Sep 11 00:33:03.363200 kernel: loop7: detected capacity change from 0 to 113872 Sep 11 00:33:03.424196 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Sep 11 00:33:03.424886 (sd-merge)[1313]: Merged extensions into '/usr'. Sep 11 00:33:03.433401 systemd[1]: Reload requested from client PID 1280 ('systemd-sysext') (unit systemd-sysext.service)... Sep 11 00:33:03.433504 systemd[1]: Reloading... Sep 11 00:33:03.537341 zram_generator::config[1340]: No configuration found. Sep 11 00:33:03.628786 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:33:03.637193 ldconfig[1275]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 11 00:33:03.639797 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 11 00:33:03.688070 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 11 00:33:03.688348 systemd[1]: Reloading finished in 254 ms. Sep 11 00:33:03.705140 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Sep 11 00:33:03.705605 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 11 00:33:03.711290 systemd[1]: Starting ensure-sysext.service... Sep 11 00:33:03.713245 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:33:03.726310 systemd[1]: Reload requested from client PID 1396 ('systemctl') (unit ensure-sysext.service)... Sep 11 00:33:03.726319 systemd[1]: Reloading... Sep 11 00:33:03.736420 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 11 00:33:03.736445 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 11 00:33:03.736613 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 11 00:33:03.736808 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 11 00:33:03.737994 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 11 00:33:03.738655 systemd-tmpfiles[1397]: ACLs are not supported, ignoring. Sep 11 00:33:03.738777 systemd-tmpfiles[1397]: ACLs are not supported, ignoring. Sep 11 00:33:03.742680 systemd-tmpfiles[1397]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:33:03.742691 systemd-tmpfiles[1397]: Skipping /boot Sep 11 00:33:03.756443 systemd-tmpfiles[1397]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:33:03.756451 systemd-tmpfiles[1397]: Skipping /boot Sep 11 00:33:03.783193 zram_generator::config[1423]: No configuration found. Sep 11 00:33:03.865105 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 11 00:33:03.873892 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 11 00:33:03.923634 systemd[1]: Reloading finished in 197 ms. Sep 11 00:33:03.943296 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 11 00:33:03.946749 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:33:03.954347 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:33:03.955973 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 11 00:33:03.957412 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 11 00:33:03.959246 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 11 00:33:03.960691 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:33:03.964806 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 11 00:33:03.971259 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:33:03.973953 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:33:03.975443 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:33:03.978130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:33:03.978343 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:33:03.978426 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 11 00:33:03.978497 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:33:03.980597 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:33:03.980717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:33:03.980786 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:33:03.982346 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 11 00:33:03.983215 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:33:03.985219 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:33:03.988372 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 11 00:33:03.988606 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:33:03.988691 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:33:03.988835 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:33:03.989347 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 11 00:33:03.989513 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:33:03.993471 systemd[1]: Finished ensure-sysext.service. Sep 11 00:33:03.997735 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 11 00:33:04.002665 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 11 00:33:04.007207 systemd-udevd[1486]: Using default interface naming scheme 'v255'. Sep 11 00:33:04.013004 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:33:04.013172 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:33:04.013504 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:33:04.013623 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:33:04.013864 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 00:33:04.014011 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:33:04.014837 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:33:04.014890 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 00:33:04.040810 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 11 00:33:04.168516 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 11 00:33:04.168756 systemd[1]: Reached target time-set.target - System Time Set.
Sep 11 00:33:04.170337 systemd-resolved[1485]: Positive Trust Anchors: Sep 11 00:33:04.170348 systemd-resolved[1485]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:33:04.170382 systemd-resolved[1485]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 00:33:04.171307 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 11 00:33:04.173513 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 11 00:33:04.177932 systemd-resolved[1485]: Defaulting to hostname 'linux'. Sep 11 00:33:04.182093 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:33:04.182306 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:33:04.184948 augenrules[1527]: No rules Sep 11 00:33:04.186002 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:33:04.186331 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:33:04.192727 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 11 00:33:04.194430 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:33:04.197282 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:33:04.241065 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 11 00:33:04.242465 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 11 00:33:04.242500 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:33:04.243316 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 11 00:33:04.243484 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 11 00:33:04.243609 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 11 00:33:04.243823 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 11 00:33:04.244136 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 11 00:33:04.244368 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 11 00:33:04.244554 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 11 00:33:04.244586 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:33:04.244749 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:33:04.245592 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 11 00:33:04.247325 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 11 00:33:04.250142 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 11 00:33:04.250572 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 11 00:33:04.250811 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 11 00:33:04.255603 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Sep 11 00:33:04.255994 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 11 00:33:04.257020 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 11 00:33:04.258085 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 00:33:04.258584 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:33:04.258718 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:33:04.258736 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:33:04.260264 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 11 00:33:04.264380 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 11 00:33:04.268321 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 11 00:33:04.273311 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 11 00:33:04.273432 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 11 00:33:04.275603 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 11 00:33:04.281714 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 11 00:33:04.284287 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 11 00:33:04.290286 jq[1565]: false Sep 11 00:33:04.292486 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 11 00:33:04.297299 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 11 00:33:04.302975 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 11 00:33:04.304786 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 11 00:33:04.305367 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 11 00:33:04.306763 systemd[1]: Starting update-engine.service - Update Engine... Sep 11 00:33:04.310145 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 11 00:33:04.313243 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Sep 11 00:33:04.315219 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 11 00:33:04.315616 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 11 00:33:04.318420 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 11 00:33:04.325921 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 11 00:33:04.335917 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Refreshing passwd entry cache Sep 11 00:33:04.335921 oslogin_cache_refresh[1567]: Refreshing passwd entry cache Sep 11 00:33:04.339477 update_engine[1581]: I20250911 00:33:04.338099 1581 main.cc:92] Flatcar Update Engine starting Sep 11 00:33:04.338747 oslogin_cache_refresh[1567]: Failure getting users, quitting Sep 11 00:33:04.340543 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Failure getting users, quitting Sep 11 00:33:04.340543 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:33:04.340543 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Refreshing group entry cache Sep 11 00:33:04.338766 oslogin_cache_refresh[1567]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Sep 11 00:33:04.340355 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 11 00:33:04.338811 oslogin_cache_refresh[1567]: Refreshing group entry cache Sep 11 00:33:04.340516 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 11 00:33:04.352302 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Failure getting groups, quitting Sep 11 00:33:04.352302 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:33:04.351617 oslogin_cache_refresh[1567]: Failure getting groups, quitting Sep 11 00:33:04.351625 oslogin_cache_refresh[1567]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:33:04.362434 jq[1582]: true Sep 11 00:33:04.363505 tar[1587]: linux-amd64/helm Sep 11 00:33:04.367033 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 11 00:33:04.367257 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 11 00:33:04.369871 systemd-networkd[1538]: lo: Link UP Sep 11 00:33:04.370085 systemd-networkd[1538]: lo: Gained carrier Sep 11 00:33:04.375340 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 11 00:33:04.375548 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 11 00:33:04.372544 systemd-networkd[1538]: Enumeration completed Sep 11 00:33:04.372662 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:33:04.372845 systemd-networkd[1538]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Sep 11 00:33:04.373279 systemd[1]: Reached target network.target - Network. Sep 11 00:33:04.376432 systemd[1]: Starting containerd.service - containerd container runtime... 
Sep 11 00:33:04.377235 systemd-networkd[1538]: ens192: Link UP Sep 11 00:33:04.377925 systemd-networkd[1538]: ens192: Gained carrier Sep 11 00:33:04.382826 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 11 00:33:04.386039 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection. Sep 11 00:33:04.387219 extend-filesystems[1566]: Found /dev/sda6 Sep 11 00:33:04.388688 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 11 00:33:04.392438 extend-filesystems[1566]: Found /dev/sda9 Sep 11 00:33:04.395490 extend-filesystems[1566]: Checking size of /dev/sda9 Sep 11 00:33:04.407773 systemd[1]: motdgen.service: Deactivated successfully. Sep 11 00:33:04.408549 jq[1601]: true Sep 11 00:33:04.409229 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 11 00:33:04.418404 dbus-daemon[1563]: [system] SELinux support is enabled Sep 11 00:33:04.418514 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 11 00:33:04.421520 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 11 00:33:04.421542 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 11 00:33:04.421696 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 11 00:33:04.421709 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 11 00:33:04.437613 update_engine[1581]: I20250911 00:33:04.436237 1581 update_check_scheduler.cc:74] Next update check in 5m49s Sep 11 00:33:04.437709 extend-filesystems[1566]: Old size kept for /dev/sda9 Sep 11 00:33:04.436454 systemd[1]: Started update-engine.service - Update Engine. Sep 11 00:33:04.438279 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 11 00:33:04.438521 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 11 00:33:04.444489 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 11 00:33:04.449313 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 11 00:33:04.449791 (ntainerd)[1625]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 11 00:33:04.499414 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Sep 11 00:33:04.502455 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Sep 11 00:33:04.602425 unknown[1639]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Sep 11 00:33:04.603975 unknown[1639]: Core dump limit set to -1 Sep 11 00:33:04.612321 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 11 00:33:04.613016 bash[1635]: Updated "/home/core/.ssh/authorized_keys" Sep 11 00:33:04.613447 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 11 00:33:04.614216 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 11 00:33:04.622999 sshd_keygen[1606]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 11 00:33:04.620193 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. 
Sep 11 00:33:04.646191 kernel: mousedev: PS/2 mouse device common for all mice Sep 11 00:33:04.689687 systemd-logind[1580]: New seat seat0. Sep 11 00:33:04.690650 systemd[1]: Started systemd-logind.service - User Login Management. Sep 11 00:33:04.725986 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 11 00:33:04.727889 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 11 00:33:04.746102 systemd[1]: issuegen.service: Deactivated successfully. Sep 11 00:33:04.746503 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 11 00:33:04.747394 kernel: ACPI: button: Power Button [PWRF] Sep 11 00:33:04.750300 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 11 00:33:04.760484 locksmithd[1630]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 11 00:33:04.779973 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 11 00:33:04.783355 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 11 00:33:04.787411 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 11 00:33:04.787833 systemd[1]: Reached target getty.target - Login Prompts. Sep 11 00:33:04.901229 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Sep 11 00:33:05.016560 containerd[1625]: time="2025-09-11T00:33:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 11 00:33:05.017789 containerd[1625]: time="2025-09-11T00:33:05.017766421Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 11 00:33:05.034156 (udev-worker)[1542]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. 
Sep 11 00:33:05.036312 systemd-logind[1580]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 11 00:33:05.037077 containerd[1625]: time="2025-09-11T00:33:05.037051868Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.271µs" Sep 11 00:33:05.037077 containerd[1625]: time="2025-09-11T00:33:05.037073030Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 11 00:33:05.037122 containerd[1625]: time="2025-09-11T00:33:05.037086852Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 11 00:33:05.037209 containerd[1625]: time="2025-09-11T00:33:05.037193805Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 11 00:33:05.037209 containerd[1625]: time="2025-09-11T00:33:05.037207544Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 11 00:33:05.037254 containerd[1625]: time="2025-09-11T00:33:05.037225067Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:33:05.037283 containerd[1625]: time="2025-09-11T00:33:05.037269774Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:33:05.037301 containerd[1625]: time="2025-09-11T00:33:05.037280753Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:33:05.037521 containerd[1625]: time="2025-09-11T00:33:05.037445835Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 11 00:33:05.037521 containerd[1625]: time="2025-09-11T00:33:05.037460009Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:33:05.037521 containerd[1625]: time="2025-09-11T00:33:05.037467390Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:33:05.037521 containerd[1625]: time="2025-09-11T00:33:05.037474389Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 11 00:33:05.037521 containerd[1625]: time="2025-09-11T00:33:05.037516903Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 11 00:33:05.037669 containerd[1625]: time="2025-09-11T00:33:05.037655164Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:33:05.037689 containerd[1625]: time="2025-09-11T00:33:05.037677188Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:33:05.037689 containerd[1625]: time="2025-09-11T00:33:05.037685105Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 11 00:33:05.037716 containerd[1625]: time="2025-09-11T00:33:05.037700447Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 11 00:33:05.037859 containerd[1625]: time="2025-09-11T00:33:05.037846034Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 11 00:33:05.037893 containerd[1625]: time="2025-09-11T00:33:05.037882752Z" level=info msg="metadata content store policy set" policy=shared
Sep 11 00:33:05.049689 systemd-logind[1580]: Watching system buttons on /dev/input/event2 (Power Button) Sep 11 00:33:05.054429 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Sep 11 00:33:05.056343 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 11 00:33:05.063365 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:33:05.096563 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 11 00:33:05.338390 containerd[1625]: time="2025-09-11T00:33:05.338353995Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 11 00:33:05.338467 containerd[1625]: time="2025-09-11T00:33:05.338435012Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 11 00:33:05.338467 containerd[1625]: time="2025-09-11T00:33:05.338448163Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 11 00:33:05.338510 containerd[1625]: time="2025-09-11T00:33:05.338469075Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 11 00:33:05.338510 containerd[1625]: time="2025-09-11T00:33:05.338477622Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 11 00:33:05.338510 containerd[1625]: time="2025-09-11T00:33:05.338483635Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 11 00:33:05.338510 containerd[1625]: time="2025-09-11T00:33:05.338493615Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 11 00:33:05.338510 containerd[1625]: time="2025-09-11T00:33:05.338500913Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 11 00:33:05.338510 containerd[1625]: time="2025-09-11T00:33:05.338507498Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 11 00:33:05.338586 containerd[1625]: time="2025-09-11T00:33:05.338513067Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 11 00:33:05.338586 containerd[1625]: time="2025-09-11T00:33:05.338519536Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 11 00:33:05.338586 containerd[1625]: time="2025-09-11T00:33:05.338545510Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 11 00:33:05.338688 containerd[1625]: time="2025-09-11T00:33:05.338644570Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 11 00:33:05.338688 containerd[1625]: time="2025-09-11T00:33:05.338659647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 11 00:33:05.338688 containerd[1625]: time="2025-09-11T00:33:05.338668630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 11 00:33:05.338688 containerd[1625]: time="2025-09-11T00:33:05.338674976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 11 00:33:05.338688 containerd[1625]: time="2025-09-11T00:33:05.338682066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 11 00:33:05.338763 containerd[1625]: time="2025-09-11T00:33:05.338697538Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 11 00:33:05.338763 containerd[1625]: time="2025-09-11T00:33:05.338707555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 11 00:33:05.338763 containerd[1625]: time="2025-09-11T00:33:05.338713576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 11 00:33:05.338763 containerd[1625]: time="2025-09-11T00:33:05.338719853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 11 00:33:05.338763 containerd[1625]: time="2025-09-11T00:33:05.338725678Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 11 00:33:05.338763 containerd[1625]: time="2025-09-11T00:33:05.338732762Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 11 00:33:05.338843 containerd[1625]: time="2025-09-11T00:33:05.338782810Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 11 00:33:05.338843 containerd[1625]: time="2025-09-11T00:33:05.338792046Z" level=info msg="Start snapshots syncer" Sep 11 00:33:05.338843 containerd[1625]: time="2025-09-11T00:33:05.338807767Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 11 00:33:05.339054 containerd[1625]: time="2025-09-11T00:33:05.339011327Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 11 00:33:05.339127 containerd[1625]: time="2025-09-11T00:33:05.339063652Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 11 00:33:05.339144 containerd[1625]: time="2025-09-11T00:33:05.339123230Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 00:33:05.339375 containerd[1625]: time="2025-09-11T00:33:05.339232272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 00:33:05.339375 containerd[1625]: time="2025-09-11T00:33:05.339252510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 00:33:05.339375 containerd[1625]: time="2025-09-11T00:33:05.339259098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 00:33:05.339375 containerd[1625]: time="2025-09-11T00:33:05.339265377Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 00:33:05.339375 containerd[1625]: time="2025-09-11T00:33:05.339282178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 00:33:05.339375 containerd[1625]: time="2025-09-11T00:33:05.339289331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 00:33:05.339375 containerd[1625]: time="2025-09-11T00:33:05.339295823Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 00:33:05.339375 containerd[1625]: time="2025-09-11T00:33:05.339310692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 11 00:33:05.339375 containerd[1625]: time="2025-09-11T00:33:05.339316635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 11 00:33:05.339375 containerd[1625]: time="2025-09-11T00:33:05.339322638Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 00:33:05.339518 containerd[1625]: time="2025-09-11T00:33:05.339381928Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:33:05.339518 containerd[1625]: time="2025-09-11T00:33:05.339395070Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:33:05.339518 containerd[1625]: time="2025-09-11T00:33:05.339400264Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:33:05.339518 containerd[1625]: time="2025-09-11T00:33:05.339405614Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:33:05.339518 containerd[1625]: time="2025-09-11T00:33:05.339410089Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 00:33:05.339518 containerd[1625]: time="2025-09-11T00:33:05.339418240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 00:33:05.340193 containerd[1625]: time="2025-09-11T00:33:05.340178665Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 00:33:05.340237 containerd[1625]: time="2025-09-11T00:33:05.340196921Z" level=info msg="runtime interface created" Sep 11 00:33:05.340237 containerd[1625]: time="2025-09-11T00:33:05.340201256Z" level=info msg="created NRI interface" Sep 11 00:33:05.340237 containerd[1625]: time="2025-09-11T00:33:05.340206888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 00:33:05.340237 containerd[1625]: time="2025-09-11T00:33:05.340215743Z" level=info msg="Connect containerd service" Sep 11 00:33:05.340237 containerd[1625]: time="2025-09-11T00:33:05.340231354Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 11 00:33:05.340827 
containerd[1625]: time="2025-09-11T00:33:05.340797624Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:33:05.387188 tar[1587]: linux-amd64/LICENSE Sep 11 00:33:05.387188 tar[1587]: linux-amd64/README.md Sep 11 00:33:05.397089 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 11 00:33:05.459184 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:33:05.460014 containerd[1625]: time="2025-09-11T00:33:05.459989606Z" level=info msg="Start subscribing containerd event" Sep 11 00:33:05.460148 containerd[1625]: time="2025-09-11T00:33:05.460024173Z" level=info msg="Start recovering state" Sep 11 00:33:05.460233 containerd[1625]: time="2025-09-11T00:33:05.460219742Z" level=info msg="Start event monitor" Sep 11 00:33:05.460254 containerd[1625]: time="2025-09-11T00:33:05.460235889Z" level=info msg="Start cni network conf syncer for default" Sep 11 00:33:05.460254 containerd[1625]: time="2025-09-11T00:33:05.460244626Z" level=info msg="Start streaming server" Sep 11 00:33:05.460254 containerd[1625]: time="2025-09-11T00:33:05.460249610Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 00:33:05.460254 containerd[1625]: time="2025-09-11T00:33:05.460253444Z" level=info msg="runtime interface starting up..." Sep 11 00:33:05.460335 containerd[1625]: time="2025-09-11T00:33:05.460256444Z" level=info msg="starting plugins..." Sep 11 00:33:05.460335 containerd[1625]: time="2025-09-11T00:33:05.460265846Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 00:33:05.460335 containerd[1625]: time="2025-09-11T00:33:05.460119259Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 11 00:33:05.460394 containerd[1625]: time="2025-09-11T00:33:05.460355922Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 11 00:33:05.460394 containerd[1625]: time="2025-09-11T00:33:05.460385939Z" level=info msg="containerd successfully booted in 0.444058s" Sep 11 00:33:05.461204 systemd[1]: Started containerd.service - containerd container runtime. Sep 11 00:33:06.196331 systemd-networkd[1538]: ens192: Gained IPv6LL Sep 11 00:33:06.196668 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection. Sep 11 00:33:06.197527 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 00:33:06.198225 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 00:33:06.199320 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Sep 11 00:33:06.208035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:33:06.210288 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 00:33:06.254513 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 11 00:33:06.255977 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 11 00:33:06.256536 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Sep 11 00:33:06.257640 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 11 00:33:07.771916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:33:07.772293 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 11 00:33:07.773145 systemd[1]: Startup finished in 2.761s (kernel) + 7.546s (initrd) + 5.667s (userspace) = 15.975s. Sep 11 00:33:07.777231 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection. 
Sep 11 00:33:07.777458 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection. Sep 11 00:33:07.779630 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:33:07.816031 login[1680]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 11 00:33:07.817222 login[1681]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 11 00:33:07.822158 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 11 00:33:07.822792 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 11 00:33:07.828621 systemd-logind[1580]: New session 2 of user core. Sep 11 00:33:07.831064 systemd-logind[1580]: New session 1 of user core. Sep 11 00:33:07.838626 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 00:33:07.840761 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 11 00:33:07.859288 (systemd)[1810]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 00:33:07.861691 systemd-logind[1580]: New session c1 of user core. Sep 11 00:33:07.960347 systemd[1810]: Queued start job for default target default.target. Sep 11 00:33:07.968009 systemd[1810]: Created slice app.slice - User Application Slice. Sep 11 00:33:07.968030 systemd[1810]: Reached target paths.target - Paths. Sep 11 00:33:07.968057 systemd[1810]: Reached target timers.target - Timers. Sep 11 00:33:07.971217 systemd[1810]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 11 00:33:07.975573 systemd[1810]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 00:33:07.975607 systemd[1810]: Reached target sockets.target - Sockets. Sep 11 00:33:07.975631 systemd[1810]: Reached target basic.target - Basic System. 
Sep 11 00:33:07.975653 systemd[1810]: Reached target default.target - Main User Target. Sep 11 00:33:07.975670 systemd[1810]: Startup finished in 110ms. Sep 11 00:33:07.975751 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 11 00:33:07.981266 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 11 00:33:07.981968 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 11 00:33:08.956624 kubelet[1803]: E0911 00:33:08.956568 1803 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:33:08.958593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:33:08.958690 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:33:08.958909 systemd[1]: kubelet.service: Consumed 657ms CPU time, 267.2M memory peak. Sep 11 00:33:19.209149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 11 00:33:19.210526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:33:19.571743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:33:19.580404 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:33:19.658673 kubelet[1854]: E0911 00:33:19.658640 1854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:33:19.661226 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:33:19.661376 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:33:19.661712 systemd[1]: kubelet.service: Consumed 110ms CPU time, 109.4M memory peak. Sep 11 00:33:29.911857 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 11 00:33:29.913606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:33:30.266927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:33:30.269746 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:33:30.303313 kubelet[1869]: E0911 00:33:30.303267 1869 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:33:30.304613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:33:30.304742 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:33:30.305045 systemd[1]: kubelet.service: Consumed 100ms CPU time, 110.3M memory peak. 
Sep 11 00:33:34.738640 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 11 00:33:34.739785 systemd[1]: Started sshd@0-139.178.70.101:22-139.178.89.65:50098.service - OpenSSH per-connection server daemon (139.178.89.65:50098). Sep 11 00:33:34.848664 sshd[1876]: Accepted publickey for core from 139.178.89.65 port 50098 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE Sep 11 00:33:34.849583 sshd-session[1876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:34.852267 systemd-logind[1580]: New session 3 of user core. Sep 11 00:33:34.859518 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 11 00:33:34.914261 systemd[1]: Started sshd@1-139.178.70.101:22-139.178.89.65:50104.service - OpenSSH per-connection server daemon (139.178.89.65:50104). Sep 11 00:33:34.957426 sshd[1881]: Accepted publickey for core from 139.178.89.65 port 50104 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE Sep 11 00:33:34.957751 sshd-session[1881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:34.960886 systemd-logind[1580]: New session 4 of user core. Sep 11 00:33:34.968324 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 11 00:33:35.018529 sshd[1883]: Connection closed by 139.178.89.65 port 50104 Sep 11 00:33:35.019358 sshd-session[1881]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:35.030675 systemd[1]: sshd@1-139.178.70.101:22-139.178.89.65:50104.service: Deactivated successfully. Sep 11 00:33:35.032309 systemd[1]: session-4.scope: Deactivated successfully. Sep 11 00:33:35.033124 systemd-logind[1580]: Session 4 logged out. Waiting for processes to exit. Sep 11 00:33:35.034681 systemd-logind[1580]: Removed session 4. Sep 11 00:33:35.035804 systemd[1]: Started sshd@2-139.178.70.101:22-139.178.89.65:50116.service - OpenSSH per-connection server daemon (139.178.89.65:50116). 
Sep 11 00:33:35.081053 sshd[1889]: Accepted publickey for core from 139.178.89.65 port 50116 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE Sep 11 00:33:35.082118 sshd-session[1889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:35.086138 systemd-logind[1580]: New session 5 of user core. Sep 11 00:33:35.093364 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 11 00:33:35.143403 sshd[1891]: Connection closed by 139.178.89.65 port 50116 Sep 11 00:33:35.143295 sshd-session[1889]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:35.152543 systemd[1]: sshd@2-139.178.70.101:22-139.178.89.65:50116.service: Deactivated successfully. Sep 11 00:33:35.153847 systemd[1]: session-5.scope: Deactivated successfully. Sep 11 00:33:35.154531 systemd-logind[1580]: Session 5 logged out. Waiting for processes to exit. Sep 11 00:33:35.156057 systemd[1]: Started sshd@3-139.178.70.101:22-139.178.89.65:50132.service - OpenSSH per-connection server daemon (139.178.89.65:50132). Sep 11 00:33:35.157591 systemd-logind[1580]: Removed session 5. Sep 11 00:33:35.206581 sshd[1897]: Accepted publickey for core from 139.178.89.65 port 50132 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE Sep 11 00:33:35.207736 sshd-session[1897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:35.212265 systemd-logind[1580]: New session 6 of user core. Sep 11 00:33:35.220400 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 11 00:33:35.270251 sshd[1899]: Connection closed by 139.178.89.65 port 50132 Sep 11 00:33:35.270823 sshd-session[1897]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:35.282497 systemd[1]: sshd@3-139.178.70.101:22-139.178.89.65:50132.service: Deactivated successfully. Sep 11 00:33:35.283382 systemd[1]: session-6.scope: Deactivated successfully. Sep 11 00:33:35.284302 systemd-logind[1580]: Session 6 logged out. 
Waiting for processes to exit. Sep 11 00:33:35.287332 systemd[1]: Started sshd@4-139.178.70.101:22-139.178.89.65:50142.service - OpenSSH per-connection server daemon (139.178.89.65:50142). Sep 11 00:33:35.288257 systemd-logind[1580]: Removed session 6. Sep 11 00:33:35.329626 sshd[1905]: Accepted publickey for core from 139.178.89.65 port 50142 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE Sep 11 00:33:35.330556 sshd-session[1905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:35.334202 systemd-logind[1580]: New session 7 of user core. Sep 11 00:33:35.342301 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 11 00:33:35.404136 sudo[1908]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 11 00:33:35.404406 sudo[1908]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:33:35.415087 sudo[1908]: pam_unix(sudo:session): session closed for user root Sep 11 00:33:35.416033 sshd[1907]: Connection closed by 139.178.89.65 port 50142 Sep 11 00:33:35.417307 sshd-session[1905]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:35.427154 systemd[1]: sshd@4-139.178.70.101:22-139.178.89.65:50142.service: Deactivated successfully. Sep 11 00:33:35.428452 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 00:33:35.429211 systemd-logind[1580]: Session 7 logged out. Waiting for processes to exit. Sep 11 00:33:35.430997 systemd-logind[1580]: Removed session 7. Sep 11 00:33:35.432518 systemd[1]: Started sshd@5-139.178.70.101:22-139.178.89.65:50158.service - OpenSSH per-connection server daemon (139.178.89.65:50158). 
Sep 11 00:33:35.473612 sshd[1914]: Accepted publickey for core from 139.178.89.65 port 50158 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE Sep 11 00:33:35.474680 sshd-session[1914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:35.477853 systemd-logind[1580]: New session 8 of user core. Sep 11 00:33:35.486380 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 11 00:33:35.534993 sudo[1918]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 11 00:33:35.535400 sudo[1918]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:33:35.538415 sudo[1918]: pam_unix(sudo:session): session closed for user root Sep 11 00:33:35.541868 sudo[1917]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 11 00:33:35.542330 sudo[1917]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:33:35.548810 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:33:35.587854 augenrules[1940]: No rules Sep 11 00:33:35.588722 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:33:35.588968 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:33:35.589832 sudo[1917]: pam_unix(sudo:session): session closed for user root Sep 11 00:33:35.591137 sshd[1916]: Connection closed by 139.178.89.65 port 50158 Sep 11 00:33:35.592132 sshd-session[1914]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:35.598617 systemd[1]: sshd@5-139.178.70.101:22-139.178.89.65:50158.service: Deactivated successfully. Sep 11 00:33:35.600518 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 00:33:35.601154 systemd-logind[1580]: Session 8 logged out. Waiting for processes to exit. 
Sep 11 00:33:35.603286 systemd[1]: Started sshd@6-139.178.70.101:22-139.178.89.65:50170.service - OpenSSH per-connection server daemon (139.178.89.65:50170). Sep 11 00:33:35.604621 systemd-logind[1580]: Removed session 8. Sep 11 00:33:35.644086 sshd[1949]: Accepted publickey for core from 139.178.89.65 port 50170 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE Sep 11 00:33:35.645180 sshd-session[1949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:35.648855 systemd-logind[1580]: New session 9 of user core. Sep 11 00:33:35.665398 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 11 00:33:35.714081 sudo[1952]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 11 00:33:35.714249 sudo[1952]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:33:36.181464 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 11 00:33:36.202608 (dockerd)[1969]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 11 00:33:36.562214 dockerd[1969]: time="2025-09-11T00:33:36.562112134Z" level=info msg="Starting up" Sep 11 00:33:36.563638 dockerd[1969]: time="2025-09-11T00:33:36.563621429Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 11 00:33:36.938903 dockerd[1969]: time="2025-09-11T00:33:36.938827602Z" level=info msg="Loading containers: start." Sep 11 00:33:37.008175 kernel: Initializing XFRM netlink socket Sep 11 00:33:37.329571 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection. Sep 11 00:33:37.358271 systemd-networkd[1538]: docker0: Link UP Sep 11 00:33:37.360294 dockerd[1969]: time="2025-09-11T00:33:37.360268786Z" level=info msg="Loading containers: done." Sep 11 00:35:10.314460 systemd-resolved[1485]: Clock change detected. Flushing caches. 
Sep 11 00:35:10.314588 systemd-timesyncd[1497]: Contacted time server 23.186.168.130:123 (2.flatcar.pool.ntp.org). Sep 11 00:35:10.314630 systemd-timesyncd[1497]: Initial clock synchronization to Thu 2025-09-11 00:35:10.314382 UTC. Sep 11 00:35:10.323675 dockerd[1969]: time="2025-09-11T00:35:10.323441399Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 11 00:35:10.323675 dockerd[1969]: time="2025-09-11T00:35:10.323494695Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 11 00:35:10.323675 dockerd[1969]: time="2025-09-11T00:35:10.323559982Z" level=info msg="Initializing buildkit" Sep 11 00:35:10.335681 dockerd[1969]: time="2025-09-11T00:35:10.335656679Z" level=info msg="Completed buildkit initialization" Sep 11 00:35:10.338761 dockerd[1969]: time="2025-09-11T00:35:10.338726346Z" level=info msg="Daemon has completed initialization" Sep 11 00:35:10.338927 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 11 00:35:10.340008 dockerd[1969]: time="2025-09-11T00:35:10.339617995Z" level=info msg="API listen on /run/docker.sock" Sep 11 00:35:10.583524 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3195879850-merged.mount: Deactivated successfully. Sep 11 00:35:11.220716 containerd[1625]: time="2025-09-11T00:35:11.220690326Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 11 00:35:12.071948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3395874392.mount: Deactivated successfully. 
Sep 11 00:35:13.011382 containerd[1625]: time="2025-09-11T00:35:13.011350194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:13.012009 containerd[1625]: time="2025-09-11T00:35:13.011993147Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 11 00:35:13.012426 containerd[1625]: time="2025-09-11T00:35:13.012410267Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:13.014232 containerd[1625]: time="2025-09-11T00:35:13.014215245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:13.014721 containerd[1625]: time="2025-09-11T00:35:13.014705180Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.793990924s" Sep 11 00:35:13.014745 containerd[1625]: time="2025-09-11T00:35:13.014724522Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 11 00:35:13.015198 containerd[1625]: time="2025-09-11T00:35:13.015179093Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 11 00:35:13.457392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Sep 11 00:35:13.461243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:35:13.645163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:35:13.655409 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:35:13.684376 kubelet[2237]: E0911 00:35:13.684329 2237 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:35:13.686265 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:35:13.686375 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:35:13.686772 systemd[1]: kubelet.service: Consumed 99ms CPU time, 108.3M memory peak. 
Sep 11 00:35:14.530994 containerd[1625]: time="2025-09-11T00:35:14.530960304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:14.536275 containerd[1625]: time="2025-09-11T00:35:14.536252305Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 11 00:35:14.544253 containerd[1625]: time="2025-09-11T00:35:14.544225595Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:14.549951 containerd[1625]: time="2025-09-11T00:35:14.549916969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:14.550558 containerd[1625]: time="2025-09-11T00:35:14.550241850Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.535047544s" Sep 11 00:35:14.550558 containerd[1625]: time="2025-09-11T00:35:14.550267567Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 11 00:35:14.550679 containerd[1625]: time="2025-09-11T00:35:14.550662883Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 11 00:35:16.140110 containerd[1625]: time="2025-09-11T00:35:16.140039167Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:16.150111 containerd[1625]: time="2025-09-11T00:35:16.150051974Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 11 00:35:16.152717 containerd[1625]: time="2025-09-11T00:35:16.152686611Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:16.162432 containerd[1625]: time="2025-09-11T00:35:16.162378097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:16.162953 containerd[1625]: time="2025-09-11T00:35:16.162849081Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.612169333s" Sep 11 00:35:16.162953 containerd[1625]: time="2025-09-11T00:35:16.162869507Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 11 00:35:16.163210 containerd[1625]: time="2025-09-11T00:35:16.163195713Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 11 00:35:17.191458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1510976389.mount: Deactivated successfully. 
Sep 11 00:35:17.628294 containerd[1625]: time="2025-09-11T00:35:17.627924179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:17.630112 containerd[1625]: time="2025-09-11T00:35:17.630098391Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 11 00:35:17.638377 containerd[1625]: time="2025-09-11T00:35:17.638349383Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:17.646531 containerd[1625]: time="2025-09-11T00:35:17.646489769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:17.646841 containerd[1625]: time="2025-09-11T00:35:17.646726501Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.483477363s" Sep 11 00:35:17.646841 containerd[1625]: time="2025-09-11T00:35:17.646747551Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 11 00:35:17.647079 containerd[1625]: time="2025-09-11T00:35:17.647032268Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 11 00:35:18.380440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1195473781.mount: Deactivated successfully. 
Sep 11 00:35:19.661729 containerd[1625]: time="2025-09-11T00:35:19.661684614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:19.663530 containerd[1625]: time="2025-09-11T00:35:19.663504830Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 11 00:35:19.665254 containerd[1625]: time="2025-09-11T00:35:19.665150665Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:19.669714 containerd[1625]: time="2025-09-11T00:35:19.669671117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:19.671400 containerd[1625]: time="2025-09-11T00:35:19.670598910Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.023541266s" Sep 11 00:35:19.671400 containerd[1625]: time="2025-09-11T00:35:19.670625661Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 11 00:35:19.671639 containerd[1625]: time="2025-09-11T00:35:19.671615900Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 11 00:35:20.153306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564202523.mount: Deactivated successfully. 
Sep 11 00:35:20.157921 containerd[1625]: time="2025-09-11T00:35:20.157867684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:35:20.158602 containerd[1625]: time="2025-09-11T00:35:20.158569986Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 11 00:35:20.158994 containerd[1625]: time="2025-09-11T00:35:20.158974788Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:35:20.160138 containerd[1625]: time="2025-09-11T00:35:20.160111316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:35:20.160838 containerd[1625]: time="2025-09-11T00:35:20.160816976Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 489.12717ms" Sep 11 00:35:20.160871 containerd[1625]: time="2025-09-11T00:35:20.160839330Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 11 00:35:20.161202 containerd[1625]: time="2025-09-11T00:35:20.161162428Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 11 00:35:20.858842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1878213521.mount: Deactivated 
successfully. Sep 11 00:35:22.758856 update_engine[1581]: I20250911 00:35:22.758759 1581 update_attempter.cc:509] Updating boot flags... Sep 11 00:35:22.933441 containerd[1625]: time="2025-09-11T00:35:22.933404379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:22.935171 containerd[1625]: time="2025-09-11T00:35:22.935144014Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 11 00:35:22.935499 containerd[1625]: time="2025-09-11T00:35:22.935482067Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:22.938703 containerd[1625]: time="2025-09-11T00:35:22.938684892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:22.942452 containerd[1625]: time="2025-09-11T00:35:22.942325500Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.781144549s" Sep 11 00:35:22.942452 containerd[1625]: time="2025-09-11T00:35:22.942355186Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 11 00:35:23.707590 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 11 00:35:23.711190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 11 00:35:24.129165 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:35:24.130750 (kubelet)[2419]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:35:24.197631 kubelet[2419]: E0911 00:35:24.197588 2419 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:35:24.199268 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:35:24.199352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:35:24.199575 systemd[1]: kubelet.service: Consumed 107ms CPU time, 111M memory peak. Sep 11 00:35:25.138990 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:35:25.139341 systemd[1]: kubelet.service: Consumed 107ms CPU time, 111M memory peak. Sep 11 00:35:25.141272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:35:25.161345 systemd[1]: Reload requested from client PID 2433 ('systemctl') (unit session-9.scope)... Sep 11 00:35:25.161355 systemd[1]: Reloading... Sep 11 00:35:25.233199 zram_generator::config[2486]: No configuration found. Sep 11 00:35:25.287403 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:35:25.295429 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 11 00:35:25.365661 systemd[1]: Reloading finished in 204 ms. 
Sep 11 00:35:25.384182 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 11 00:35:25.384235 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 11 00:35:25.384431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:35:25.385683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:35:25.689645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:35:25.693339 (kubelet)[2544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:35:25.725052 kubelet[2544]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:35:25.725265 kubelet[2544]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 11 00:35:25.725294 kubelet[2544]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 11 00:35:25.725383 kubelet[2544]: I0911 00:35:25.725364 2544 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:35:26.030102 kubelet[2544]: I0911 00:35:26.029278 2544 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 11 00:35:26.030102 kubelet[2544]: I0911 00:35:26.029305 2544 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:35:26.030102 kubelet[2544]: I0911 00:35:26.029487 2544 server.go:934] "Client rotation is on, will bootstrap in background" Sep 11 00:35:26.094860 kubelet[2544]: I0911 00:35:26.094786 2544 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:35:26.098025 kubelet[2544]: E0911 00:35:26.097996 2544 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:35:26.111824 kubelet[2544]: I0911 00:35:26.111781 2544 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:35:26.116909 kubelet[2544]: I0911 00:35:26.116879 2544 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:35:26.118974 kubelet[2544]: I0911 00:35:26.118950 2544 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 11 00:35:26.119193 kubelet[2544]: I0911 00:35:26.119094 2544 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:35:26.119342 kubelet[2544]: I0911 00:35:26.119144 2544 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 11 00:35:26.119456 kubelet[2544]: I0911 00:35:26.119351 2544 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:35:26.119456 kubelet[2544]: I0911 00:35:26.119362 2544 container_manager_linux.go:300] "Creating device plugin manager" Sep 11 00:35:26.121595 kubelet[2544]: I0911 00:35:26.121558 2544 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:35:26.124504 kubelet[2544]: I0911 00:35:26.124440 2544 kubelet.go:408] "Attempting to sync node with API server" Sep 11 00:35:26.124504 kubelet[2544]: I0911 00:35:26.124461 2544 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:35:26.126302 kubelet[2544]: I0911 00:35:26.126154 2544 kubelet.go:314] "Adding apiserver pod source" Sep 11 00:35:26.126302 kubelet[2544]: I0911 00:35:26.126172 2544 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:35:26.131138 kubelet[2544]: I0911 00:35:26.131101 2544 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:35:26.134321 kubelet[2544]: W0911 00:35:26.134199 2544 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Sep 11 00:35:26.134321 kubelet[2544]: E0911 00:35:26.134236 2544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:35:26.134321 kubelet[2544]: W0911 00:35:26.134280 2544 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Node: Get "https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Sep 11 00:35:26.134321 kubelet[2544]: E0911 00:35:26.134298 2544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:35:26.143296 kubelet[2544]: I0911 00:35:26.143277 2544 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:35:26.146095 kubelet[2544]: W0911 00:35:26.146073 2544 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 11 00:35:26.146485 kubelet[2544]: I0911 00:35:26.146470 2544 server.go:1274] "Started kubelet" Sep 11 00:35:26.150965 kubelet[2544]: I0911 00:35:26.150867 2544 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:35:26.160101 kubelet[2544]: I0911 00:35:26.159674 2544 server.go:449] "Adding debug handlers to kubelet server" Sep 11 00:35:26.160101 kubelet[2544]: I0911 00:35:26.159849 2544 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:35:26.160101 kubelet[2544]: I0911 00:35:26.160043 2544 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:35:26.165220 kubelet[2544]: I0911 00:35:26.165208 2544 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:35:26.168386 kubelet[2544]: E0911 00:35:26.160157 2544 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.101:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.101:6443: 
connect: connection refused" event="&Event{ObjectMeta:{localhost.18641341639e4b1c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 00:35:26.146456348 +0000 UTC m=+0.450699683,LastTimestamp:2025-09-11 00:35:26.146456348 +0000 UTC m=+0.450699683,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 00:35:26.168453 kubelet[2544]: I0911 00:35:26.168446 2544 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:35:26.171769 kubelet[2544]: I0911 00:35:26.171760 2544 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 11 00:35:26.171942 kubelet[2544]: E0911 00:35:26.171933 2544 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:35:26.173777 kubelet[2544]: I0911 00:35:26.173721 2544 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 11 00:35:26.173777 kubelet[2544]: I0911 00:35:26.173759 2544 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:35:26.176653 kubelet[2544]: E0911 00:35:26.176603 2544 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="200ms" Sep 11 00:35:26.181778 kubelet[2544]: W0911 00:35:26.181434 2544 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Sep 11 00:35:26.181778 kubelet[2544]: E0911 00:35:26.181754 2544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:35:26.186120 kubelet[2544]: I0911 00:35:26.185460 2544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:35:26.189589 kubelet[2544]: I0911 00:35:26.189567 2544 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:35:26.189589 kubelet[2544]: I0911 00:35:26.189580 2544 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:35:26.189690 kubelet[2544]: I0911 00:35:26.189627 2544 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:35:26.189929 kubelet[2544]: I0911 00:35:26.189917 2544 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 11 00:35:26.189978 kubelet[2544]: I0911 00:35:26.189972 2544 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 11 00:35:26.190031 kubelet[2544]: I0911 00:35:26.190025 2544 kubelet.go:2321] "Starting kubelet main sync loop" Sep 11 00:35:26.190213 kubelet[2544]: E0911 00:35:26.190199 2544 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:35:26.197235 kubelet[2544]: W0911 00:35:26.197185 2544 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Sep 11 00:35:26.197569 kubelet[2544]: E0911 00:35:26.197549 2544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:35:26.216043 kubelet[2544]: E0911 00:35:26.216019 2544 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:35:26.218306 kubelet[2544]: I0911 00:35:26.218285 2544 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 11 00:35:26.218306 kubelet[2544]: I0911 00:35:26.218298 2544 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 11 00:35:26.218306 kubelet[2544]: I0911 00:35:26.218311 2544 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:35:26.219345 kubelet[2544]: I0911 00:35:26.219330 2544 policy_none.go:49] "None policy: Start" Sep 11 00:35:26.219692 kubelet[2544]: I0911 00:35:26.219683 2544 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 11 00:35:26.219780 kubelet[2544]: I0911 00:35:26.219773 2544 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:35:26.225924 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 11 00:35:26.240359 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 11 00:35:26.243393 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 11 00:35:26.262072 kubelet[2544]: I0911 00:35:26.262032 2544 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:35:26.262268 kubelet[2544]: I0911 00:35:26.262228 2544 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:35:26.262268 kubelet[2544]: I0911 00:35:26.262238 2544 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:35:26.262475 kubelet[2544]: I0911 00:35:26.262422 2544 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:35:26.264120 kubelet[2544]: E0911 00:35:26.264080 2544 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 00:35:26.300348 systemd[1]: Created slice kubepods-burstable-pod9453588a6212a23d89eb04f8f40fd6d3.slice - libcontainer container kubepods-burstable-pod9453588a6212a23d89eb04f8f40fd6d3.slice. Sep 11 00:35:26.312168 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 11 00:35:26.315849 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. 
Sep 11 00:35:26.363972 kubelet[2544]: I0911 00:35:26.363913 2544 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:35:26.364296 kubelet[2544]: E0911 00:35:26.364270 2544 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Sep 11 00:35:26.377989 kubelet[2544]: E0911 00:35:26.377960 2544 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="400ms" Sep 11 00:35:26.475729 kubelet[2544]: I0911 00:35:26.475623 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9453588a6212a23d89eb04f8f40fd6d3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9453588a6212a23d89eb04f8f40fd6d3\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:35:26.475924 kubelet[2544]: I0911 00:35:26.475830 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9453588a6212a23d89eb04f8f40fd6d3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9453588a6212a23d89eb04f8f40fd6d3\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:35:26.475924 kubelet[2544]: I0911 00:35:26.475854 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:35:26.475924 kubelet[2544]: I0911 00:35:26.475870 2544 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:35:26.476154 kubelet[2544]: I0911 00:35:26.476029 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9453588a6212a23d89eb04f8f40fd6d3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9453588a6212a23d89eb04f8f40fd6d3\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:35:26.476154 kubelet[2544]: I0911 00:35:26.476044 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:35:26.476154 kubelet[2544]: I0911 00:35:26.476058 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:35:26.476154 kubelet[2544]: I0911 00:35:26.476079 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:35:26.476154 kubelet[2544]: I0911 
00:35:26.476127 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:35:26.565873 kubelet[2544]: I0911 00:35:26.565715 2544 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:35:26.566379 kubelet[2544]: E0911 00:35:26.566314 2544 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Sep 11 00:35:26.612717 containerd[1625]: time="2025-09-11T00:35:26.612529899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9453588a6212a23d89eb04f8f40fd6d3,Namespace:kube-system,Attempt:0,}" Sep 11 00:35:26.621846 containerd[1625]: time="2025-09-11T00:35:26.621704044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 11 00:35:26.631908 containerd[1625]: time="2025-09-11T00:35:26.631812801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 11 00:35:26.760377 containerd[1625]: time="2025-09-11T00:35:26.760315441Z" level=info msg="connecting to shim 36d7c5b41084dcb12e11b5bf37fa21bf6be7b1db2f06d8ae204b33c3bf0b9ff1" address="unix:///run/containerd/s/31ab849ae0858b764d73b241ec44ec3e87faa94512ac930cb79af59cb4b02d8a" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:35:26.760873 containerd[1625]: time="2025-09-11T00:35:26.760853469Z" level=info msg="connecting to shim 498d09dce3208cef388c59768a6467f38900fbd7aac0f9ce0e2709572941b6f5" 
address="unix:///run/containerd/s/6e4438572633b78832e7ffeac68405e5fcb44166e7ecdc84d0f574e27d9f3955" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:35:26.766284 containerd[1625]: time="2025-09-11T00:35:26.766258394Z" level=info msg="connecting to shim 88a3f210a3db1d9fa81d70bd9f22604c76b1869dd6fb231996cc5bffde03499e" address="unix:///run/containerd/s/de31153314a30a7fd14576c32e574f1a4a01d1180972be1138aadc5fdbb862dc" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:35:26.779048 kubelet[2544]: E0911 00:35:26.779018 2544 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="800ms" Sep 11 00:35:26.924219 systemd[1]: Started cri-containerd-88a3f210a3db1d9fa81d70bd9f22604c76b1869dd6fb231996cc5bffde03499e.scope - libcontainer container 88a3f210a3db1d9fa81d70bd9f22604c76b1869dd6fb231996cc5bffde03499e. Sep 11 00:35:26.929237 systemd[1]: Started cri-containerd-36d7c5b41084dcb12e11b5bf37fa21bf6be7b1db2f06d8ae204b33c3bf0b9ff1.scope - libcontainer container 36d7c5b41084dcb12e11b5bf37fa21bf6be7b1db2f06d8ae204b33c3bf0b9ff1. Sep 11 00:35:26.931446 systemd[1]: Started cri-containerd-498d09dce3208cef388c59768a6467f38900fbd7aac0f9ce0e2709572941b6f5.scope - libcontainer container 498d09dce3208cef388c59768a6467f38900fbd7aac0f9ce0e2709572941b6f5. 
Sep 11 00:35:26.993137 kubelet[2544]: I0911 00:35:26.993122 2544 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:35:27.089744 kubelet[2544]: E0911 00:35:26.993456 2544 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Sep 11 00:35:27.089744 kubelet[2544]: W0911 00:35:27.040871 2544 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Sep 11 00:35:27.089744 kubelet[2544]: E0911 00:35:27.040902 2544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:35:27.098566 containerd[1625]: time="2025-09-11T00:35:27.098537276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"498d09dce3208cef388c59768a6467f38900fbd7aac0f9ce0e2709572941b6f5\"" Sep 11 00:35:27.100478 containerd[1625]: time="2025-09-11T00:35:27.100454678Z" level=info msg="CreateContainer within sandbox \"498d09dce3208cef388c59768a6467f38900fbd7aac0f9ce0e2709572941b6f5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 00:35:27.139816 containerd[1625]: time="2025-09-11T00:35:27.139744956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"88a3f210a3db1d9fa81d70bd9f22604c76b1869dd6fb231996cc5bffde03499e\"" Sep 11 00:35:27.141891 containerd[1625]: time="2025-09-11T00:35:27.141870191Z" level=info msg="CreateContainer within sandbox \"88a3f210a3db1d9fa81d70bd9f22604c76b1869dd6fb231996cc5bffde03499e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 00:35:27.167231 kubelet[2544]: W0911 00:35:27.167169 2544 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Sep 11 00:35:27.167231 kubelet[2544]: E0911 00:35:27.167216 2544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:35:27.187677 containerd[1625]: time="2025-09-11T00:35:27.187574216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9453588a6212a23d89eb04f8f40fd6d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"36d7c5b41084dcb12e11b5bf37fa21bf6be7b1db2f06d8ae204b33c3bf0b9ff1\"" Sep 11 00:35:27.189604 containerd[1625]: time="2025-09-11T00:35:27.189582623Z" level=info msg="CreateContainer within sandbox \"36d7c5b41084dcb12e11b5bf37fa21bf6be7b1db2f06d8ae204b33c3bf0b9ff1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 00:35:27.254185 kubelet[2544]: W0911 00:35:27.254057 2544 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Sep 11 00:35:27.254322 
kubelet[2544]: E0911 00:35:27.254205 2544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:35:27.254322 kubelet[2544]: W0911 00:35:27.254142 2544 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Sep 11 00:35:27.254322 kubelet[2544]: E0911 00:35:27.254230 2544 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:35:27.309103 containerd[1625]: time="2025-09-11T00:35:27.308724593Z" level=info msg="Container 9fdb402f1b8e6b10fc5e7da6ecbd31243d58f659eed1575481962de8acacdb8d: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:35:27.311319 containerd[1625]: time="2025-09-11T00:35:27.311292882Z" level=info msg="Container 08d0973364a5a2a42a03a5175b3d162c6069b4a65b31a2c065ae86a0b203677e: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:35:27.311570 containerd[1625]: time="2025-09-11T00:35:27.311554065Z" level=info msg="Container 74132974e8da2f6dc7e597d0ed66e75d6881d986239822f61e7aeb86a887327e: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:35:27.318365 containerd[1625]: time="2025-09-11T00:35:27.318312515Z" level=info msg="CreateContainer within sandbox \"36d7c5b41084dcb12e11b5bf37fa21bf6be7b1db2f06d8ae204b33c3bf0b9ff1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns 
container id \"74132974e8da2f6dc7e597d0ed66e75d6881d986239822f61e7aeb86a887327e\"" Sep 11 00:35:27.319181 containerd[1625]: time="2025-09-11T00:35:27.319159661Z" level=info msg="CreateContainer within sandbox \"88a3f210a3db1d9fa81d70bd9f22604c76b1869dd6fb231996cc5bffde03499e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"08d0973364a5a2a42a03a5175b3d162c6069b4a65b31a2c065ae86a0b203677e\"" Sep 11 00:35:27.319427 containerd[1625]: time="2025-09-11T00:35:27.319293291Z" level=info msg="CreateContainer within sandbox \"498d09dce3208cef388c59768a6467f38900fbd7aac0f9ce0e2709572941b6f5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9fdb402f1b8e6b10fc5e7da6ecbd31243d58f659eed1575481962de8acacdb8d\"" Sep 11 00:35:27.319595 containerd[1625]: time="2025-09-11T00:35:27.319580745Z" level=info msg="StartContainer for \"08d0973364a5a2a42a03a5175b3d162c6069b4a65b31a2c065ae86a0b203677e\"" Sep 11 00:35:27.319665 containerd[1625]: time="2025-09-11T00:35:27.319651563Z" level=info msg="StartContainer for \"9fdb402f1b8e6b10fc5e7da6ecbd31243d58f659eed1575481962de8acacdb8d\"" Sep 11 00:35:27.319758 containerd[1625]: time="2025-09-11T00:35:27.319599668Z" level=info msg="StartContainer for \"74132974e8da2f6dc7e597d0ed66e75d6881d986239822f61e7aeb86a887327e\"" Sep 11 00:35:27.320380 containerd[1625]: time="2025-09-11T00:35:27.320359515Z" level=info msg="connecting to shim 9fdb402f1b8e6b10fc5e7da6ecbd31243d58f659eed1575481962de8acacdb8d" address="unix:///run/containerd/s/6e4438572633b78832e7ffeac68405e5fcb44166e7ecdc84d0f574e27d9f3955" protocol=ttrpc version=3 Sep 11 00:35:27.321263 containerd[1625]: time="2025-09-11T00:35:27.320647808Z" level=info msg="connecting to shim 74132974e8da2f6dc7e597d0ed66e75d6881d986239822f61e7aeb86a887327e" address="unix:///run/containerd/s/31ab849ae0858b764d73b241ec44ec3e87faa94512ac930cb79af59cb4b02d8a" protocol=ttrpc version=3 Sep 11 00:35:27.321434 containerd[1625]: 
time="2025-09-11T00:35:27.321229312Z" level=info msg="connecting to shim 08d0973364a5a2a42a03a5175b3d162c6069b4a65b31a2c065ae86a0b203677e" address="unix:///run/containerd/s/de31153314a30a7fd14576c32e574f1a4a01d1180972be1138aadc5fdbb862dc" protocol=ttrpc version=3 Sep 11 00:35:27.338863 systemd[1]: Started cri-containerd-9fdb402f1b8e6b10fc5e7da6ecbd31243d58f659eed1575481962de8acacdb8d.scope - libcontainer container 9fdb402f1b8e6b10fc5e7da6ecbd31243d58f659eed1575481962de8acacdb8d. Sep 11 00:35:27.343118 systemd[1]: Started cri-containerd-08d0973364a5a2a42a03a5175b3d162c6069b4a65b31a2c065ae86a0b203677e.scope - libcontainer container 08d0973364a5a2a42a03a5175b3d162c6069b4a65b31a2c065ae86a0b203677e. Sep 11 00:35:27.355262 systemd[1]: Started cri-containerd-74132974e8da2f6dc7e597d0ed66e75d6881d986239822f61e7aeb86a887327e.scope - libcontainer container 74132974e8da2f6dc7e597d0ed66e75d6881d986239822f61e7aeb86a887327e. Sep 11 00:35:27.423947 containerd[1625]: time="2025-09-11T00:35:27.423921979Z" level=info msg="StartContainer for \"74132974e8da2f6dc7e597d0ed66e75d6881d986239822f61e7aeb86a887327e\" returns successfully" Sep 11 00:35:27.424683 containerd[1625]: time="2025-09-11T00:35:27.424667481Z" level=info msg="StartContainer for \"08d0973364a5a2a42a03a5175b3d162c6069b4a65b31a2c065ae86a0b203677e\" returns successfully" Sep 11 00:35:27.427424 containerd[1625]: time="2025-09-11T00:35:27.427408414Z" level=info msg="StartContainer for \"9fdb402f1b8e6b10fc5e7da6ecbd31243d58f659eed1575481962de8acacdb8d\" returns successfully" Sep 11 00:35:27.580302 kubelet[2544]: E0911 00:35:27.580232 2544 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="1.6s" Sep 11 00:35:27.655269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2040662708.mount: Deactivated successfully. 
Sep 11 00:35:27.794599 kubelet[2544]: I0911 00:35:27.794579 2544 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:35:27.794841 kubelet[2544]: E0911 00:35:27.794789 2544 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Sep 11 00:35:29.183656 kubelet[2544]: E0911 00:35:29.183630 2544 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 11 00:35:29.306864 kubelet[2544]: E0911 00:35:29.306838 2544 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 11 00:35:29.395827 kubelet[2544]: I0911 00:35:29.395805 2544 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:35:29.433057 kubelet[2544]: I0911 00:35:29.432952 2544 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 11 00:35:29.433057 kubelet[2544]: E0911 00:35:29.432983 2544 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 11 00:35:29.438554 kubelet[2544]: E0911 00:35:29.438474 2544 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:35:29.538990 kubelet[2544]: E0911 00:35:29.538940 2544 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:35:30.135397 kubelet[2544]: I0911 00:35:30.135244 2544 apiserver.go:52] "Watching apiserver" Sep 11 00:35:30.174269 kubelet[2544]: I0911 00:35:30.174244 2544 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 11 00:35:30.806840 systemd[1]: Reload requested from client PID 2809 
('systemctl') (unit session-9.scope)... Sep 11 00:35:30.807071 systemd[1]: Reloading... Sep 11 00:35:30.875101 zram_generator::config[2852]: No configuration found. Sep 11 00:35:30.951836 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:35:30.960061 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 11 00:35:31.041280 systemd[1]: Reloading finished in 233 ms. Sep 11 00:35:31.075457 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:35:31.087786 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 00:35:31.087975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:35:31.088016 systemd[1]: kubelet.service: Consumed 548ms CPU time, 125.8M memory peak. Sep 11 00:35:31.089710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:35:31.600824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:35:31.604184 (kubelet)[2920]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:35:31.651351 kubelet[2920]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:35:31.651351 kubelet[2920]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 11 00:35:31.651351 kubelet[2920]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:35:31.651580 kubelet[2920]: I0911 00:35:31.651388 2920 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:35:31.659100 kubelet[2920]: I0911 00:35:31.658496 2920 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 11 00:35:31.659100 kubelet[2920]: I0911 00:35:31.658511 2920 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:35:31.659100 kubelet[2920]: I0911 00:35:31.658645 2920 server.go:934] "Client rotation is on, will bootstrap in background" Sep 11 00:35:31.659522 kubelet[2920]: I0911 00:35:31.659513 2920 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 11 00:35:31.662256 kubelet[2920]: I0911 00:35:31.662231 2920 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:35:31.665158 kubelet[2920]: I0911 00:35:31.665145 2920 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:35:31.673818 kubelet[2920]: I0911 00:35:31.673796 2920 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:35:31.673895 kubelet[2920]: I0911 00:35:31.673888 2920 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 11 00:35:31.673985 kubelet[2920]: I0911 00:35:31.673965 2920 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:35:31.674097 kubelet[2920]: I0911 00:35:31.673982 2920 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 11 00:35:31.674097 kubelet[2920]: I0911 00:35:31.674093 2920 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:35:31.674170 kubelet[2920]: I0911 00:35:31.674101 2920 container_manager_linux.go:300] "Creating device plugin manager" Sep 11 00:35:31.674170 kubelet[2920]: I0911 00:35:31.674117 2920 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:35:31.674205 kubelet[2920]: I0911 00:35:31.674174 2920 kubelet.go:408] "Attempting to sync node with API server" Sep 11 00:35:31.674205 kubelet[2920]: I0911 00:35:31.674181 2920 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:35:31.674205 kubelet[2920]: I0911 00:35:31.674197 2920 kubelet.go:314] "Adding apiserver pod source" Sep 11 00:35:31.674205 kubelet[2920]: I0911 00:35:31.674203 2920 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:35:31.678418 kubelet[2920]: I0911 00:35:31.678394 2920 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:35:31.679005 kubelet[2920]: I0911 00:35:31.678993 2920 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:35:31.686341 kubelet[2920]: I0911 00:35:31.686325 2920 server.go:1274] "Started kubelet" Sep 11 00:35:31.686416 kubelet[2920]: I0911 00:35:31.686380 2920 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:35:31.686989 kubelet[2920]: I0911 00:35:31.686973 2920 server.go:449] "Adding debug handlers to kubelet server" Sep 11 00:35:31.687913 kubelet[2920]: I0911 00:35:31.687635 2920 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:35:31.687913 kubelet[2920]: I0911 00:35:31.687746 2920 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:35:31.689460 
kubelet[2920]: I0911 00:35:31.688814 2920 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:35:31.690404 kubelet[2920]: I0911 00:35:31.690396 2920 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 11 00:35:31.690547 kubelet[2920]: I0911 00:35:31.690500 2920 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:35:31.692228 kubelet[2920]: I0911 00:35:31.692220 2920 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 11 00:35:31.692318 kubelet[2920]: I0911 00:35:31.692312 2920 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:35:31.694863 kubelet[2920]: I0911 00:35:31.694184 2920 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:35:31.694863 kubelet[2920]: I0911 00:35:31.694246 2920 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:35:31.694863 kubelet[2920]: E0911 00:35:31.694572 2920 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:35:31.695089 kubelet[2920]: I0911 00:35:31.695069 2920 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:35:31.701274 kubelet[2920]: I0911 00:35:31.701251 2920 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:35:31.701831 kubelet[2920]: I0911 00:35:31.701817 2920 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 11 00:35:31.701831 kubelet[2920]: I0911 00:35:31.701829 2920 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 11 00:35:31.701882 kubelet[2920]: I0911 00:35:31.701838 2920 kubelet.go:2321] "Starting kubelet main sync loop" Sep 11 00:35:31.701882 kubelet[2920]: E0911 00:35:31.701858 2920 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:35:31.734874 kubelet[2920]: I0911 00:35:31.734860 2920 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 11 00:35:31.734874 kubelet[2920]: I0911 00:35:31.734869 2920 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 11 00:35:31.734874 kubelet[2920]: I0911 00:35:31.734879 2920 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:35:31.734987 kubelet[2920]: I0911 00:35:31.734972 2920 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 00:35:31.735006 kubelet[2920]: I0911 00:35:31.734982 2920 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 00:35:31.735006 kubelet[2920]: I0911 00:35:31.735005 2920 policy_none.go:49] "None policy: Start" Sep 11 00:35:31.735395 kubelet[2920]: I0911 00:35:31.735387 2920 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 11 00:35:31.735884 kubelet[2920]: I0911 00:35:31.735442 2920 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:35:31.735884 kubelet[2920]: I0911 00:35:31.735520 2920 state_mem.go:75] "Updated machine memory state" Sep 11 00:35:31.738177 kubelet[2920]: I0911 00:35:31.738168 2920 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:35:31.739033 kubelet[2920]: I0911 00:35:31.739025 2920 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:35:31.739124 kubelet[2920]: I0911 00:35:31.739068 2920 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:35:31.740453 kubelet[2920]: I0911 00:35:31.740443 2920 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:35:31.807383 kubelet[2920]: E0911 00:35:31.807361 2920 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 11 00:35:31.842771 kubelet[2920]: I0911 00:35:31.842748 2920 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:35:31.847731 kubelet[2920]: I0911 00:35:31.847641 2920 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 11 00:35:31.847731 kubelet[2920]: I0911 00:35:31.847701 2920 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 11 00:35:31.894198 kubelet[2920]: I0911 00:35:31.894134 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:35:31.894936 kubelet[2920]: I0911 00:35:31.894855 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:35:31.894936 kubelet[2920]: I0911 00:35:31.894871 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 11 00:35:31.894936 kubelet[2920]: I0911 00:35:31.894881 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:35:31.894936 kubelet[2920]: I0911 00:35:31.894891 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9453588a6212a23d89eb04f8f40fd6d3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9453588a6212a23d89eb04f8f40fd6d3\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:35:31.895107 kubelet[2920]: I0911 00:35:31.895037 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:35:31.895107 kubelet[2920]: I0911 00:35:31.895052 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:35:31.895107 kubelet[2920]: I0911 00:35:31.895060 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9453588a6212a23d89eb04f8f40fd6d3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9453588a6212a23d89eb04f8f40fd6d3\") " 
pod="kube-system/kube-apiserver-localhost" Sep 11 00:35:31.895107 kubelet[2920]: I0911 00:35:31.895068 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9453588a6212a23d89eb04f8f40fd6d3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9453588a6212a23d89eb04f8f40fd6d3\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:35:32.675834 kubelet[2920]: I0911 00:35:32.675803 2920 apiserver.go:52] "Watching apiserver" Sep 11 00:35:32.692581 kubelet[2920]: I0911 00:35:32.692544 2920 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 11 00:35:32.757239 kubelet[2920]: E0911 00:35:32.757170 2920 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:35:32.758430 kubelet[2920]: E0911 00:35:32.758408 2920 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 11 00:35:32.796919 kubelet[2920]: I0911 00:35:32.796859 2920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.796849484 podStartE2EDuration="1.796849484s" podCreationTimestamp="2025-09-11 00:35:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:35:32.796846619 +0000 UTC m=+1.183830631" watchObservedRunningTime="2025-09-11 00:35:32.796849484 +0000 UTC m=+1.183833487" Sep 11 00:35:32.911450 kubelet[2920]: I0911 00:35:32.911417 2920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.911405714 podStartE2EDuration="2.911405714s" 
podCreationTimestamp="2025-09-11 00:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:35:32.881519636 +0000 UTC m=+1.268503648" watchObservedRunningTime="2025-09-11 00:35:32.911405714 +0000 UTC m=+1.298389717" Sep 11 00:35:32.917582 kubelet[2920]: I0911 00:35:32.911495 2920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.911491101 podStartE2EDuration="1.911491101s" podCreationTimestamp="2025-09-11 00:35:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:35:32.911301178 +0000 UTC m=+1.298285183" watchObservedRunningTime="2025-09-11 00:35:32.911491101 +0000 UTC m=+1.298475109" Sep 11 00:35:36.958727 kubelet[2920]: I0911 00:35:36.958704 2920 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 11 00:35:36.959214 kubelet[2920]: I0911 00:35:36.958964 2920 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 00:35:36.959237 containerd[1625]: time="2025-09-11T00:35:36.958861112Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 11 00:35:38.038209 systemd[1]: Created slice kubepods-besteffort-podd15611e2_6eb9_461d_9565_9213fbf8c5cc.slice - libcontainer container kubepods-besteffort-podd15611e2_6eb9_461d_9565_9213fbf8c5cc.slice. Sep 11 00:35:38.084202 systemd[1]: Created slice kubepods-besteffort-pod00321514_541a_4fab_9711_492abc86a4d1.slice - libcontainer container kubepods-besteffort-pod00321514_541a_4fab_9711_492abc86a4d1.slice. 
Sep 11 00:35:38.134519 kubelet[2920]: I0911 00:35:38.134390 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d15611e2-6eb9-461d-9565-9213fbf8c5cc-lib-modules\") pod \"kube-proxy-fpdth\" (UID: \"d15611e2-6eb9-461d-9565-9213fbf8c5cc\") " pod="kube-system/kube-proxy-fpdth" Sep 11 00:35:38.134519 kubelet[2920]: I0911 00:35:38.134426 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d15611e2-6eb9-461d-9565-9213fbf8c5cc-xtables-lock\") pod \"kube-proxy-fpdth\" (UID: \"d15611e2-6eb9-461d-9565-9213fbf8c5cc\") " pod="kube-system/kube-proxy-fpdth" Sep 11 00:35:38.134519 kubelet[2920]: I0911 00:35:38.134440 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/00321514-541a-4fab-9711-492abc86a4d1-var-lib-calico\") pod \"tigera-operator-58fc44c59b-f2h49\" (UID: \"00321514-541a-4fab-9711-492abc86a4d1\") " pod="tigera-operator/tigera-operator-58fc44c59b-f2h49" Sep 11 00:35:38.134519 kubelet[2920]: I0911 00:35:38.134453 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d15611e2-6eb9-461d-9565-9213fbf8c5cc-kube-proxy\") pod \"kube-proxy-fpdth\" (UID: \"d15611e2-6eb9-461d-9565-9213fbf8c5cc\") " pod="kube-system/kube-proxy-fpdth" Sep 11 00:35:38.134519 kubelet[2920]: I0911 00:35:38.134463 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vj69\" (UniqueName: \"kubernetes.io/projected/d15611e2-6eb9-461d-9565-9213fbf8c5cc-kube-api-access-7vj69\") pod \"kube-proxy-fpdth\" (UID: \"d15611e2-6eb9-461d-9565-9213fbf8c5cc\") " pod="kube-system/kube-proxy-fpdth" Sep 11 00:35:38.134822 kubelet[2920]: I0911 
00:35:38.134471 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9j9g\" (UniqueName: \"kubernetes.io/projected/00321514-541a-4fab-9711-492abc86a4d1-kube-api-access-c9j9g\") pod \"tigera-operator-58fc44c59b-f2h49\" (UID: \"00321514-541a-4fab-9711-492abc86a4d1\") " pod="tigera-operator/tigera-operator-58fc44c59b-f2h49" Sep 11 00:35:38.350578 containerd[1625]: time="2025-09-11T00:35:38.350184486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fpdth,Uid:d15611e2-6eb9-461d-9565-9213fbf8c5cc,Namespace:kube-system,Attempt:0,}" Sep 11 00:35:38.364442 containerd[1625]: time="2025-09-11T00:35:38.364179732Z" level=info msg="connecting to shim 16308ffd5d9ac7ff8486e1b651231def37f37d81cdb987a7246aab9ae408ba45" address="unix:///run/containerd/s/f67fd02d81d41eba9ee7dcde4828fe3c780f62d6091171f55bde9f8bacfd37f4" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:35:38.387685 containerd[1625]: time="2025-09-11T00:35:38.387652970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-f2h49,Uid:00321514-541a-4fab-9711-492abc86a4d1,Namespace:tigera-operator,Attempt:0,}" Sep 11 00:35:38.390240 systemd[1]: Started cri-containerd-16308ffd5d9ac7ff8486e1b651231def37f37d81cdb987a7246aab9ae408ba45.scope - libcontainer container 16308ffd5d9ac7ff8486e1b651231def37f37d81cdb987a7246aab9ae408ba45. 
Sep 11 00:35:38.412153 containerd[1625]: time="2025-09-11T00:35:38.412126299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fpdth,Uid:d15611e2-6eb9-461d-9565-9213fbf8c5cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"16308ffd5d9ac7ff8486e1b651231def37f37d81cdb987a7246aab9ae408ba45\"" Sep 11 00:35:38.414768 containerd[1625]: time="2025-09-11T00:35:38.414741076Z" level=info msg="CreateContainer within sandbox \"16308ffd5d9ac7ff8486e1b651231def37f37d81cdb987a7246aab9ae408ba45\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 11 00:35:38.417249 containerd[1625]: time="2025-09-11T00:35:38.417190244Z" level=info msg="connecting to shim 0ce4cba595f80ee443389a7834173f95fad3f68e3d156e3d7a48f5be7a344ccb" address="unix:///run/containerd/s/149b85ea3ce9579c5a673826954716f225d28dce0ff9d13fd7c9fad3e67e6770" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:35:38.435345 systemd[1]: Started cri-containerd-0ce4cba595f80ee443389a7834173f95fad3f68e3d156e3d7a48f5be7a344ccb.scope - libcontainer container 0ce4cba595f80ee443389a7834173f95fad3f68e3d156e3d7a48f5be7a344ccb. 
Sep 11 00:35:38.439438 containerd[1625]: time="2025-09-11T00:35:38.439409729Z" level=info msg="Container cd4d85db5241243a9321f9fa8f832858a3ac673568fa152d9cb5f32f19f1b909: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:35:38.444926 containerd[1625]: time="2025-09-11T00:35:38.444902125Z" level=info msg="CreateContainer within sandbox \"16308ffd5d9ac7ff8486e1b651231def37f37d81cdb987a7246aab9ae408ba45\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cd4d85db5241243a9321f9fa8f832858a3ac673568fa152d9cb5f32f19f1b909\"" Sep 11 00:35:38.446593 containerd[1625]: time="2025-09-11T00:35:38.445877686Z" level=info msg="StartContainer for \"cd4d85db5241243a9321f9fa8f832858a3ac673568fa152d9cb5f32f19f1b909\"" Sep 11 00:35:38.447136 containerd[1625]: time="2025-09-11T00:35:38.447116861Z" level=info msg="connecting to shim cd4d85db5241243a9321f9fa8f832858a3ac673568fa152d9cb5f32f19f1b909" address="unix:///run/containerd/s/f67fd02d81d41eba9ee7dcde4828fe3c780f62d6091171f55bde9f8bacfd37f4" protocol=ttrpc version=3 Sep 11 00:35:38.468172 systemd[1]: Started cri-containerd-cd4d85db5241243a9321f9fa8f832858a3ac673568fa152d9cb5f32f19f1b909.scope - libcontainer container cd4d85db5241243a9321f9fa8f832858a3ac673568fa152d9cb5f32f19f1b909. 
Sep 11 00:35:38.497312 containerd[1625]: time="2025-09-11T00:35:38.497286724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-f2h49,Uid:00321514-541a-4fab-9711-492abc86a4d1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0ce4cba595f80ee443389a7834173f95fad3f68e3d156e3d7a48f5be7a344ccb\"" Sep 11 00:35:38.498746 containerd[1625]: time="2025-09-11T00:35:38.498728925Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 11 00:35:38.505822 containerd[1625]: time="2025-09-11T00:35:38.505778370Z" level=info msg="StartContainer for \"cd4d85db5241243a9321f9fa8f832858a3ac673568fa152d9cb5f32f19f1b909\" returns successfully" Sep 11 00:35:38.748765 kubelet[2920]: I0911 00:35:38.748709 2920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fpdth" podStartSLOduration=0.748692789 podStartE2EDuration="748.692789ms" podCreationTimestamp="2025-09-11 00:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:35:38.748575836 +0000 UTC m=+7.135559859" watchObservedRunningTime="2025-09-11 00:35:38.748692789 +0000 UTC m=+7.135676806" Sep 11 00:35:39.899619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1275121071.mount: Deactivated successfully. 
Sep 11 00:35:41.837983 containerd[1625]: time="2025-09-11T00:35:41.837608904Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:41.844678 containerd[1625]: time="2025-09-11T00:35:41.844664769Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 11 00:35:41.850056 containerd[1625]: time="2025-09-11T00:35:41.850044539Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:41.855197 containerd[1625]: time="2025-09-11T00:35:41.855184161Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:41.855447 containerd[1625]: time="2025-09-11T00:35:41.855431102Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.356685595s" Sep 11 00:35:41.855480 containerd[1625]: time="2025-09-11T00:35:41.855447339Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 11 00:35:41.856757 containerd[1625]: time="2025-09-11T00:35:41.856740173Z" level=info msg="CreateContainer within sandbox \"0ce4cba595f80ee443389a7834173f95fad3f68e3d156e3d7a48f5be7a344ccb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 11 00:35:41.896977 containerd[1625]: time="2025-09-11T00:35:41.896934434Z" level=info msg="Container 
932648b7973f47448d58be6e280a9817c59231aec32a7539d9be412c96260a45: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:35:41.898946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount852509324.mount: Deactivated successfully. Sep 11 00:35:41.919437 containerd[1625]: time="2025-09-11T00:35:41.919375354Z" level=info msg="CreateContainer within sandbox \"0ce4cba595f80ee443389a7834173f95fad3f68e3d156e3d7a48f5be7a344ccb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"932648b7973f47448d58be6e280a9817c59231aec32a7539d9be412c96260a45\"" Sep 11 00:35:41.920777 containerd[1625]: time="2025-09-11T00:35:41.919819507Z" level=info msg="StartContainer for \"932648b7973f47448d58be6e280a9817c59231aec32a7539d9be412c96260a45\"" Sep 11 00:35:41.920777 containerd[1625]: time="2025-09-11T00:35:41.920241717Z" level=info msg="connecting to shim 932648b7973f47448d58be6e280a9817c59231aec32a7539d9be412c96260a45" address="unix:///run/containerd/s/149b85ea3ce9579c5a673826954716f225d28dce0ff9d13fd7c9fad3e67e6770" protocol=ttrpc version=3 Sep 11 00:35:41.941177 systemd[1]: Started cri-containerd-932648b7973f47448d58be6e280a9817c59231aec32a7539d9be412c96260a45.scope - libcontainer container 932648b7973f47448d58be6e280a9817c59231aec32a7539d9be412c96260a45. 
Sep 11 00:35:41.980346 containerd[1625]: time="2025-09-11T00:35:41.980324566Z" level=info msg="StartContainer for \"932648b7973f47448d58be6e280a9817c59231aec32a7539d9be412c96260a45\" returns successfully" Sep 11 00:35:46.145809 kubelet[2920]: I0911 00:35:46.145759 2920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-f2h49" podStartSLOduration=4.785967147 podStartE2EDuration="8.143628559s" podCreationTimestamp="2025-09-11 00:35:38 +0000 UTC" firstStartedPulling="2025-09-11 00:35:38.498371358 +0000 UTC m=+6.885355360" lastFinishedPulling="2025-09-11 00:35:41.856032766 +0000 UTC m=+10.243016772" observedRunningTime="2025-09-11 00:35:42.755172418 +0000 UTC m=+11.142156430" watchObservedRunningTime="2025-09-11 00:35:46.143628559 +0000 UTC m=+14.530612565" Sep 11 00:35:47.134766 sudo[1952]: pam_unix(sudo:session): session closed for user root Sep 11 00:35:47.137300 sshd[1951]: Connection closed by 139.178.89.65 port 50170 Sep 11 00:35:47.137926 sshd-session[1949]: pam_unix(sshd:session): session closed for user core Sep 11 00:35:47.139740 systemd[1]: sshd@6-139.178.70.101:22-139.178.89.65:50170.service: Deactivated successfully. Sep 11 00:35:47.141558 systemd[1]: session-9.scope: Deactivated successfully. Sep 11 00:35:47.143016 systemd[1]: session-9.scope: Consumed 3.192s CPU time, 151M memory peak. Sep 11 00:35:47.146266 systemd-logind[1580]: Session 9 logged out. Waiting for processes to exit. Sep 11 00:35:47.147424 systemd-logind[1580]: Removed session 9. Sep 11 00:35:49.611980 systemd[1]: Created slice kubepods-besteffort-pod9f159206_b0f6_487c_8e0d_9aef6a8806f1.slice - libcontainer container kubepods-besteffort-pod9f159206_b0f6_487c_8e0d_9aef6a8806f1.slice. 
Sep 11 00:35:49.702493 kubelet[2920]: I0911 00:35:49.702463 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f159206-b0f6-487c-8e0d-9aef6a8806f1-tigera-ca-bundle\") pod \"calico-typha-5b9c67b4d7-vd8n6\" (UID: \"9f159206-b0f6-487c-8e0d-9aef6a8806f1\") " pod="calico-system/calico-typha-5b9c67b4d7-vd8n6" Sep 11 00:35:49.702493 kubelet[2920]: I0911 00:35:49.702494 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9f159206-b0f6-487c-8e0d-9aef6a8806f1-typha-certs\") pod \"calico-typha-5b9c67b4d7-vd8n6\" (UID: \"9f159206-b0f6-487c-8e0d-9aef6a8806f1\") " pod="calico-system/calico-typha-5b9c67b4d7-vd8n6" Sep 11 00:35:49.702730 kubelet[2920]: I0911 00:35:49.702507 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m85b\" (UniqueName: \"kubernetes.io/projected/9f159206-b0f6-487c-8e0d-9aef6a8806f1-kube-api-access-2m85b\") pod \"calico-typha-5b9c67b4d7-vd8n6\" (UID: \"9f159206-b0f6-487c-8e0d-9aef6a8806f1\") " pod="calico-system/calico-typha-5b9c67b4d7-vd8n6" Sep 11 00:35:49.921449 containerd[1625]: time="2025-09-11T00:35:49.921210380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b9c67b4d7-vd8n6,Uid:9f159206-b0f6-487c-8e0d-9aef6a8806f1,Namespace:calico-system,Attempt:0,}" Sep 11 00:35:49.938644 containerd[1625]: time="2025-09-11T00:35:49.938614842Z" level=info msg="connecting to shim 36adb4a37cd5f0d4483a6f8cd3e8bb24ef681768193ed040bc6bc886430bce15" address="unix:///run/containerd/s/9395254e8d3a761477b12b8487feafc7e70193c24a487cb70fed4131a64b29e8" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:35:49.961215 systemd[1]: Started cri-containerd-36adb4a37cd5f0d4483a6f8cd3e8bb24ef681768193ed040bc6bc886430bce15.scope - libcontainer container 
36adb4a37cd5f0d4483a6f8cd3e8bb24ef681768193ed040bc6bc886430bce15. Sep 11 00:35:49.996362 containerd[1625]: time="2025-09-11T00:35:49.996332764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b9c67b4d7-vd8n6,Uid:9f159206-b0f6-487c-8e0d-9aef6a8806f1,Namespace:calico-system,Attempt:0,} returns sandbox id \"36adb4a37cd5f0d4483a6f8cd3e8bb24ef681768193ed040bc6bc886430bce15\"" Sep 11 00:35:49.997656 containerd[1625]: time="2025-09-11T00:35:49.997636728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 11 00:35:50.047209 systemd[1]: Created slice kubepods-besteffort-pod7cf875fc_541e_4e16_be31_7473497b6ade.slice - libcontainer container kubepods-besteffort-pod7cf875fc_541e_4e16_be31_7473497b6ade.slice. Sep 11 00:35:50.104557 kubelet[2920]: I0911 00:35:50.104504 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7cf875fc-541e-4e16-be31-7473497b6ade-node-certs\") pod \"calico-node-g9kws\" (UID: \"7cf875fc-541e-4e16-be31-7473497b6ade\") " pod="calico-system/calico-node-g9kws" Sep 11 00:35:50.104557 kubelet[2920]: I0911 00:35:50.104537 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7cf875fc-541e-4e16-be31-7473497b6ade-cni-log-dir\") pod \"calico-node-g9kws\" (UID: \"7cf875fc-541e-4e16-be31-7473497b6ade\") " pod="calico-system/calico-node-g9kws" Sep 11 00:35:50.104725 kubelet[2920]: I0911 00:35:50.104575 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ddhq\" (UniqueName: \"kubernetes.io/projected/7cf875fc-541e-4e16-be31-7473497b6ade-kube-api-access-8ddhq\") pod \"calico-node-g9kws\" (UID: \"7cf875fc-541e-4e16-be31-7473497b6ade\") " pod="calico-system/calico-node-g9kws" Sep 11 00:35:50.104725 kubelet[2920]: I0911 00:35:50.104624 2920 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7cf875fc-541e-4e16-be31-7473497b6ade-var-lib-calico\") pod \"calico-node-g9kws\" (UID: \"7cf875fc-541e-4e16-be31-7473497b6ade\") " pod="calico-system/calico-node-g9kws" Sep 11 00:35:50.104725 kubelet[2920]: I0911 00:35:50.104641 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7cf875fc-541e-4e16-be31-7473497b6ade-var-run-calico\") pod \"calico-node-g9kws\" (UID: \"7cf875fc-541e-4e16-be31-7473497b6ade\") " pod="calico-system/calico-node-g9kws" Sep 11 00:35:50.104725 kubelet[2920]: I0911 00:35:50.104656 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7cf875fc-541e-4e16-be31-7473497b6ade-cni-net-dir\") pod \"calico-node-g9kws\" (UID: \"7cf875fc-541e-4e16-be31-7473497b6ade\") " pod="calico-system/calico-node-g9kws" Sep 11 00:35:50.104725 kubelet[2920]: I0911 00:35:50.104689 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7cf875fc-541e-4e16-be31-7473497b6ade-cni-bin-dir\") pod \"calico-node-g9kws\" (UID: \"7cf875fc-541e-4e16-be31-7473497b6ade\") " pod="calico-system/calico-node-g9kws" Sep 11 00:35:50.104871 kubelet[2920]: I0911 00:35:50.104704 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cf875fc-541e-4e16-be31-7473497b6ade-lib-modules\") pod \"calico-node-g9kws\" (UID: \"7cf875fc-541e-4e16-be31-7473497b6ade\") " pod="calico-system/calico-node-g9kws" Sep 11 00:35:50.104871 kubelet[2920]: I0911 00:35:50.104717 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7cf875fc-541e-4e16-be31-7473497b6ade-flexvol-driver-host\") pod \"calico-node-g9kws\" (UID: \"7cf875fc-541e-4e16-be31-7473497b6ade\") " pod="calico-system/calico-node-g9kws" Sep 11 00:35:50.104871 kubelet[2920]: I0911 00:35:50.104728 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7cf875fc-541e-4e16-be31-7473497b6ade-policysync\") pod \"calico-node-g9kws\" (UID: \"7cf875fc-541e-4e16-be31-7473497b6ade\") " pod="calico-system/calico-node-g9kws" Sep 11 00:35:50.104871 kubelet[2920]: I0911 00:35:50.104738 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cf875fc-541e-4e16-be31-7473497b6ade-xtables-lock\") pod \"calico-node-g9kws\" (UID: \"7cf875fc-541e-4e16-be31-7473497b6ade\") " pod="calico-system/calico-node-g9kws" Sep 11 00:35:50.104871 kubelet[2920]: I0911 00:35:50.104749 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cf875fc-541e-4e16-be31-7473497b6ade-tigera-ca-bundle\") pod \"calico-node-g9kws\" (UID: \"7cf875fc-541e-4e16-be31-7473497b6ade\") " pod="calico-system/calico-node-g9kws" Sep 11 00:35:50.219379 kubelet[2920]: E0911 00:35:50.219303 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.219379 kubelet[2920]: W0911 00:35:50.219322 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.219379 kubelet[2920]: E0911 00:35:50.219348 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.340572 kubelet[2920]: E0911 00:35:50.340536 2920 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bzm2v" podUID="fd90569b-afca-452c-ba50-8a3dd99f9227" Sep 11 00:35:50.349911 containerd[1625]: time="2025-09-11T00:35:50.349882275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g9kws,Uid:7cf875fc-541e-4e16-be31-7473497b6ade,Namespace:calico-system,Attempt:0,}" Sep 11 00:35:50.366786 containerd[1625]: time="2025-09-11T00:35:50.366359634Z" level=info msg="connecting to shim 5b6050d0e5b8b3ed6624b3415ffcc4b1522957470915633faf456e4abca5bf8f" address="unix:///run/containerd/s/f781b7facb98062ef11fb05d4a12df276ed34aa52e6a9a48c9a9c16858bba816" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:35:50.388363 systemd[1]: Started cri-containerd-5b6050d0e5b8b3ed6624b3415ffcc4b1522957470915633faf456e4abca5bf8f.scope - libcontainer container 5b6050d0e5b8b3ed6624b3415ffcc4b1522957470915633faf456e4abca5bf8f. Sep 11 00:35:50.399243 kubelet[2920]: E0911 00:35:50.399219 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.399243 kubelet[2920]: W0911 00:35:50.399236 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.399519 kubelet[2920]: E0911 00:35:50.399356 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.399519 kubelet[2920]: E0911 00:35:50.399477 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.399519 kubelet[2920]: W0911 00:35:50.399482 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.399519 kubelet[2920]: E0911 00:35:50.399487 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.399705 kubelet[2920]: E0911 00:35:50.399564 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.399705 kubelet[2920]: W0911 00:35:50.399569 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.399705 kubelet[2920]: E0911 00:35:50.399574 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.399705 kubelet[2920]: E0911 00:35:50.399663 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.399705 kubelet[2920]: W0911 00:35:50.399669 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.399705 kubelet[2920]: E0911 00:35:50.399674 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.400294 kubelet[2920]: E0911 00:35:50.399775 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.400294 kubelet[2920]: W0911 00:35:50.399779 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.400294 kubelet[2920]: E0911 00:35:50.399785 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.400294 kubelet[2920]: E0911 00:35:50.399883 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.400294 kubelet[2920]: W0911 00:35:50.399888 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.400294 kubelet[2920]: E0911 00:35:50.399893 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.400294 kubelet[2920]: E0911 00:35:50.399990 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.400294 kubelet[2920]: W0911 00:35:50.399994 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.400294 kubelet[2920]: E0911 00:35:50.399999 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.400294 kubelet[2920]: E0911 00:35:50.400074 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.400973 kubelet[2920]: W0911 00:35:50.400079 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.400973 kubelet[2920]: E0911 00:35:50.400175 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.400973 kubelet[2920]: E0911 00:35:50.400296 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.400973 kubelet[2920]: W0911 00:35:50.400301 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.400973 kubelet[2920]: E0911 00:35:50.400307 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.400973 kubelet[2920]: E0911 00:35:50.400427 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.400973 kubelet[2920]: W0911 00:35:50.400432 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.400973 kubelet[2920]: E0911 00:35:50.400437 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.400973 kubelet[2920]: E0911 00:35:50.400571 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.400973 kubelet[2920]: W0911 00:35:50.400576 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.401609 kubelet[2920]: E0911 00:35:50.400580 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.401609 kubelet[2920]: E0911 00:35:50.400678 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.401609 kubelet[2920]: W0911 00:35:50.400683 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.401609 kubelet[2920]: E0911 00:35:50.400698 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.401609 kubelet[2920]: E0911 00:35:50.400805 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.401609 kubelet[2920]: W0911 00:35:50.400810 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.401609 kubelet[2920]: E0911 00:35:50.400815 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.401609 kubelet[2920]: E0911 00:35:50.400901 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.401609 kubelet[2920]: W0911 00:35:50.400907 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.401609 kubelet[2920]: E0911 00:35:50.400911 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.401773 kubelet[2920]: E0911 00:35:50.401002 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.401773 kubelet[2920]: W0911 00:35:50.401007 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.401773 kubelet[2920]: E0911 00:35:50.401012 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.401773 kubelet[2920]: E0911 00:35:50.401115 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.401773 kubelet[2920]: W0911 00:35:50.401119 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.401773 kubelet[2920]: E0911 00:35:50.401125 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.401773 kubelet[2920]: E0911 00:35:50.401215 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.401773 kubelet[2920]: W0911 00:35:50.401219 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.401773 kubelet[2920]: E0911 00:35:50.401224 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.401773 kubelet[2920]: E0911 00:35:50.401335 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.401927 kubelet[2920]: W0911 00:35:50.401340 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.401927 kubelet[2920]: E0911 00:35:50.401344 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.401927 kubelet[2920]: E0911 00:35:50.401420 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.401927 kubelet[2920]: W0911 00:35:50.401425 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.401927 kubelet[2920]: E0911 00:35:50.401431 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.401927 kubelet[2920]: E0911 00:35:50.401556 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.401927 kubelet[2920]: W0911 00:35:50.401561 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.401927 kubelet[2920]: E0911 00:35:50.401566 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.415215 kubelet[2920]: E0911 00:35:50.415051 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.415215 kubelet[2920]: W0911 00:35:50.415066 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.415215 kubelet[2920]: E0911 00:35:50.415113 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.415215 kubelet[2920]: I0911 00:35:50.415136 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fd90569b-afca-452c-ba50-8a3dd99f9227-varrun\") pod \"csi-node-driver-bzm2v\" (UID: \"fd90569b-afca-452c-ba50-8a3dd99f9227\") " pod="calico-system/csi-node-driver-bzm2v" Sep 11 00:35:50.415346 kubelet[2920]: E0911 00:35:50.415227 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.415346 kubelet[2920]: W0911 00:35:50.415232 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.415346 kubelet[2920]: E0911 00:35:50.415239 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.415346 kubelet[2920]: I0911 00:35:50.415247 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fd90569b-afca-452c-ba50-8a3dd99f9227-socket-dir\") pod \"csi-node-driver-bzm2v\" (UID: \"fd90569b-afca-452c-ba50-8a3dd99f9227\") " pod="calico-system/csi-node-driver-bzm2v" Sep 11 00:35:50.415346 kubelet[2920]: E0911 00:35:50.415341 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.415346 kubelet[2920]: W0911 00:35:50.415346 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.415440 kubelet[2920]: E0911 00:35:50.415351 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.415440 kubelet[2920]: I0911 00:35:50.415360 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fd90569b-afca-452c-ba50-8a3dd99f9227-registration-dir\") pod \"csi-node-driver-bzm2v\" (UID: \"fd90569b-afca-452c-ba50-8a3dd99f9227\") " pod="calico-system/csi-node-driver-bzm2v" Sep 11 00:35:50.415440 kubelet[2920]: E0911 00:35:50.415435 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.415488 kubelet[2920]: W0911 00:35:50.415440 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.415488 kubelet[2920]: E0911 00:35:50.415445 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.415488 kubelet[2920]: I0911 00:35:50.415453 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb8kz\" (UniqueName: \"kubernetes.io/projected/fd90569b-afca-452c-ba50-8a3dd99f9227-kube-api-access-pb8kz\") pod \"csi-node-driver-bzm2v\" (UID: \"fd90569b-afca-452c-ba50-8a3dd99f9227\") " pod="calico-system/csi-node-driver-bzm2v" Sep 11 00:35:50.415535 kubelet[2920]: E0911 00:35:50.415530 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.415551 kubelet[2920]: W0911 00:35:50.415535 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.415551 kubelet[2920]: E0911 00:35:50.415541 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.415551 kubelet[2920]: I0911 00:35:50.415549 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd90569b-afca-452c-ba50-8a3dd99f9227-kubelet-dir\") pod \"csi-node-driver-bzm2v\" (UID: \"fd90569b-afca-452c-ba50-8a3dd99f9227\") " pod="calico-system/csi-node-driver-bzm2v" Sep 11 00:35:50.416838 kubelet[2920]: E0911 00:35:50.415650 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.416838 kubelet[2920]: W0911 00:35:50.415657 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.416838 kubelet[2920]: E0911 00:35:50.415662 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.416838 kubelet[2920]: E0911 00:35:50.415730 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.416838 kubelet[2920]: W0911 00:35:50.415735 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.416838 kubelet[2920]: E0911 00:35:50.415739 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.416838 kubelet[2920]: E0911 00:35:50.415825 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.416838 kubelet[2920]: W0911 00:35:50.415829 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.416838 kubelet[2920]: E0911 00:35:50.415834 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.416838 kubelet[2920]: E0911 00:35:50.415932 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.418888 kubelet[2920]: W0911 00:35:50.415937 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.418888 kubelet[2920]: E0911 00:35:50.415941 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.418888 kubelet[2920]: E0911 00:35:50.416049 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.418888 kubelet[2920]: W0911 00:35:50.416055 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.418888 kubelet[2920]: E0911 00:35:50.416060 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.418888 kubelet[2920]: E0911 00:35:50.416264 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.418888 kubelet[2920]: W0911 00:35:50.416269 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.418888 kubelet[2920]: E0911 00:35:50.416274 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.418888 kubelet[2920]: E0911 00:35:50.416718 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.418888 kubelet[2920]: W0911 00:35:50.416727 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.419045 kubelet[2920]: E0911 00:35:50.416739 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.419045 kubelet[2920]: E0911 00:35:50.416834 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.419045 kubelet[2920]: W0911 00:35:50.416840 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.419045 kubelet[2920]: E0911 00:35:50.416849 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.419045 kubelet[2920]: E0911 00:35:50.416933 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.419045 kubelet[2920]: W0911 00:35:50.416939 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.419045 kubelet[2920]: E0911 00:35:50.416944 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.419045 kubelet[2920]: E0911 00:35:50.417113 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.419045 kubelet[2920]: W0911 00:35:50.417118 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.419045 kubelet[2920]: E0911 00:35:50.417129 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.429289 containerd[1625]: time="2025-09-11T00:35:50.429269017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g9kws,Uid:7cf875fc-541e-4e16-be31-7473497b6ade,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b6050d0e5b8b3ed6624b3415ffcc4b1522957470915633faf456e4abca5bf8f\"" Sep 11 00:35:50.518469 kubelet[2920]: E0911 00:35:50.516660 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.518469 kubelet[2920]: W0911 00:35:50.516675 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.518469 kubelet[2920]: E0911 00:35:50.516689 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.518469 kubelet[2920]: E0911 00:35:50.516810 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.518469 kubelet[2920]: W0911 00:35:50.516814 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.518469 kubelet[2920]: E0911 00:35:50.516820 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.518469 kubelet[2920]: E0911 00:35:50.516914 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.518469 kubelet[2920]: W0911 00:35:50.516919 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.518469 kubelet[2920]: E0911 00:35:50.516929 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.518469 kubelet[2920]: E0911 00:35:50.517028 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.518704 kubelet[2920]: W0911 00:35:50.517033 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.518704 kubelet[2920]: E0911 00:35:50.517042 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.518704 kubelet[2920]: E0911 00:35:50.517129 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.518704 kubelet[2920]: W0911 00:35:50.517134 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.518704 kubelet[2920]: E0911 00:35:50.517138 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.518704 kubelet[2920]: E0911 00:35:50.517235 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.518704 kubelet[2920]: W0911 00:35:50.517240 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.518704 kubelet[2920]: E0911 00:35:50.517246 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.518704 kubelet[2920]: E0911 00:35:50.517322 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.518704 kubelet[2920]: W0911 00:35:50.517327 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.518855 kubelet[2920]: E0911 00:35:50.517331 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.518855 kubelet[2920]: E0911 00:35:50.517402 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.518855 kubelet[2920]: W0911 00:35:50.517406 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.518855 kubelet[2920]: E0911 00:35:50.517411 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.518855 kubelet[2920]: E0911 00:35:50.517536 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.518855 kubelet[2920]: W0911 00:35:50.517541 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.518855 kubelet[2920]: E0911 00:35:50.517545 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.518855 kubelet[2920]: E0911 00:35:50.518143 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.518855 kubelet[2920]: W0911 00:35:50.518152 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.518855 kubelet[2920]: E0911 00:35:50.518159 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.519014 kubelet[2920]: E0911 00:35:50.518251 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.519014 kubelet[2920]: W0911 00:35:50.518257 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.519014 kubelet[2920]: E0911 00:35:50.518263 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.519014 kubelet[2920]: E0911 00:35:50.518342 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.519014 kubelet[2920]: W0911 00:35:50.518346 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.519014 kubelet[2920]: E0911 00:35:50.518360 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.521532 kubelet[2920]: E0911 00:35:50.521421 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.521532 kubelet[2920]: W0911 00:35:50.521437 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.521532 kubelet[2920]: E0911 00:35:50.521455 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.522156 kubelet[2920]: E0911 00:35:50.521579 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.522156 kubelet[2920]: W0911 00:35:50.521583 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.522156 kubelet[2920]: E0911 00:35:50.521598 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.522156 kubelet[2920]: E0911 00:35:50.522003 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.522156 kubelet[2920]: W0911 00:35:50.522009 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.522156 kubelet[2920]: E0911 00:35:50.522062 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.523450 kubelet[2920]: E0911 00:35:50.523384 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.523450 kubelet[2920]: W0911 00:35:50.523394 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.523592 kubelet[2920]: E0911 00:35:50.523546 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.523592 kubelet[2920]: W0911 00:35:50.523551 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.523789 kubelet[2920]: E0911 00:35:50.523702 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.523789 kubelet[2920]: W0911 00:35:50.523709 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.523789 kubelet[2920]: E0911 00:35:50.523716 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.524184 kubelet[2920]: E0911 00:35:50.524170 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.524353 kubelet[2920]: E0911 00:35:50.524263 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.524353 kubelet[2920]: W0911 00:35:50.524270 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.524353 kubelet[2920]: E0911 00:35:50.524276 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.524420 kubelet[2920]: E0911 00:35:50.524355 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.524420 kubelet[2920]: W0911 00:35:50.524360 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.524420 kubelet[2920]: E0911 00:35:50.524365 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.524469 kubelet[2920]: E0911 00:35:50.524461 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.524469 kubelet[2920]: W0911 00:35:50.524465 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.524502 kubelet[2920]: E0911 00:35:50.524470 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.524502 kubelet[2920]: E0911 00:35:50.524488 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.525104 kubelet[2920]: E0911 00:35:50.524550 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.525104 kubelet[2920]: W0911 00:35:50.524556 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.525104 kubelet[2920]: E0911 00:35:50.524560 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.525104 kubelet[2920]: E0911 00:35:50.524622 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.525104 kubelet[2920]: W0911 00:35:50.524626 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.525104 kubelet[2920]: E0911 00:35:50.524630 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.525104 kubelet[2920]: E0911 00:35:50.524704 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.525104 kubelet[2920]: W0911 00:35:50.524708 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.525104 kubelet[2920]: E0911 00:35:50.524713 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:50.526238 kubelet[2920]: E0911 00:35:50.526225 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.526332 kubelet[2920]: W0911 00:35:50.526301 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.526332 kubelet[2920]: E0911 00:35:50.526313 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:50.530164 kubelet[2920]: E0911 00:35:50.530148 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:50.530164 kubelet[2920]: W0911 00:35:50.530160 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:50.530260 kubelet[2920]: E0911 00:35:50.530173 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:51.702633 kubelet[2920]: E0911 00:35:51.702605 2920 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bzm2v" podUID="fd90569b-afca-452c-ba50-8a3dd99f9227" Sep 11 00:35:51.710999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1799811961.mount: Deactivated successfully. 
Sep 11 00:35:52.734585 containerd[1625]: time="2025-09-11T00:35:52.734145548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:52.735224 containerd[1625]: time="2025-09-11T00:35:52.735211177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 11 00:35:52.735921 containerd[1625]: time="2025-09-11T00:35:52.735908054Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:52.737479 containerd[1625]: time="2025-09-11T00:35:52.737462663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:52.737821 containerd[1625]: time="2025-09-11T00:35:52.737730825Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.740071928s" Sep 11 00:35:52.738117 containerd[1625]: time="2025-09-11T00:35:52.737923012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 11 00:35:52.738872 containerd[1625]: time="2025-09-11T00:35:52.738860133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 11 00:35:52.748806 containerd[1625]: time="2025-09-11T00:35:52.748780858Z" level=info msg="CreateContainer within sandbox \"36adb4a37cd5f0d4483a6f8cd3e8bb24ef681768193ed040bc6bc886430bce15\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 11 00:35:52.761106 containerd[1625]: time="2025-09-11T00:35:52.758789904Z" level=info msg="Container 426ce57ef71751e04c15e5c8a770d765363a12fbd1afc7b8a282c908a95af846: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:35:52.764080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261522192.mount: Deactivated successfully. Sep 11 00:35:52.842166 containerd[1625]: time="2025-09-11T00:35:52.842133909Z" level=info msg="CreateContainer within sandbox \"36adb4a37cd5f0d4483a6f8cd3e8bb24ef681768193ed040bc6bc886430bce15\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"426ce57ef71751e04c15e5c8a770d765363a12fbd1afc7b8a282c908a95af846\"" Sep 11 00:35:52.843148 containerd[1625]: time="2025-09-11T00:35:52.843130731Z" level=info msg="StartContainer for \"426ce57ef71751e04c15e5c8a770d765363a12fbd1afc7b8a282c908a95af846\"" Sep 11 00:35:52.844136 containerd[1625]: time="2025-09-11T00:35:52.844116384Z" level=info msg="connecting to shim 426ce57ef71751e04c15e5c8a770d765363a12fbd1afc7b8a282c908a95af846" address="unix:///run/containerd/s/9395254e8d3a761477b12b8487feafc7e70193c24a487cb70fed4131a64b29e8" protocol=ttrpc version=3 Sep 11 00:35:52.862238 systemd[1]: Started cri-containerd-426ce57ef71751e04c15e5c8a770d765363a12fbd1afc7b8a282c908a95af846.scope - libcontainer container 426ce57ef71751e04c15e5c8a770d765363a12fbd1afc7b8a282c908a95af846. 
Sep 11 00:35:52.908030 containerd[1625]: time="2025-09-11T00:35:52.907992226Z" level=info msg="StartContainer for \"426ce57ef71751e04c15e5c8a770d765363a12fbd1afc7b8a282c908a95af846\" returns successfully" Sep 11 00:35:53.718376 kubelet[2920]: E0911 00:35:53.718335 2920 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bzm2v" podUID="fd90569b-afca-452c-ba50-8a3dd99f9227" Sep 11 00:35:53.861675 kubelet[2920]: E0911 00:35:53.861467 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.861675 kubelet[2920]: W0911 00:35:53.861488 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.861675 kubelet[2920]: E0911 00:35:53.861509 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:53.861675 kubelet[2920]: E0911 00:35:53.861624 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.861675 kubelet[2920]: W0911 00:35:53.861629 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.861675 kubelet[2920]: E0911 00:35:53.861635 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:53.861863 kubelet[2920]: E0911 00:35:53.861700 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.861863 kubelet[2920]: W0911 00:35:53.861706 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.861863 kubelet[2920]: E0911 00:35:53.861711 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:53.861863 kubelet[2920]: E0911 00:35:53.861777 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.861863 kubelet[2920]: W0911 00:35:53.861781 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.861863 kubelet[2920]: E0911 00:35:53.861785 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:53.861863 kubelet[2920]: E0911 00:35:53.861849 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.861863 kubelet[2920]: W0911 00:35:53.861853 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.861863 kubelet[2920]: E0911 00:35:53.861858 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:53.862011 kubelet[2920]: E0911 00:35:53.861914 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.862011 kubelet[2920]: W0911 00:35:53.861917 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.862011 kubelet[2920]: E0911 00:35:53.861922 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:53.862011 kubelet[2920]: E0911 00:35:53.861977 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.862011 kubelet[2920]: W0911 00:35:53.861981 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.862011 kubelet[2920]: E0911 00:35:53.861985 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:53.862158 kubelet[2920]: E0911 00:35:53.862041 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.862158 kubelet[2920]: W0911 00:35:53.862045 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.862158 kubelet[2920]: E0911 00:35:53.862050 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:53.862158 kubelet[2920]: E0911 00:35:53.862126 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.862158 kubelet[2920]: W0911 00:35:53.862130 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.862158 kubelet[2920]: E0911 00:35:53.862135 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:53.862278 kubelet[2920]: E0911 00:35:53.862202 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.862278 kubelet[2920]: W0911 00:35:53.862206 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.862278 kubelet[2920]: E0911 00:35:53.862211 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:53.862615 kubelet[2920]: E0911 00:35:53.862604 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.862615 kubelet[2920]: W0911 00:35:53.862614 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.862663 kubelet[2920]: E0911 00:35:53.862623 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:53.862726 kubelet[2920]: E0911 00:35:53.862716 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.862726 kubelet[2920]: W0911 00:35:53.862722 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.862776 kubelet[2920]: E0911 00:35:53.862727 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:53.862932 kubelet[2920]: E0911 00:35:53.862799 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.862932 kubelet[2920]: W0911 00:35:53.862804 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.862932 kubelet[2920]: E0911 00:35:53.862808 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:53.862932 kubelet[2920]: E0911 00:35:53.862917 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.862932 kubelet[2920]: W0911 00:35:53.862921 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.863034 kubelet[2920]: E0911 00:35:53.862951 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:53.863120 kubelet[2920]: E0911 00:35:53.863112 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.863120 kubelet[2920]: W0911 00:35:53.863118 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.863166 kubelet[2920]: E0911 00:35:53.863123 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:53.960857 kubelet[2920]: E0911 00:35:53.960826 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.960857 kubelet[2920]: W0911 00:35:53.960856 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.961150 kubelet[2920]: E0911 00:35:53.960889 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:53.961150 kubelet[2920]: E0911 00:35:53.961028 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.961150 kubelet[2920]: W0911 00:35:53.961035 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.961150 kubelet[2920]: E0911 00:35:53.961047 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:53.961346 kubelet[2920]: E0911 00:35:53.961180 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.961346 kubelet[2920]: W0911 00:35:53.961185 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.961346 kubelet[2920]: E0911 00:35:53.961197 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:53.961346 kubelet[2920]: E0911 00:35:53.961322 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.961346 kubelet[2920]: W0911 00:35:53.961336 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.961346 kubelet[2920]: E0911 00:35:53.961346 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:53.961749 kubelet[2920]: E0911 00:35:53.961446 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.961749 kubelet[2920]: W0911 00:35:53.961451 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.961749 kubelet[2920]: E0911 00:35:53.961459 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:53.962461 kubelet[2920]: E0911 00:35:53.961921 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.962461 kubelet[2920]: W0911 00:35:53.961930 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.962461 kubelet[2920]: E0911 00:35:53.961944 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:53.962641 kubelet[2920]: E0911 00:35:53.962574 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.962641 kubelet[2920]: W0911 00:35:53.962582 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.962641 kubelet[2920]: E0911 00:35:53.962592 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:53.962740 kubelet[2920]: E0911 00:35:53.962734 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.962828 kubelet[2920]: W0911 00:35:53.962820 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.962921 kubelet[2920]: E0911 00:35:53.962901 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:53.963045 kubelet[2920]: E0911 00:35:53.963026 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.963126 kubelet[2920]: W0911 00:35:53.963074 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.963126 kubelet[2920]: E0911 00:35:53.963102 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:53.963220 kubelet[2920]: E0911 00:35:53.963214 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.963315 kubelet[2920]: W0911 00:35:53.963250 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.963315 kubelet[2920]: E0911 00:35:53.963267 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:53.964749 kubelet[2920]: E0911 00:35:53.964728 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.964749 kubelet[2920]: W0911 00:35:53.964743 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.964749 kubelet[2920]: E0911 00:35:53.964756 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:53.965142 kubelet[2920]: E0911 00:35:53.965027 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.965142 kubelet[2920]: W0911 00:35:53.965036 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.965142 kubelet[2920]: E0911 00:35:53.965046 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:53.972145 kubelet[2920]: E0911 00:35:53.965389 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.972145 kubelet[2920]: W0911 00:35:53.965394 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.972145 kubelet[2920]: E0911 00:35:53.965409 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:53.972145 kubelet[2920]: E0911 00:35:53.965528 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.972145 kubelet[2920]: W0911 00:35:53.965534 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.972145 kubelet[2920]: E0911 00:35:53.965549 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:53.972145 kubelet[2920]: E0911 00:35:53.965675 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.972145 kubelet[2920]: W0911 00:35:53.965680 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.972145 kubelet[2920]: E0911 00:35:53.965689 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:53.972145 kubelet[2920]: E0911 00:35:53.965809 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.972318 kubelet[2920]: W0911 00:35:53.965815 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.972318 kubelet[2920]: E0911 00:35:53.965824 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:53.972318 kubelet[2920]: E0911 00:35:53.965898 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.972318 kubelet[2920]: W0911 00:35:53.965903 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.972318 kubelet[2920]: E0911 00:35:53.965909 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:35:53.972318 kubelet[2920]: E0911 00:35:53.966113 2920 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:35:53.972318 kubelet[2920]: W0911 00:35:53.966119 2920 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:35:53.972318 kubelet[2920]: E0911 00:35:53.966125 2920 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:35:54.714249 containerd[1625]: time="2025-09-11T00:35:54.714220366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:54.715202 containerd[1625]: time="2025-09-11T00:35:54.715186325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 11 00:35:54.715941 containerd[1625]: time="2025-09-11T00:35:54.715447196Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:54.718099 containerd[1625]: time="2025-09-11T00:35:54.718078253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:54.718731 containerd[1625]: time="2025-09-11T00:35:54.718718350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.979792251s" Sep 11 00:35:54.718784 containerd[1625]: time="2025-09-11T00:35:54.718775824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 11 00:35:54.719978 containerd[1625]: time="2025-09-11T00:35:54.719968666Z" level=info msg="CreateContainer within sandbox \"5b6050d0e5b8b3ed6624b3415ffcc4b1522957470915633faf456e4abca5bf8f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 11 00:35:54.747132 containerd[1625]: time="2025-09-11T00:35:54.745925446Z" level=info msg="Container b2185335eb7f4fda8fd2c3c712209016793143251582b701258ea79f657aeecd: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:35:54.770378 containerd[1625]: time="2025-09-11T00:35:54.770329743Z" level=info msg="CreateContainer within sandbox \"5b6050d0e5b8b3ed6624b3415ffcc4b1522957470915633faf456e4abca5bf8f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b2185335eb7f4fda8fd2c3c712209016793143251582b701258ea79f657aeecd\"" Sep 11 00:35:54.770994 containerd[1625]: time="2025-09-11T00:35:54.770836849Z" level=info msg="StartContainer for \"b2185335eb7f4fda8fd2c3c712209016793143251582b701258ea79f657aeecd\"" Sep 11 00:35:54.772329 containerd[1625]: time="2025-09-11T00:35:54.772294751Z" level=info msg="connecting to shim b2185335eb7f4fda8fd2c3c712209016793143251582b701258ea79f657aeecd" address="unix:///run/containerd/s/f781b7facb98062ef11fb05d4a12df276ed34aa52e6a9a48c9a9c16858bba816" protocol=ttrpc version=3 Sep 11 00:35:54.789220 systemd[1]: Started cri-containerd-b2185335eb7f4fda8fd2c3c712209016793143251582b701258ea79f657aeecd.scope - libcontainer container 
b2185335eb7f4fda8fd2c3c712209016793143251582b701258ea79f657aeecd. Sep 11 00:35:54.820298 containerd[1625]: time="2025-09-11T00:35:54.819381649Z" level=info msg="StartContainer for \"b2185335eb7f4fda8fd2c3c712209016793143251582b701258ea79f657aeecd\" returns successfully" Sep 11 00:35:54.830736 systemd[1]: cri-containerd-b2185335eb7f4fda8fd2c3c712209016793143251582b701258ea79f657aeecd.scope: Deactivated successfully. Sep 11 00:35:54.831508 systemd[1]: cri-containerd-b2185335eb7f4fda8fd2c3c712209016793143251582b701258ea79f657aeecd.scope: Consumed 20ms CPU time, 6.2M memory peak, 4.3M written to disk. Sep 11 00:35:54.842512 kubelet[2920]: I0911 00:35:54.842312 2920 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:35:54.858215 kubelet[2920]: I0911 00:35:54.857994 2920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b9c67b4d7-vd8n6" podStartSLOduration=3.116584226 podStartE2EDuration="5.857981313s" podCreationTimestamp="2025-09-11 00:35:49 +0000 UTC" firstStartedPulling="2025-09-11 00:35:49.997256139 +0000 UTC m=+18.384240142" lastFinishedPulling="2025-09-11 00:35:52.738653225 +0000 UTC m=+21.125637229" observedRunningTime="2025-09-11 00:35:53.878723102 +0000 UTC m=+22.265707113" watchObservedRunningTime="2025-09-11 00:35:54.857981313 +0000 UTC m=+23.244965319" Sep 11 00:35:54.862759 containerd[1625]: time="2025-09-11T00:35:54.862731445Z" level=info msg="received exit event container_id:\"b2185335eb7f4fda8fd2c3c712209016793143251582b701258ea79f657aeecd\" id:\"b2185335eb7f4fda8fd2c3c712209016793143251582b701258ea79f657aeecd\" pid:3575 exited_at:{seconds:1757550954 nanos:832257100}" Sep 11 00:35:54.866295 containerd[1625]: time="2025-09-11T00:35:54.866269312Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2185335eb7f4fda8fd2c3c712209016793143251582b701258ea79f657aeecd\" id:\"b2185335eb7f4fda8fd2c3c712209016793143251582b701258ea79f657aeecd\" pid:3575 
exited_at:{seconds:1757550954 nanos:832257100}" Sep 11 00:35:54.881692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2185335eb7f4fda8fd2c3c712209016793143251582b701258ea79f657aeecd-rootfs.mount: Deactivated successfully. Sep 11 00:35:55.702704 kubelet[2920]: E0911 00:35:55.702400 2920 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bzm2v" podUID="fd90569b-afca-452c-ba50-8a3dd99f9227" Sep 11 00:35:55.827442 containerd[1625]: time="2025-09-11T00:35:55.827415459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 11 00:35:57.702866 kubelet[2920]: E0911 00:35:57.702259 2920 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bzm2v" podUID="fd90569b-afca-452c-ba50-8a3dd99f9227" Sep 11 00:35:59.710110 kubelet[2920]: E0911 00:35:59.709433 2920 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bzm2v" podUID="fd90569b-afca-452c-ba50-8a3dd99f9227" Sep 11 00:35:59.765898 containerd[1625]: time="2025-09-11T00:35:59.765858865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:59.781864 containerd[1625]: time="2025-09-11T00:35:59.778520392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 11 00:35:59.788564 containerd[1625]: 
time="2025-09-11T00:35:59.788521507Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:59.809305 containerd[1625]: time="2025-09-11T00:35:59.809261244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:35:59.809716 containerd[1625]: time="2025-09-11T00:35:59.809485355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.981077744s" Sep 11 00:35:59.809716 containerd[1625]: time="2025-09-11T00:35:59.809506925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 11 00:35:59.811690 containerd[1625]: time="2025-09-11T00:35:59.811669811Z" level=info msg="CreateContainer within sandbox \"5b6050d0e5b8b3ed6624b3415ffcc4b1522957470915633faf456e4abca5bf8f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 11 00:35:59.853635 containerd[1625]: time="2025-09-11T00:35:59.853609976Z" level=info msg="Container 76122ec0c03a0870705d3efd4e12b7197056fbb1176b702d24a66ad43d534999: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:35:59.855364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1323651517.mount: Deactivated successfully. 
Sep 11 00:35:59.873118 containerd[1625]: time="2025-09-11T00:35:59.873069772Z" level=info msg="CreateContainer within sandbox \"5b6050d0e5b8b3ed6624b3415ffcc4b1522957470915633faf456e4abca5bf8f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"76122ec0c03a0870705d3efd4e12b7197056fbb1176b702d24a66ad43d534999\"" Sep 11 00:35:59.873734 containerd[1625]: time="2025-09-11T00:35:59.873713151Z" level=info msg="StartContainer for \"76122ec0c03a0870705d3efd4e12b7197056fbb1176b702d24a66ad43d534999\"" Sep 11 00:35:59.875217 containerd[1625]: time="2025-09-11T00:35:59.875189413Z" level=info msg="connecting to shim 76122ec0c03a0870705d3efd4e12b7197056fbb1176b702d24a66ad43d534999" address="unix:///run/containerd/s/f781b7facb98062ef11fb05d4a12df276ed34aa52e6a9a48c9a9c16858bba816" protocol=ttrpc version=3 Sep 11 00:35:59.898219 systemd[1]: Started cri-containerd-76122ec0c03a0870705d3efd4e12b7197056fbb1176b702d24a66ad43d534999.scope - libcontainer container 76122ec0c03a0870705d3efd4e12b7197056fbb1176b702d24a66ad43d534999. Sep 11 00:35:59.934041 containerd[1625]: time="2025-09-11T00:35:59.933403821Z" level=info msg="StartContainer for \"76122ec0c03a0870705d3efd4e12b7197056fbb1176b702d24a66ad43d534999\" returns successfully" Sep 11 00:36:01.116068 systemd[1]: cri-containerd-76122ec0c03a0870705d3efd4e12b7197056fbb1176b702d24a66ad43d534999.scope: Deactivated successfully. Sep 11 00:36:01.116523 systemd[1]: cri-containerd-76122ec0c03a0870705d3efd4e12b7197056fbb1176b702d24a66ad43d534999.scope: Consumed 302ms CPU time, 160M memory peak, 12K read from disk, 171.3M written to disk. 
Sep 11 00:36:01.167791 containerd[1625]: time="2025-09-11T00:36:01.167769030Z" level=info msg="received exit event container_id:\"76122ec0c03a0870705d3efd4e12b7197056fbb1176b702d24a66ad43d534999\" id:\"76122ec0c03a0870705d3efd4e12b7197056fbb1176b702d24a66ad43d534999\" pid:3634 exited_at:{seconds:1757550961 nanos:167594977}" Sep 11 00:36:01.168559 containerd[1625]: time="2025-09-11T00:36:01.168360946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76122ec0c03a0870705d3efd4e12b7197056fbb1176b702d24a66ad43d534999\" id:\"76122ec0c03a0870705d3efd4e12b7197056fbb1176b702d24a66ad43d534999\" pid:3634 exited_at:{seconds:1757550961 nanos:167594977}" Sep 11 00:36:01.169949 kubelet[2920]: I0911 00:36:01.169742 2920 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 11 00:36:01.195848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76122ec0c03a0870705d3efd4e12b7197056fbb1176b702d24a66ad43d534999-rootfs.mount: Deactivated successfully. Sep 11 00:36:01.334160 systemd[1]: Created slice kubepods-burstable-pod009ce57a_9c56_41ee_87a7_edcfd6755687.slice - libcontainer container kubepods-burstable-pod009ce57a_9c56_41ee_87a7_edcfd6755687.slice. Sep 11 00:36:01.340898 systemd[1]: Created slice kubepods-besteffort-pod38d8df35_ee9c_4504_81fd_f83175c767ed.slice - libcontainer container kubepods-besteffort-pod38d8df35_ee9c_4504_81fd_f83175c767ed.slice. Sep 11 00:36:01.346500 systemd[1]: Created slice kubepods-burstable-pod45d522ab_bc07_4bcb_aee3_7fc8b8a95e06.slice - libcontainer container kubepods-burstable-pod45d522ab_bc07_4bcb_aee3_7fc8b8a95e06.slice. Sep 11 00:36:01.355327 systemd[1]: Created slice kubepods-besteffort-poded7757b9_0398_4e77_bb67_06d7580738b3.slice - libcontainer container kubepods-besteffort-poded7757b9_0398_4e77_bb67_06d7580738b3.slice. 
Sep 11 00:36:01.360404 systemd[1]: Created slice kubepods-besteffort-podb521f003_45f2_4036_a5ef_199d49bdddda.slice - libcontainer container kubepods-besteffort-podb521f003_45f2_4036_a5ef_199d49bdddda.slice. Sep 11 00:36:01.367397 systemd[1]: Created slice kubepods-besteffort-pod3e0e7115_886e_4dea_a448_53f78d3f3647.slice - libcontainer container kubepods-besteffort-pod3e0e7115_886e_4dea_a448_53f78d3f3647.slice. Sep 11 00:36:01.374242 systemd[1]: Created slice kubepods-besteffort-pod380eb68e_1f44_425e_a2fd_103bf2d1d8db.slice - libcontainer container kubepods-besteffort-pod380eb68e_1f44_425e_a2fd_103bf2d1d8db.slice. Sep 11 00:36:01.415120 kubelet[2920]: I0911 00:36:01.415079 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xlpf\" (UniqueName: \"kubernetes.io/projected/009ce57a-9c56-41ee-87a7-edcfd6755687-kube-api-access-4xlpf\") pod \"coredns-7c65d6cfc9-lxfrt\" (UID: \"009ce57a-9c56-41ee-87a7-edcfd6755687\") " pod="kube-system/coredns-7c65d6cfc9-lxfrt" Sep 11 00:36:01.415309 kubelet[2920]: I0911 00:36:01.415227 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ed7757b9-0398-4e77-bb67-06d7580738b3-goldmane-key-pair\") pod \"goldmane-7988f88666-s2sct\" (UID: \"ed7757b9-0398-4e77-bb67-06d7580738b3\") " pod="calico-system/goldmane-7988f88666-s2sct" Sep 11 00:36:01.415309 kubelet[2920]: I0911 00:36:01.415243 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz7gl\" (UniqueName: \"kubernetes.io/projected/38d8df35-ee9c-4504-81fd-f83175c767ed-kube-api-access-zz7gl\") pod \"calico-kube-controllers-6f9574cb5b-hfwsx\" (UID: \"38d8df35-ee9c-4504-81fd-f83175c767ed\") " pod="calico-system/calico-kube-controllers-6f9574cb5b-hfwsx" Sep 11 00:36:01.415309 kubelet[2920]: I0911 00:36:01.415253 2920 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed7757b9-0398-4e77-bb67-06d7580738b3-goldmane-ca-bundle\") pod \"goldmane-7988f88666-s2sct\" (UID: \"ed7757b9-0398-4e77-bb67-06d7580738b3\") " pod="calico-system/goldmane-7988f88666-s2sct" Sep 11 00:36:01.415467 kubelet[2920]: I0911 00:36:01.415373 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs5dt\" (UniqueName: \"kubernetes.io/projected/ed7757b9-0398-4e77-bb67-06d7580738b3-kube-api-access-fs5dt\") pod \"goldmane-7988f88666-s2sct\" (UID: \"ed7757b9-0398-4e77-bb67-06d7580738b3\") " pod="calico-system/goldmane-7988f88666-s2sct" Sep 11 00:36:01.415467 kubelet[2920]: I0911 00:36:01.415394 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/009ce57a-9c56-41ee-87a7-edcfd6755687-config-volume\") pod \"coredns-7c65d6cfc9-lxfrt\" (UID: \"009ce57a-9c56-41ee-87a7-edcfd6755687\") " pod="kube-system/coredns-7c65d6cfc9-lxfrt" Sep 11 00:36:01.415467 kubelet[2920]: I0911 00:36:01.415407 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45d522ab-bc07-4bcb-aee3-7fc8b8a95e06-config-volume\") pod \"coredns-7c65d6cfc9-bh6ht\" (UID: \"45d522ab-bc07-4bcb-aee3-7fc8b8a95e06\") " pod="kube-system/coredns-7c65d6cfc9-bh6ht" Sep 11 00:36:01.415613 kubelet[2920]: I0911 00:36:01.415507 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxtkq\" (UniqueName: \"kubernetes.io/projected/b521f003-45f2-4036-a5ef-199d49bdddda-kube-api-access-lxtkq\") pod \"calico-apiserver-5b8ff485d9-9gkkq\" (UID: \"b521f003-45f2-4036-a5ef-199d49bdddda\") " pod="calico-apiserver/calico-apiserver-5b8ff485d9-9gkkq" Sep 11 00:36:01.415613 
kubelet[2920]: I0911 00:36:01.415521 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9l9w\" (UniqueName: \"kubernetes.io/projected/45d522ab-bc07-4bcb-aee3-7fc8b8a95e06-kube-api-access-f9l9w\") pod \"coredns-7c65d6cfc9-bh6ht\" (UID: \"45d522ab-bc07-4bcb-aee3-7fc8b8a95e06\") " pod="kube-system/coredns-7c65d6cfc9-bh6ht" Sep 11 00:36:01.415613 kubelet[2920]: I0911 00:36:01.415530 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38d8df35-ee9c-4504-81fd-f83175c767ed-tigera-ca-bundle\") pod \"calico-kube-controllers-6f9574cb5b-hfwsx\" (UID: \"38d8df35-ee9c-4504-81fd-f83175c767ed\") " pod="calico-system/calico-kube-controllers-6f9574cb5b-hfwsx" Sep 11 00:36:01.415613 kubelet[2920]: I0911 00:36:01.415546 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3e0e7115-886e-4dea-a448-53f78d3f3647-calico-apiserver-certs\") pod \"calico-apiserver-5b8ff485d9-mlghw\" (UID: \"3e0e7115-886e-4dea-a448-53f78d3f3647\") " pod="calico-apiserver/calico-apiserver-5b8ff485d9-mlghw" Sep 11 00:36:01.415613 kubelet[2920]: I0911 00:36:01.415555 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed7757b9-0398-4e77-bb67-06d7580738b3-config\") pod \"goldmane-7988f88666-s2sct\" (UID: \"ed7757b9-0398-4e77-bb67-06d7580738b3\") " pod="calico-system/goldmane-7988f88666-s2sct" Sep 11 00:36:01.420170 kubelet[2920]: I0911 00:36:01.415563 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtfbx\" (UniqueName: \"kubernetes.io/projected/3e0e7115-886e-4dea-a448-53f78d3f3647-kube-api-access-dtfbx\") pod \"calico-apiserver-5b8ff485d9-mlghw\" (UID: 
\"3e0e7115-886e-4dea-a448-53f78d3f3647\") " pod="calico-apiserver/calico-apiserver-5b8ff485d9-mlghw" Sep 11 00:36:01.420170 kubelet[2920]: I0911 00:36:01.415773 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b521f003-45f2-4036-a5ef-199d49bdddda-calico-apiserver-certs\") pod \"calico-apiserver-5b8ff485d9-9gkkq\" (UID: \"b521f003-45f2-4036-a5ef-199d49bdddda\") " pod="calico-apiserver/calico-apiserver-5b8ff485d9-9gkkq" Sep 11 00:36:01.516998 kubelet[2920]: I0911 00:36:01.516369 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/380eb68e-1f44-425e-a2fd-103bf2d1d8db-whisker-ca-bundle\") pod \"whisker-59dd74f4df-ql9x7\" (UID: \"380eb68e-1f44-425e-a2fd-103bf2d1d8db\") " pod="calico-system/whisker-59dd74f4df-ql9x7" Sep 11 00:36:01.516998 kubelet[2920]: I0911 00:36:01.516429 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjjfc\" (UniqueName: \"kubernetes.io/projected/380eb68e-1f44-425e-a2fd-103bf2d1d8db-kube-api-access-qjjfc\") pod \"whisker-59dd74f4df-ql9x7\" (UID: \"380eb68e-1f44-425e-a2fd-103bf2d1d8db\") " pod="calico-system/whisker-59dd74f4df-ql9x7" Sep 11 00:36:01.516998 kubelet[2920]: I0911 00:36:01.516452 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/380eb68e-1f44-425e-a2fd-103bf2d1d8db-whisker-backend-key-pair\") pod \"whisker-59dd74f4df-ql9x7\" (UID: \"380eb68e-1f44-425e-a2fd-103bf2d1d8db\") " pod="calico-system/whisker-59dd74f4df-ql9x7" Sep 11 00:36:01.654372 containerd[1625]: time="2025-09-11T00:36:01.653006783Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6f9574cb5b-hfwsx,Uid:38d8df35-ee9c-4504-81fd-f83175c767ed,Namespace:calico-system,Attempt:0,}" Sep 11 00:36:01.655997 containerd[1625]: time="2025-09-11T00:36:01.655959523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lxfrt,Uid:009ce57a-9c56-41ee-87a7-edcfd6755687,Namespace:kube-system,Attempt:0,}" Sep 11 00:36:01.664267 containerd[1625]: time="2025-09-11T00:36:01.663421611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-s2sct,Uid:ed7757b9-0398-4e77-bb67-06d7580738b3,Namespace:calico-system,Attempt:0,}" Sep 11 00:36:01.664471 containerd[1625]: time="2025-09-11T00:36:01.664455468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8ff485d9-9gkkq,Uid:b521f003-45f2-4036-a5ef-199d49bdddda,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:36:01.664650 containerd[1625]: time="2025-09-11T00:36:01.664637761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bh6ht,Uid:45d522ab-bc07-4bcb-aee3-7fc8b8a95e06,Namespace:kube-system,Attempt:0,}" Sep 11 00:36:01.686978 containerd[1625]: time="2025-09-11T00:36:01.686922305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59dd74f4df-ql9x7,Uid:380eb68e-1f44-425e-a2fd-103bf2d1d8db,Namespace:calico-system,Attempt:0,}" Sep 11 00:36:01.687250 containerd[1625]: time="2025-09-11T00:36:01.687231157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8ff485d9-mlghw,Uid:3e0e7115-886e-4dea-a448-53f78d3f3647,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:36:01.715828 systemd[1]: Created slice kubepods-besteffort-podfd90569b_afca_452c_ba50_8a3dd99f9227.slice - libcontainer container kubepods-besteffort-podfd90569b_afca_452c_ba50_8a3dd99f9227.slice. 
Sep 11 00:36:01.725316 containerd[1625]: time="2025-09-11T00:36:01.725188545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bzm2v,Uid:fd90569b-afca-452c-ba50-8a3dd99f9227,Namespace:calico-system,Attempt:0,}" Sep 11 00:36:01.924269 containerd[1625]: time="2025-09-11T00:36:01.923383188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 11 00:36:02.020112 containerd[1625]: time="2025-09-11T00:36:02.020074950Z" level=error msg="Failed to destroy network for sandbox \"9c5746db5a5f2e0dfc6f47c78f75b95ffb7589d838182b53b39bf4aef4645396\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.020261 containerd[1625]: time="2025-09-11T00:36:02.020075980Z" level=error msg="Failed to destroy network for sandbox \"dc32238199b3c6593e23ac6e882491e200cf3281863ce742b81ff2da5bd884e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.020700 containerd[1625]: time="2025-09-11T00:36:02.020081723Z" level=error msg="Failed to destroy network for sandbox \"37eaed0ecdf629d16426630566b49a3bc799586a352970bb3f9c36565d17e84b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.020830 containerd[1625]: time="2025-09-11T00:36:02.020126858Z" level=error msg="Failed to destroy network for sandbox \"6ad029006c07d9170eca71eeb73dc982d1e9abeb9c02e5d761b68c29d6e29b25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.021228 containerd[1625]: 
time="2025-09-11T00:36:02.021212140Z" level=error msg="Failed to destroy network for sandbox \"2c39390e78fea3660dc3a607dbd788277e804c8937b24f5a04a46262b3d73a56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.021922 containerd[1625]: time="2025-09-11T00:36:02.021879933Z" level=error msg="Failed to destroy network for sandbox \"b3bf6eea3ecd2e5b27ad3245438185a62826ee83312150f64abd384b334f617d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.027641 containerd[1625]: time="2025-09-11T00:36:02.021893480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lxfrt,Uid:009ce57a-9c56-41ee-87a7-edcfd6755687,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c5746db5a5f2e0dfc6f47c78f75b95ffb7589d838182b53b39bf4aef4645396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.027737 containerd[1625]: time="2025-09-11T00:36:02.022488773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8ff485d9-mlghw,Uid:3e0e7115-886e-4dea-a448-53f78d3f3647,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3bf6eea3ecd2e5b27ad3245438185a62826ee83312150f64abd384b334f617d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.027737 containerd[1625]: time="2025-09-11T00:36:02.022680075Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9574cb5b-hfwsx,Uid:38d8df35-ee9c-4504-81fd-f83175c767ed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc32238199b3c6593e23ac6e882491e200cf3281863ce742b81ff2da5bd884e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.027801 containerd[1625]: time="2025-09-11T00:36:02.022927030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bh6ht,Uid:45d522ab-bc07-4bcb-aee3-7fc8b8a95e06,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37eaed0ecdf629d16426630566b49a3bc799586a352970bb3f9c36565d17e84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.027801 containerd[1625]: time="2025-09-11T00:36:02.023261411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-s2sct,Uid:ed7757b9-0398-4e77-bb67-06d7580738b3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ad029006c07d9170eca71eeb73dc982d1e9abeb9c02e5d761b68c29d6e29b25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.027801 containerd[1625]: time="2025-09-11T00:36:02.023338730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8ff485d9-9gkkq,Uid:b521f003-45f2-4036-a5ef-199d49bdddda,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"2c39390e78fea3660dc3a607dbd788277e804c8937b24f5a04a46262b3d73a56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.027801 containerd[1625]: time="2025-09-11T00:36:02.025648027Z" level=error msg="Failed to destroy network for sandbox \"841f89eb841144c084988a8bae7e0bd1777d3b6108e71e2dafc7e749ae0b8d9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.027984 containerd[1625]: time="2025-09-11T00:36:02.026194492Z" level=error msg="Failed to destroy network for sandbox \"71ec6a7aa683de1c63d72f54a3f99e5f5ded39a8ecf27f9d75a7350710639fe6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.028346 containerd[1625]: time="2025-09-11T00:36:02.028263341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59dd74f4df-ql9x7,Uid:380eb68e-1f44-425e-a2fd-103bf2d1d8db,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"841f89eb841144c084988a8bae7e0bd1777d3b6108e71e2dafc7e749ae0b8d9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.028508 containerd[1625]: time="2025-09-11T00:36:02.028493932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bzm2v,Uid:fd90569b-afca-452c-ba50-8a3dd99f9227,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"71ec6a7aa683de1c63d72f54a3f99e5f5ded39a8ecf27f9d75a7350710639fe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.028764 kubelet[2920]: E0911 00:36:02.028726 2920 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c39390e78fea3660dc3a607dbd788277e804c8937b24f5a04a46262b3d73a56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.028813 kubelet[2920]: E0911 00:36:02.028798 2920 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c39390e78fea3660dc3a607dbd788277e804c8937b24f5a04a46262b3d73a56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b8ff485d9-9gkkq" Sep 11 00:36:02.028834 kubelet[2920]: E0911 00:36:02.028813 2920 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c39390e78fea3660dc3a607dbd788277e804c8937b24f5a04a46262b3d73a56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b8ff485d9-9gkkq" Sep 11 00:36:02.029062 kubelet[2920]: E0911 00:36:02.028849 2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b8ff485d9-9gkkq_calico-apiserver(b521f003-45f2-4036-a5ef-199d49bdddda)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-5b8ff485d9-9gkkq_calico-apiserver(b521f003-45f2-4036-a5ef-199d49bdddda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c39390e78fea3660dc3a607dbd788277e804c8937b24f5a04a46262b3d73a56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b8ff485d9-9gkkq" podUID="b521f003-45f2-4036-a5ef-199d49bdddda" Sep 11 00:36:02.029062 kubelet[2920]: E0911 00:36:02.028725 2920 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c5746db5a5f2e0dfc6f47c78f75b95ffb7589d838182b53b39bf4aef4645396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.029062 kubelet[2920]: E0911 00:36:02.028886 2920 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c5746db5a5f2e0dfc6f47c78f75b95ffb7589d838182b53b39bf4aef4645396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-lxfrt" Sep 11 00:36:02.029242 kubelet[2920]: E0911 00:36:02.028898 2920 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c5746db5a5f2e0dfc6f47c78f75b95ffb7589d838182b53b39bf4aef4645396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-lxfrt" Sep 11 00:36:02.029242 
kubelet[2920]: E0911 00:36:02.028920 2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-lxfrt_kube-system(009ce57a-9c56-41ee-87a7-edcfd6755687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-lxfrt_kube-system(009ce57a-9c56-41ee-87a7-edcfd6755687)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c5746db5a5f2e0dfc6f47c78f75b95ffb7589d838182b53b39bf4aef4645396\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-lxfrt" podUID="009ce57a-9c56-41ee-87a7-edcfd6755687" Sep 11 00:36:02.029242 kubelet[2920]: E0911 00:36:02.028962 2920 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71ec6a7aa683de1c63d72f54a3f99e5f5ded39a8ecf27f9d75a7350710639fe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.030114 kubelet[2920]: E0911 00:36:02.028990 2920 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71ec6a7aa683de1c63d72f54a3f99e5f5ded39a8ecf27f9d75a7350710639fe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bzm2v" Sep 11 00:36:02.030114 kubelet[2920]: E0911 00:36:02.029001 2920 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71ec6a7aa683de1c63d72f54a3f99e5f5ded39a8ecf27f9d75a7350710639fe6\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bzm2v" Sep 11 00:36:02.030114 kubelet[2920]: E0911 00:36:02.029015 2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bzm2v_calico-system(fd90569b-afca-452c-ba50-8a3dd99f9227)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bzm2v_calico-system(fd90569b-afca-452c-ba50-8a3dd99f9227)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71ec6a7aa683de1c63d72f54a3f99e5f5ded39a8ecf27f9d75a7350710639fe6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bzm2v" podUID="fd90569b-afca-452c-ba50-8a3dd99f9227" Sep 11 00:36:02.030197 kubelet[2920]: E0911 00:36:02.029033 2920 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"841f89eb841144c084988a8bae7e0bd1777d3b6108e71e2dafc7e749ae0b8d9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.030197 kubelet[2920]: E0911 00:36:02.029042 2920 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"841f89eb841144c084988a8bae7e0bd1777d3b6108e71e2dafc7e749ae0b8d9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59dd74f4df-ql9x7" Sep 11 00:36:02.030197 kubelet[2920]: E0911 00:36:02.029510 2920 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"841f89eb841144c084988a8bae7e0bd1777d3b6108e71e2dafc7e749ae0b8d9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59dd74f4df-ql9x7" Sep 11 00:36:02.030690 kubelet[2920]: E0911 00:36:02.029534 2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59dd74f4df-ql9x7_calico-system(380eb68e-1f44-425e-a2fd-103bf2d1d8db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59dd74f4df-ql9x7_calico-system(380eb68e-1f44-425e-a2fd-103bf2d1d8db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"841f89eb841144c084988a8bae7e0bd1777d3b6108e71e2dafc7e749ae0b8d9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59dd74f4df-ql9x7" podUID="380eb68e-1f44-425e-a2fd-103bf2d1d8db" Sep 11 00:36:02.030690 kubelet[2920]: E0911 00:36:02.029575 2920 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37eaed0ecdf629d16426630566b49a3bc799586a352970bb3f9c36565d17e84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.030690 kubelet[2920]: E0911 00:36:02.029588 2920 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37eaed0ecdf629d16426630566b49a3bc799586a352970bb3f9c36565d17e84b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bh6ht" Sep 11 00:36:02.030786 kubelet[2920]: E0911 00:36:02.029597 2920 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37eaed0ecdf629d16426630566b49a3bc799586a352970bb3f9c36565d17e84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bh6ht" Sep 11 00:36:02.030786 kubelet[2920]: E0911 00:36:02.029616 2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-bh6ht_kube-system(45d522ab-bc07-4bcb-aee3-7fc8b8a95e06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-bh6ht_kube-system(45d522ab-bc07-4bcb-aee3-7fc8b8a95e06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37eaed0ecdf629d16426630566b49a3bc799586a352970bb3f9c36565d17e84b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bh6ht" podUID="45d522ab-bc07-4bcb-aee3-7fc8b8a95e06" Sep 11 00:36:02.030786 kubelet[2920]: E0911 00:36:02.029634 2920 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc32238199b3c6593e23ac6e882491e200cf3281863ce742b81ff2da5bd884e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.030854 kubelet[2920]: E0911 00:36:02.029644 2920 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc32238199b3c6593e23ac6e882491e200cf3281863ce742b81ff2da5bd884e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f9574cb5b-hfwsx" Sep 11 00:36:02.030854 kubelet[2920]: E0911 00:36:02.029652 2920 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc32238199b3c6593e23ac6e882491e200cf3281863ce742b81ff2da5bd884e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f9574cb5b-hfwsx" Sep 11 00:36:02.030854 kubelet[2920]: E0911 00:36:02.029668 2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f9574cb5b-hfwsx_calico-system(38d8df35-ee9c-4504-81fd-f83175c767ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f9574cb5b-hfwsx_calico-system(38d8df35-ee9c-4504-81fd-f83175c767ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc32238199b3c6593e23ac6e882491e200cf3281863ce742b81ff2da5bd884e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f9574cb5b-hfwsx" podUID="38d8df35-ee9c-4504-81fd-f83175c767ed" Sep 11 00:36:02.030924 kubelet[2920]: E0911 00:36:02.029695 2920 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6ad029006c07d9170eca71eeb73dc982d1e9abeb9c02e5d761b68c29d6e29b25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.030924 kubelet[2920]: E0911 00:36:02.029709 2920 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ad029006c07d9170eca71eeb73dc982d1e9abeb9c02e5d761b68c29d6e29b25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-s2sct" Sep 11 00:36:02.030924 kubelet[2920]: E0911 00:36:02.029716 2920 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ad029006c07d9170eca71eeb73dc982d1e9abeb9c02e5d761b68c29d6e29b25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-s2sct" Sep 11 00:36:02.030924 kubelet[2920]: E0911 00:36:02.029892 2920 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3bf6eea3ecd2e5b27ad3245438185a62826ee83312150f64abd384b334f617d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:36:02.030995 kubelet[2920]: E0911 00:36:02.029912 2920 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3bf6eea3ecd2e5b27ad3245438185a62826ee83312150f64abd384b334f617d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b8ff485d9-mlghw" Sep 11 00:36:02.030995 kubelet[2920]: E0911 00:36:02.029925 2920 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3bf6eea3ecd2e5b27ad3245438185a62826ee83312150f64abd384b334f617d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b8ff485d9-mlghw" Sep 11 00:36:02.030995 kubelet[2920]: E0911 00:36:02.029946 2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b8ff485d9-mlghw_calico-apiserver(3e0e7115-886e-4dea-a448-53f78d3f3647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b8ff485d9-mlghw_calico-apiserver(3e0e7115-886e-4dea-a448-53f78d3f3647)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3bf6eea3ecd2e5b27ad3245438185a62826ee83312150f64abd384b334f617d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b8ff485d9-mlghw" podUID="3e0e7115-886e-4dea-a448-53f78d3f3647" Sep 11 00:36:02.031062 kubelet[2920]: E0911 00:36:02.030368 2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-s2sct_calico-system(ed7757b9-0398-4e77-bb67-06d7580738b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-s2sct_calico-system(ed7757b9-0398-4e77-bb67-06d7580738b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6ad029006c07d9170eca71eeb73dc982d1e9abeb9c02e5d761b68c29d6e29b25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-s2sct" podUID="ed7757b9-0398-4e77-bb67-06d7580738b3" Sep 11 00:36:07.881329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount426619450.mount: Deactivated successfully. Sep 11 00:36:08.018539 containerd[1625]: time="2025-09-11T00:36:08.018406216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:36:08.043528 containerd[1625]: time="2025-09-11T00:36:08.043492999Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:36:08.045257 containerd[1625]: time="2025-09-11T00:36:08.044835228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 11 00:36:08.045257 containerd[1625]: time="2025-09-11T00:36:08.044985104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:36:08.045796 containerd[1625]: time="2025-09-11T00:36:08.045775831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.121159279s" Sep 11 00:36:08.045829 containerd[1625]: time="2025-09-11T00:36:08.045797535Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 11 00:36:08.066195 containerd[1625]: time="2025-09-11T00:36:08.066167560Z" level=info msg="CreateContainer within sandbox \"5b6050d0e5b8b3ed6624b3415ffcc4b1522957470915633faf456e4abca5bf8f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 11 00:36:08.131168 containerd[1625]: time="2025-09-11T00:36:08.131067014Z" level=info msg="Container 18e216242c2bdffa39ca6cd8a9e2cdf3c7bfe5bae836ebd0c541c596f9a773de: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:36:08.131393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount942220366.mount: Deactivated successfully. Sep 11 00:36:08.218438 containerd[1625]: time="2025-09-11T00:36:08.218410135Z" level=info msg="CreateContainer within sandbox \"5b6050d0e5b8b3ed6624b3415ffcc4b1522957470915633faf456e4abca5bf8f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"18e216242c2bdffa39ca6cd8a9e2cdf3c7bfe5bae836ebd0c541c596f9a773de\"" Sep 11 00:36:08.218887 containerd[1625]: time="2025-09-11T00:36:08.218789578Z" level=info msg="StartContainer for \"18e216242c2bdffa39ca6cd8a9e2cdf3c7bfe5bae836ebd0c541c596f9a773de\"" Sep 11 00:36:08.230216 containerd[1625]: time="2025-09-11T00:36:08.230059620Z" level=info msg="connecting to shim 18e216242c2bdffa39ca6cd8a9e2cdf3c7bfe5bae836ebd0c541c596f9a773de" address="unix:///run/containerd/s/f781b7facb98062ef11fb05d4a12df276ed34aa52e6a9a48c9a9c16858bba816" protocol=ttrpc version=3 Sep 11 00:36:08.278461 systemd[1]: Started cri-containerd-18e216242c2bdffa39ca6cd8a9e2cdf3c7bfe5bae836ebd0c541c596f9a773de.scope - libcontainer container 18e216242c2bdffa39ca6cd8a9e2cdf3c7bfe5bae836ebd0c541c596f9a773de. 
Sep 11 00:36:08.333246 containerd[1625]: time="2025-09-11T00:36:08.333221634Z" level=info msg="StartContainer for \"18e216242c2bdffa39ca6cd8a9e2cdf3c7bfe5bae836ebd0c541c596f9a773de\" returns successfully" Sep 11 00:36:08.412222 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 11 00:36:08.416077 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 11 00:36:08.857810 kubelet[2920]: I0911 00:36:08.857570 2920 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/380eb68e-1f44-425e-a2fd-103bf2d1d8db-whisker-backend-key-pair\") pod \"380eb68e-1f44-425e-a2fd-103bf2d1d8db\" (UID: \"380eb68e-1f44-425e-a2fd-103bf2d1d8db\") " Sep 11 00:36:08.857810 kubelet[2920]: I0911 00:36:08.857631 2920 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjjfc\" (UniqueName: \"kubernetes.io/projected/380eb68e-1f44-425e-a2fd-103bf2d1d8db-kube-api-access-qjjfc\") pod \"380eb68e-1f44-425e-a2fd-103bf2d1d8db\" (UID: \"380eb68e-1f44-425e-a2fd-103bf2d1d8db\") " Sep 11 00:36:08.857810 kubelet[2920]: I0911 00:36:08.857646 2920 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/380eb68e-1f44-425e-a2fd-103bf2d1d8db-whisker-ca-bundle\") pod \"380eb68e-1f44-425e-a2fd-103bf2d1d8db\" (UID: \"380eb68e-1f44-425e-a2fd-103bf2d1d8db\") " Sep 11 00:36:08.864219 kubelet[2920]: I0911 00:36:08.864037 2920 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/380eb68e-1f44-425e-a2fd-103bf2d1d8db-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "380eb68e-1f44-425e-a2fd-103bf2d1d8db" (UID: "380eb68e-1f44-425e-a2fd-103bf2d1d8db"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 11 00:36:08.867511 kubelet[2920]: I0911 00:36:08.867482 2920 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/380eb68e-1f44-425e-a2fd-103bf2d1d8db-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "380eb68e-1f44-425e-a2fd-103bf2d1d8db" (UID: "380eb68e-1f44-425e-a2fd-103bf2d1d8db"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 11 00:36:08.867621 kubelet[2920]: I0911 00:36:08.867606 2920 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/380eb68e-1f44-425e-a2fd-103bf2d1d8db-kube-api-access-qjjfc" (OuterVolumeSpecName: "kube-api-access-qjjfc") pod "380eb68e-1f44-425e-a2fd-103bf2d1d8db" (UID: "380eb68e-1f44-425e-a2fd-103bf2d1d8db"). InnerVolumeSpecName "kube-api-access-qjjfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 11 00:36:08.881951 systemd[1]: var-lib-kubelet-pods-380eb68e\x2d1f44\x2d425e\x2da2fd\x2d103bf2d1d8db-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqjjfc.mount: Deactivated successfully. Sep 11 00:36:08.882015 systemd[1]: var-lib-kubelet-pods-380eb68e\x2d1f44\x2d425e\x2da2fd\x2d103bf2d1d8db-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 11 00:36:08.939687 systemd[1]: Removed slice kubepods-besteffort-pod380eb68e_1f44_425e_a2fd_103bf2d1d8db.slice - libcontainer container kubepods-besteffort-pod380eb68e_1f44_425e_a2fd_103bf2d1d8db.slice. 
Sep 11 00:36:08.945196 kubelet[2920]: I0911 00:36:08.945147 2920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g9kws" podStartSLOduration=1.326803436 podStartE2EDuration="18.942956632s" podCreationTimestamp="2025-09-11 00:35:50 +0000 UTC" firstStartedPulling="2025-09-11 00:35:50.430217933 +0000 UTC m=+18.817201935" lastFinishedPulling="2025-09-11 00:36:08.046371128 +0000 UTC m=+36.433355131" observedRunningTime="2025-09-11 00:36:08.942608286 +0000 UTC m=+37.329592298" watchObservedRunningTime="2025-09-11 00:36:08.942956632 +0000 UTC m=+37.329940638" Sep 11 00:36:08.957943 kubelet[2920]: I0911 00:36:08.957921 2920 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjjfc\" (UniqueName: \"kubernetes.io/projected/380eb68e-1f44-425e-a2fd-103bf2d1d8db-kube-api-access-qjjfc\") on node \"localhost\" DevicePath \"\"" Sep 11 00:36:08.957943 kubelet[2920]: I0911 00:36:08.957938 2920 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/380eb68e-1f44-425e-a2fd-103bf2d1d8db-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 11 00:36:08.957943 kubelet[2920]: I0911 00:36:08.957945 2920 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/380eb68e-1f44-425e-a2fd-103bf2d1d8db-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 11 00:36:09.036841 systemd[1]: Created slice kubepods-besteffort-pode4859b88_9ade_4ca3_85cf_3a7dc3bed2a5.slice - libcontainer container kubepods-besteffort-pode4859b88_9ade_4ca3_85cf_3a7dc3bed2a5.slice. 
Sep 11 00:36:09.159581 kubelet[2920]: I0911 00:36:09.159064 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4859b88-9ade-4ca3-85cf-3a7dc3bed2a5-whisker-ca-bundle\") pod \"whisker-7cbc5d778d-8wn82\" (UID: \"e4859b88-9ade-4ca3-85cf-3a7dc3bed2a5\") " pod="calico-system/whisker-7cbc5d778d-8wn82" Sep 11 00:36:09.159581 kubelet[2920]: I0911 00:36:09.159141 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rsnf\" (UniqueName: \"kubernetes.io/projected/e4859b88-9ade-4ca3-85cf-3a7dc3bed2a5-kube-api-access-8rsnf\") pod \"whisker-7cbc5d778d-8wn82\" (UID: \"e4859b88-9ade-4ca3-85cf-3a7dc3bed2a5\") " pod="calico-system/whisker-7cbc5d778d-8wn82" Sep 11 00:36:09.159581 kubelet[2920]: I0911 00:36:09.159162 2920 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e4859b88-9ade-4ca3-85cf-3a7dc3bed2a5-whisker-backend-key-pair\") pod \"whisker-7cbc5d778d-8wn82\" (UID: \"e4859b88-9ade-4ca3-85cf-3a7dc3bed2a5\") " pod="calico-system/whisker-7cbc5d778d-8wn82" Sep 11 00:36:09.343392 containerd[1625]: time="2025-09-11T00:36:09.343360436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cbc5d778d-8wn82,Uid:e4859b88-9ade-4ca3-85cf-3a7dc3bed2a5,Namespace:calico-system,Attempt:0,}" Sep 11 00:36:09.682530 systemd-networkd[1538]: calia5b74968f35: Link UP Sep 11 00:36:09.682841 systemd-networkd[1538]: calia5b74968f35: Gained carrier Sep 11 00:36:09.692252 containerd[1625]: 2025-09-11 00:36:09.361 [INFO][3969] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:36:09.692252 containerd[1625]: 2025-09-11 00:36:09.421 [INFO][3969] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--7cbc5d778d--8wn82-eth0 whisker-7cbc5d778d- calico-system e4859b88-9ade-4ca3-85cf-3a7dc3bed2a5 892 0 2025-09-11 00:36:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7cbc5d778d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7cbc5d778d-8wn82 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia5b74968f35 [] [] }} ContainerID="439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" Namespace="calico-system" Pod="whisker-7cbc5d778d-8wn82" WorkloadEndpoint="localhost-k8s-whisker--7cbc5d778d--8wn82-" Sep 11 00:36:09.692252 containerd[1625]: 2025-09-11 00:36:09.421 [INFO][3969] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" Namespace="calico-system" Pod="whisker-7cbc5d778d-8wn82" WorkloadEndpoint="localhost-k8s-whisker--7cbc5d778d--8wn82-eth0" Sep 11 00:36:09.692252 containerd[1625]: 2025-09-11 00:36:09.635 [INFO][3980] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" HandleID="k8s-pod-network.439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" Workload="localhost-k8s-whisker--7cbc5d778d--8wn82-eth0" Sep 11 00:36:09.692566 containerd[1625]: 2025-09-11 00:36:09.637 [INFO][3980] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" HandleID="k8s-pod-network.439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" Workload="localhost-k8s-whisker--7cbc5d778d--8wn82-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000336310), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7cbc5d778d-8wn82", "timestamp":"2025-09-11 00:36:09.635267466 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:36:09.692566 containerd[1625]: 2025-09-11 00:36:09.637 [INFO][3980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:36:09.692566 containerd[1625]: 2025-09-11 00:36:09.638 [INFO][3980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:36:09.692566 containerd[1625]: 2025-09-11 00:36:09.638 [INFO][3980] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:36:09.692566 containerd[1625]: 2025-09-11 00:36:09.652 [INFO][3980] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" host="localhost" Sep 11 00:36:09.692566 containerd[1625]: 2025-09-11 00:36:09.661 [INFO][3980] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:36:09.692566 containerd[1625]: 2025-09-11 00:36:09.663 [INFO][3980] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:36:09.692566 containerd[1625]: 2025-09-11 00:36:09.664 [INFO][3980] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:36:09.692566 containerd[1625]: 2025-09-11 00:36:09.665 [INFO][3980] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:36:09.692566 containerd[1625]: 2025-09-11 00:36:09.665 [INFO][3980] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" host="localhost" Sep 11 00:36:09.692807 containerd[1625]: 2025-09-11 00:36:09.666 [INFO][3980] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa Sep 11 00:36:09.692807 containerd[1625]: 2025-09-11 00:36:09.667 [INFO][3980] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" host="localhost" Sep 11 00:36:09.692807 containerd[1625]: 2025-09-11 00:36:09.670 [INFO][3980] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" host="localhost" Sep 11 00:36:09.692807 containerd[1625]: 2025-09-11 00:36:09.670 [INFO][3980] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" host="localhost" Sep 11 00:36:09.692807 containerd[1625]: 2025-09-11 00:36:09.670 [INFO][3980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:36:09.692807 containerd[1625]: 2025-09-11 00:36:09.670 [INFO][3980] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" HandleID="k8s-pod-network.439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" Workload="localhost-k8s-whisker--7cbc5d778d--8wn82-eth0" Sep 11 00:36:09.692935 containerd[1625]: 2025-09-11 00:36:09.671 [INFO][3969] cni-plugin/k8s.go 418: Populated endpoint ContainerID="439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" Namespace="calico-system" Pod="whisker-7cbc5d778d-8wn82" WorkloadEndpoint="localhost-k8s-whisker--7cbc5d778d--8wn82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7cbc5d778d--8wn82-eth0", GenerateName:"whisker-7cbc5d778d-", Namespace:"calico-system", SelfLink:"", UID:"e4859b88-9ade-4ca3-85cf-3a7dc3bed2a5", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cbc5d778d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7cbc5d778d-8wn82", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia5b74968f35", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:36:09.692935 containerd[1625]: 2025-09-11 00:36:09.671 [INFO][3969] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" Namespace="calico-system" Pod="whisker-7cbc5d778d-8wn82" WorkloadEndpoint="localhost-k8s-whisker--7cbc5d778d--8wn82-eth0" Sep 11 00:36:09.693010 containerd[1625]: 2025-09-11 00:36:09.671 [INFO][3969] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5b74968f35 ContainerID="439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" Namespace="calico-system" Pod="whisker-7cbc5d778d-8wn82" WorkloadEndpoint="localhost-k8s-whisker--7cbc5d778d--8wn82-eth0" Sep 11 00:36:09.693010 containerd[1625]: 2025-09-11 00:36:09.683 [INFO][3969] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" Namespace="calico-system" Pod="whisker-7cbc5d778d-8wn82" WorkloadEndpoint="localhost-k8s-whisker--7cbc5d778d--8wn82-eth0" Sep 11 00:36:09.693052 containerd[1625]: 2025-09-11 00:36:09.683 [INFO][3969] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" Namespace="calico-system" Pod="whisker-7cbc5d778d-8wn82" WorkloadEndpoint="localhost-k8s-whisker--7cbc5d778d--8wn82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7cbc5d778d--8wn82-eth0", GenerateName:"whisker-7cbc5d778d-", Namespace:"calico-system", SelfLink:"", UID:"e4859b88-9ade-4ca3-85cf-3a7dc3bed2a5", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 36, 8, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cbc5d778d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa", Pod:"whisker-7cbc5d778d-8wn82", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia5b74968f35", MAC:"62:0c:d8:c1:40:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:36:09.693170 containerd[1625]: 2025-09-11 00:36:09.689 [INFO][3969] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" Namespace="calico-system" Pod="whisker-7cbc5d778d-8wn82" WorkloadEndpoint="localhost-k8s-whisker--7cbc5d778d--8wn82-eth0" Sep 11 00:36:09.705113 kubelet[2920]: I0911 00:36:09.705019 2920 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="380eb68e-1f44-425e-a2fd-103bf2d1d8db" path="/var/lib/kubelet/pods/380eb68e-1f44-425e-a2fd-103bf2d1d8db/volumes" Sep 11 00:36:09.844152 containerd[1625]: time="2025-09-11T00:36:09.844116060Z" level=info msg="connecting to shim 439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa" address="unix:///run/containerd/s/f16c707915fbd21117ea07a330a4ea1129218683d35724fb4db51a4444fe2365" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:36:09.871187 systemd[1]: Started 
cri-containerd-439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa.scope - libcontainer container 439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa. Sep 11 00:36:09.879040 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:36:09.940177 kubelet[2920]: I0911 00:36:09.939811 2920 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:36:09.985943 containerd[1625]: time="2025-09-11T00:36:09.985861690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cbc5d778d-8wn82,Uid:e4859b88-9ade-4ca3-85cf-3a7dc3bed2a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa\"" Sep 11 00:36:10.006573 containerd[1625]: time="2025-09-11T00:36:10.006514336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 11 00:36:10.733588 systemd-networkd[1538]: calia5b74968f35: Gained IPv6LL Sep 11 00:36:11.995985 containerd[1625]: time="2025-09-11T00:36:11.995956464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:36:11.996954 containerd[1625]: time="2025-09-11T00:36:11.996942086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 11 00:36:11.997382 containerd[1625]: time="2025-09-11T00:36:11.997358928Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:36:11.998798 containerd[1625]: time="2025-09-11T00:36:11.998777935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:36:12.001904 
containerd[1625]: time="2025-09-11T00:36:12.001835053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.995086679s" Sep 11 00:36:12.001904 containerd[1625]: time="2025-09-11T00:36:12.001852395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 11 00:36:12.004061 containerd[1625]: time="2025-09-11T00:36:12.004027339Z" level=info msg="CreateContainer within sandbox \"439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 11 00:36:12.008059 containerd[1625]: time="2025-09-11T00:36:12.008042337Z" level=info msg="Container 6734695dd02c5b1c768893bab1a80274b0d41265f304559ebab4fde48d9dba49: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:36:12.011692 containerd[1625]: time="2025-09-11T00:36:12.011677720Z" level=info msg="CreateContainer within sandbox \"439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6734695dd02c5b1c768893bab1a80274b0d41265f304559ebab4fde48d9dba49\"" Sep 11 00:36:12.012257 containerd[1625]: time="2025-09-11T00:36:12.012165167Z" level=info msg="StartContainer for \"6734695dd02c5b1c768893bab1a80274b0d41265f304559ebab4fde48d9dba49\"" Sep 11 00:36:12.013883 containerd[1625]: time="2025-09-11T00:36:12.013581148Z" level=info msg="connecting to shim 6734695dd02c5b1c768893bab1a80274b0d41265f304559ebab4fde48d9dba49" address="unix:///run/containerd/s/f16c707915fbd21117ea07a330a4ea1129218683d35724fb4db51a4444fe2365" protocol=ttrpc version=3 Sep 11 00:36:12.031198 
systemd[1]: Started cri-containerd-6734695dd02c5b1c768893bab1a80274b0d41265f304559ebab4fde48d9dba49.scope - libcontainer container 6734695dd02c5b1c768893bab1a80274b0d41265f304559ebab4fde48d9dba49. Sep 11 00:36:12.071693 containerd[1625]: time="2025-09-11T00:36:12.071670857Z" level=info msg="StartContainer for \"6734695dd02c5b1c768893bab1a80274b0d41265f304559ebab4fde48d9dba49\" returns successfully" Sep 11 00:36:12.072641 containerd[1625]: time="2025-09-11T00:36:12.072629377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 11 00:36:12.704351 containerd[1625]: time="2025-09-11T00:36:12.704316023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bzm2v,Uid:fd90569b-afca-452c-ba50-8a3dd99f9227,Namespace:calico-system,Attempt:0,}" Sep 11 00:36:12.779831 systemd-networkd[1538]: cali19ccc515000: Link UP Sep 11 00:36:12.780158 systemd-networkd[1538]: cali19ccc515000: Gained carrier Sep 11 00:36:12.792104 containerd[1625]: 2025-09-11 00:36:12.723 [INFO][4212] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:36:12.792104 containerd[1625]: 2025-09-11 00:36:12.733 [INFO][4212] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bzm2v-eth0 csi-node-driver- calico-system fd90569b-afca-452c-ba50-8a3dd99f9227 716 0 2025-09-11 00:35:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bzm2v eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali19ccc515000 [] [] }} ContainerID="a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" Namespace="calico-system" Pod="csi-node-driver-bzm2v" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--bzm2v-" Sep 11 00:36:12.792104 containerd[1625]: 2025-09-11 00:36:12.733 [INFO][4212] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" Namespace="calico-system" Pod="csi-node-driver-bzm2v" WorkloadEndpoint="localhost-k8s-csi--node--driver--bzm2v-eth0" Sep 11 00:36:12.792104 containerd[1625]: 2025-09-11 00:36:12.753 [INFO][4224] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" HandleID="k8s-pod-network.a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" Workload="localhost-k8s-csi--node--driver--bzm2v-eth0" Sep 11 00:36:12.792336 containerd[1625]: 2025-09-11 00:36:12.753 [INFO][4224] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" HandleID="k8s-pod-network.a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" Workload="localhost-k8s-csi--node--driver--bzm2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bzm2v", "timestamp":"2025-09-11 00:36:12.753439388 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:36:12.792336 containerd[1625]: 2025-09-11 00:36:12.753 [INFO][4224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:36:12.792336 containerd[1625]: 2025-09-11 00:36:12.753 [INFO][4224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:36:12.792336 containerd[1625]: 2025-09-11 00:36:12.753 [INFO][4224] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:36:12.792336 containerd[1625]: 2025-09-11 00:36:12.757 [INFO][4224] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" host="localhost" Sep 11 00:36:12.792336 containerd[1625]: 2025-09-11 00:36:12.759 [INFO][4224] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:36:12.792336 containerd[1625]: 2025-09-11 00:36:12.765 [INFO][4224] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:36:12.792336 containerd[1625]: 2025-09-11 00:36:12.766 [INFO][4224] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:36:12.792336 containerd[1625]: 2025-09-11 00:36:12.767 [INFO][4224] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:36:12.792336 containerd[1625]: 2025-09-11 00:36:12.767 [INFO][4224] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" host="localhost" Sep 11 00:36:12.793767 containerd[1625]: 2025-09-11 00:36:12.768 [INFO][4224] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7 Sep 11 00:36:12.793767 containerd[1625]: 2025-09-11 00:36:12.772 [INFO][4224] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" host="localhost" Sep 11 00:36:12.793767 containerd[1625]: 2025-09-11 00:36:12.774 [INFO][4224] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" host="localhost" Sep 11 00:36:12.793767 containerd[1625]: 2025-09-11 00:36:12.774 [INFO][4224] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" host="localhost" Sep 11 00:36:12.793767 containerd[1625]: 2025-09-11 00:36:12.774 [INFO][4224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:36:12.793767 containerd[1625]: 2025-09-11 00:36:12.774 [INFO][4224] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" HandleID="k8s-pod-network.a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" Workload="localhost-k8s-csi--node--driver--bzm2v-eth0" Sep 11 00:36:12.793862 containerd[1625]: 2025-09-11 00:36:12.776 [INFO][4212] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" Namespace="calico-system" Pod="csi-node-driver-bzm2v" WorkloadEndpoint="localhost-k8s-csi--node--driver--bzm2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bzm2v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd90569b-afca-452c-ba50-8a3dd99f9227", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 35, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bzm2v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali19ccc515000", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:36:12.793904 containerd[1625]: 2025-09-11 00:36:12.776 [INFO][4212] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" Namespace="calico-system" Pod="csi-node-driver-bzm2v" WorkloadEndpoint="localhost-k8s-csi--node--driver--bzm2v-eth0" Sep 11 00:36:12.793904 containerd[1625]: 2025-09-11 00:36:12.776 [INFO][4212] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19ccc515000 ContainerID="a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" Namespace="calico-system" Pod="csi-node-driver-bzm2v" WorkloadEndpoint="localhost-k8s-csi--node--driver--bzm2v-eth0" Sep 11 00:36:12.793904 containerd[1625]: 2025-09-11 00:36:12.780 [INFO][4212] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" Namespace="calico-system" Pod="csi-node-driver-bzm2v" WorkloadEndpoint="localhost-k8s-csi--node--driver--bzm2v-eth0" Sep 11 00:36:12.793954 containerd[1625]: 2025-09-11 00:36:12.781 [INFO][4212] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" 
Namespace="calico-system" Pod="csi-node-driver-bzm2v" WorkloadEndpoint="localhost-k8s-csi--node--driver--bzm2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bzm2v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd90569b-afca-452c-ba50-8a3dd99f9227", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 35, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7", Pod:"csi-node-driver-bzm2v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali19ccc515000", MAC:"1a:da:0a:cb:4e:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:36:12.794003 containerd[1625]: 2025-09-11 00:36:12.788 [INFO][4212] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" Namespace="calico-system" Pod="csi-node-driver-bzm2v" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--bzm2v-eth0" Sep 11 00:36:12.829221 containerd[1625]: time="2025-09-11T00:36:12.829192468Z" level=info msg="connecting to shim a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7" address="unix:///run/containerd/s/7d58e659a982e3113bc229ed9a987545c62ce671b766a37e9e8ecfeafb0de5c1" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:36:12.844181 systemd[1]: Started cri-containerd-a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7.scope - libcontainer container a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7. Sep 11 00:36:12.853359 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:36:12.861116 containerd[1625]: time="2025-09-11T00:36:12.861080119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bzm2v,Uid:fd90569b-afca-452c-ba50-8a3dd99f9227,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7\"" Sep 11 00:36:13.867224 systemd-networkd[1538]: cali19ccc515000: Gained IPv6LL Sep 11 00:36:14.599340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1294994754.mount: Deactivated successfully. 
Sep 11 00:36:14.703423 containerd[1625]: time="2025-09-11T00:36:14.703391444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-s2sct,Uid:ed7757b9-0398-4e77-bb67-06d7580738b3,Namespace:calico-system,Attempt:0,}" Sep 11 00:36:14.703917 containerd[1625]: time="2025-09-11T00:36:14.703708056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8ff485d9-9gkkq,Uid:b521f003-45f2-4036-a5ef-199d49bdddda,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:36:14.703917 containerd[1625]: time="2025-09-11T00:36:14.703821103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8ff485d9-mlghw,Uid:3e0e7115-886e-4dea-a448-53f78d3f3647,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:36:14.781130 containerd[1625]: time="2025-09-11T00:36:14.780773870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 11 00:36:14.790454 containerd[1625]: time="2025-09-11T00:36:14.789064587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:36:14.792959 containerd[1625]: time="2025-09-11T00:36:14.792940854Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:36:14.797055 containerd[1625]: time="2025-09-11T00:36:14.796623345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:36:14.798850 containerd[1625]: time="2025-09-11T00:36:14.797155559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.724443736s" Sep 11 00:36:14.798895 containerd[1625]: time="2025-09-11T00:36:14.798851962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 11 00:36:14.801247 containerd[1625]: time="2025-09-11T00:36:14.801195548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 11 00:36:14.803502 containerd[1625]: time="2025-09-11T00:36:14.803481918Z" level=info msg="CreateContainer within sandbox \"439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 11 00:36:14.809306 containerd[1625]: time="2025-09-11T00:36:14.809282975Z" level=info msg="Container 7a6e17e4e801c0f255a2e46e090c2129b197768fc1a79f04f4468f776edddffe: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:36:14.815380 containerd[1625]: time="2025-09-11T00:36:14.815356851Z" level=info msg="CreateContainer within sandbox \"439ba593d75b1948b12cb1ab5b14a1821b23dde120ae0ac931aa7b96be12d2fa\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7a6e17e4e801c0f255a2e46e090c2129b197768fc1a79f04f4468f776edddffe\"" Sep 11 00:36:14.817590 containerd[1625]: time="2025-09-11T00:36:14.817572879Z" level=info msg="StartContainer for \"7a6e17e4e801c0f255a2e46e090c2129b197768fc1a79f04f4468f776edddffe\"" Sep 11 00:36:14.819445 containerd[1625]: time="2025-09-11T00:36:14.819417950Z" level=info msg="connecting to shim 7a6e17e4e801c0f255a2e46e090c2129b197768fc1a79f04f4468f776edddffe" address="unix:///run/containerd/s/f16c707915fbd21117ea07a330a4ea1129218683d35724fb4db51a4444fe2365" protocol=ttrpc version=3 Sep 11 00:36:14.848325 systemd[1]: Started 
cri-containerd-7a6e17e4e801c0f255a2e46e090c2129b197768fc1a79f04f4468f776edddffe.scope - libcontainer container 7a6e17e4e801c0f255a2e46e090c2129b197768fc1a79f04f4468f776edddffe. Sep 11 00:36:14.929356 systemd-networkd[1538]: cali25b982db4ef: Link UP Sep 11 00:36:14.932186 systemd-networkd[1538]: cali25b982db4ef: Gained carrier Sep 11 00:36:14.943822 containerd[1625]: 2025-09-11 00:36:14.813 [INFO][4335] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:36:14.943822 containerd[1625]: 2025-09-11 00:36:14.831 [INFO][4335] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b8ff485d9--mlghw-eth0 calico-apiserver-5b8ff485d9- calico-apiserver 3e0e7115-886e-4dea-a448-53f78d3f3647 828 0 2025-09-11 00:35:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b8ff485d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b8ff485d9-mlghw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali25b982db4ef [] [] }} ContainerID="b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" Namespace="calico-apiserver" Pod="calico-apiserver-5b8ff485d9-mlghw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8ff485d9--mlghw-" Sep 11 00:36:14.943822 containerd[1625]: 2025-09-11 00:36:14.831 [INFO][4335] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" Namespace="calico-apiserver" Pod="calico-apiserver-5b8ff485d9-mlghw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8ff485d9--mlghw-eth0" Sep 11 00:36:14.943822 containerd[1625]: 2025-09-11 00:36:14.870 [INFO][4380] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" HandleID="k8s-pod-network.b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" Workload="localhost-k8s-calico--apiserver--5b8ff485d9--mlghw-eth0" Sep 11 00:36:14.944019 containerd[1625]: 2025-09-11 00:36:14.870 [INFO][4380] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" HandleID="k8s-pod-network.b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" Workload="localhost-k8s-calico--apiserver--5b8ff485d9--mlghw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b8ff485d9-mlghw", "timestamp":"2025-09-11 00:36:14.870664041 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:36:14.944019 containerd[1625]: 2025-09-11 00:36:14.870 [INFO][4380] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:36:14.944019 containerd[1625]: 2025-09-11 00:36:14.870 [INFO][4380] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:36:14.944019 containerd[1625]: 2025-09-11 00:36:14.870 [INFO][4380] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:36:14.944019 containerd[1625]: 2025-09-11 00:36:14.877 [INFO][4380] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" host="localhost" Sep 11 00:36:14.944019 containerd[1625]: 2025-09-11 00:36:14.883 [INFO][4380] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:36:14.944019 containerd[1625]: 2025-09-11 00:36:14.895 [INFO][4380] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:36:14.944019 containerd[1625]: 2025-09-11 00:36:14.898 [INFO][4380] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:36:14.944019 containerd[1625]: 2025-09-11 00:36:14.903 [INFO][4380] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:36:14.944019 containerd[1625]: 2025-09-11 00:36:14.903 [INFO][4380] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" host="localhost" Sep 11 00:36:14.944309 containerd[1625]: 2025-09-11 00:36:14.905 [INFO][4380] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed Sep 11 00:36:14.944309 containerd[1625]: 2025-09-11 00:36:14.912 [INFO][4380] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" host="localhost" Sep 11 00:36:14.944309 containerd[1625]: 2025-09-11 00:36:14.921 [INFO][4380] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" host="localhost" Sep 11 00:36:14.944309 containerd[1625]: 2025-09-11 00:36:14.921 [INFO][4380] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" host="localhost" Sep 11 00:36:14.944309 containerd[1625]: 2025-09-11 00:36:14.921 [INFO][4380] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:36:14.944309 containerd[1625]: 2025-09-11 00:36:14.921 [INFO][4380] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" HandleID="k8s-pod-network.b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" Workload="localhost-k8s-calico--apiserver--5b8ff485d9--mlghw-eth0" Sep 11 00:36:14.944569 containerd[1625]: 2025-09-11 00:36:14.925 [INFO][4335] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" Namespace="calico-apiserver" Pod="calico-apiserver-5b8ff485d9-mlghw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8ff485d9--mlghw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b8ff485d9--mlghw-eth0", GenerateName:"calico-apiserver-5b8ff485d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3e0e7115-886e-4dea-a448-53f78d3f3647", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b8ff485d9", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b8ff485d9-mlghw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25b982db4ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:36:14.944702 containerd[1625]: 2025-09-11 00:36:14.925 [INFO][4335] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" Namespace="calico-apiserver" Pod="calico-apiserver-5b8ff485d9-mlghw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8ff485d9--mlghw-eth0" Sep 11 00:36:14.944702 containerd[1625]: 2025-09-11 00:36:14.925 [INFO][4335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25b982db4ef ContainerID="b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" Namespace="calico-apiserver" Pod="calico-apiserver-5b8ff485d9-mlghw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8ff485d9--mlghw-eth0" Sep 11 00:36:14.944702 containerd[1625]: 2025-09-11 00:36:14.932 [INFO][4335] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" Namespace="calico-apiserver" Pod="calico-apiserver-5b8ff485d9-mlghw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8ff485d9--mlghw-eth0" Sep 11 00:36:14.944772 containerd[1625]: 2025-09-11 00:36:14.932 [INFO][4335] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" Namespace="calico-apiserver" Pod="calico-apiserver-5b8ff485d9-mlghw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8ff485d9--mlghw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b8ff485d9--mlghw-eth0", GenerateName:"calico-apiserver-5b8ff485d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3e0e7115-886e-4dea-a448-53f78d3f3647", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b8ff485d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed", Pod:"calico-apiserver-5b8ff485d9-mlghw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25b982db4ef", MAC:"e2:54:93:ac:99:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:36:14.944818 containerd[1625]: 2025-09-11 00:36:14.940 [INFO][4335] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" Namespace="calico-apiserver" Pod="calico-apiserver-5b8ff485d9-mlghw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8ff485d9--mlghw-eth0" Sep 11 00:36:14.949879 containerd[1625]: time="2025-09-11T00:36:14.949845759Z" level=info msg="StartContainer for \"7a6e17e4e801c0f255a2e46e090c2129b197768fc1a79f04f4468f776edddffe\" returns successfully" Sep 11 00:36:14.970284 containerd[1625]: time="2025-09-11T00:36:14.970241489Z" level=info msg="connecting to shim b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed" address="unix:///run/containerd/s/03430d08c583a2bb071691cc97273984cfc3b67a696af22c177457f5206d4568" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:36:14.997245 systemd[1]: Started cri-containerd-b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed.scope - libcontainer container b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed. Sep 11 00:36:15.018350 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:36:15.031923 systemd-networkd[1538]: cali9a3d4097921: Link UP Sep 11 00:36:15.032574 systemd-networkd[1538]: cali9a3d4097921: Gained carrier Sep 11 00:36:15.043665 containerd[1625]: 2025-09-11 00:36:14.816 [INFO][4336] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:36:15.043665 containerd[1625]: 2025-09-11 00:36:14.838 [INFO][4336] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b8ff485d9--9gkkq-eth0 calico-apiserver-5b8ff485d9- calico-apiserver b521f003-45f2-4036-a5ef-199d49bdddda 827 0 2025-09-11 00:35:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b8ff485d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b8ff485d9-9gkkq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9a3d4097921 [] [] }} ContainerID="f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" Namespace="calico-apiserver" Pod="calico-apiserver-5b8ff485d9-9gkkq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8ff485d9--9gkkq-" Sep 11 00:36:15.043665 containerd[1625]: 2025-09-11 00:36:14.839 [INFO][4336] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" Namespace="calico-apiserver" Pod="calico-apiserver-5b8ff485d9-9gkkq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8ff485d9--9gkkq-eth0" Sep 11 00:36:15.043665 containerd[1625]: 2025-09-11 00:36:14.881 [INFO][4385] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" HandleID="k8s-pod-network.f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" Workload="localhost-k8s-calico--apiserver--5b8ff485d9--9gkkq-eth0" Sep 11 00:36:15.043844 containerd[1625]: 2025-09-11 00:36:14.881 [INFO][4385] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" HandleID="k8s-pod-network.f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" Workload="localhost-k8s-calico--apiserver--5b8ff485d9--9gkkq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df740), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b8ff485d9-9gkkq", "timestamp":"2025-09-11 00:36:14.881547367 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:36:15.043844 containerd[1625]: 2025-09-11 00:36:14.881 [INFO][4385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:36:15.043844 containerd[1625]: 2025-09-11 00:36:14.921 [INFO][4385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:36:15.043844 containerd[1625]: 2025-09-11 00:36:14.921 [INFO][4385] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:36:15.043844 containerd[1625]: 2025-09-11 00:36:14.977 [INFO][4385] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" host="localhost" Sep 11 00:36:15.043844 containerd[1625]: 2025-09-11 00:36:14.981 [INFO][4385] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:36:15.043844 containerd[1625]: 2025-09-11 00:36:14.994 [INFO][4385] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:36:15.043844 containerd[1625]: 2025-09-11 00:36:14.996 [INFO][4385] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:36:15.043844 containerd[1625]: 2025-09-11 00:36:15.001 [INFO][4385] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:36:15.043844 containerd[1625]: 2025-09-11 00:36:15.001 [INFO][4385] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" host="localhost" Sep 11 00:36:15.044690 containerd[1625]: 2025-09-11 00:36:15.006 [INFO][4385] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396 Sep 11 00:36:15.044690 containerd[1625]: 2025-09-11 00:36:15.012 [INFO][4385] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" host="localhost" Sep 11 00:36:15.044690 containerd[1625]: 2025-09-11 00:36:15.021 [INFO][4385] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" host="localhost" Sep 11 00:36:15.044690 containerd[1625]: 2025-09-11 00:36:15.021 [INFO][4385] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" host="localhost" Sep 11 00:36:15.044690 containerd[1625]: 2025-09-11 00:36:15.021 [INFO][4385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:36:15.044690 containerd[1625]: 2025-09-11 00:36:15.022 [INFO][4385] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" HandleID="k8s-pod-network.f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" Workload="localhost-k8s-calico--apiserver--5b8ff485d9--9gkkq-eth0" Sep 11 00:36:15.044819 containerd[1625]: 2025-09-11 00:36:15.025 [INFO][4336] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" Namespace="calico-apiserver" Pod="calico-apiserver-5b8ff485d9-9gkkq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8ff485d9--9gkkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b8ff485d9--9gkkq-eth0", GenerateName:"calico-apiserver-5b8ff485d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"b521f003-45f2-4036-a5ef-199d49bdddda", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 35, 47, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b8ff485d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b8ff485d9-9gkkq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a3d4097921", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:36:15.044872 containerd[1625]: 2025-09-11 00:36:15.025 [INFO][4336] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" Namespace="calico-apiserver" Pod="calico-apiserver-5b8ff485d9-9gkkq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8ff485d9--9gkkq-eth0" Sep 11 00:36:15.044872 containerd[1625]: 2025-09-11 00:36:15.026 [INFO][4336] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a3d4097921 ContainerID="f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" Namespace="calico-apiserver" Pod="calico-apiserver-5b8ff485d9-9gkkq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8ff485d9--9gkkq-eth0" Sep 11 00:36:15.044872 containerd[1625]: 2025-09-11 00:36:15.034 [INFO][4336] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" Namespace="calico-apiserver" Pod="calico-apiserver-5b8ff485d9-9gkkq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8ff485d9--9gkkq-eth0" Sep 11 00:36:15.044925 containerd[1625]: 2025-09-11 00:36:15.034 [INFO][4336] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" Namespace="calico-apiserver" Pod="calico-apiserver-5b8ff485d9-9gkkq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8ff485d9--9gkkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b8ff485d9--9gkkq-eth0", GenerateName:"calico-apiserver-5b8ff485d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"b521f003-45f2-4036-a5ef-199d49bdddda", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b8ff485d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396", Pod:"calico-apiserver-5b8ff485d9-9gkkq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a3d4097921", MAC:"fa:fd:b0:b5:a0:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:36:15.046150 containerd[1625]: 2025-09-11 00:36:15.041 [INFO][4336] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" Namespace="calico-apiserver" Pod="calico-apiserver-5b8ff485d9-9gkkq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8ff485d9--9gkkq-eth0" Sep 11 00:36:15.047539 kubelet[2920]: I0911 00:36:15.041377 2920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7cbc5d778d-8wn82" podStartSLOduration=2.246019696 podStartE2EDuration="7.041277407s" podCreationTimestamp="2025-09-11 00:36:08 +0000 UTC" firstStartedPulling="2025-09-11 00:36:10.005795285 +0000 UTC m=+38.392779287" lastFinishedPulling="2025-09-11 00:36:14.801052995 +0000 UTC m=+43.188036998" observedRunningTime="2025-09-11 00:36:15.010790562 +0000 UTC m=+43.397774573" watchObservedRunningTime="2025-09-11 00:36:15.041277407 +0000 UTC m=+43.428261413" Sep 11 00:36:15.059590 containerd[1625]: time="2025-09-11T00:36:15.059562090Z" level=info msg="connecting to shim f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396" address="unix:///run/containerd/s/1969e6e43a5ad6662ac5d548c4db2ba03483c10709eb484da7502d81a6f67164" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:36:15.090405 systemd[1]: Started cri-containerd-f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396.scope - libcontainer container f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396. 
Sep 11 00:36:15.098887 containerd[1625]: time="2025-09-11T00:36:15.098489048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8ff485d9-mlghw,Uid:3e0e7115-886e-4dea-a448-53f78d3f3647,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed\"" Sep 11 00:36:15.103608 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:36:15.137031 containerd[1625]: time="2025-09-11T00:36:15.137006233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8ff485d9-9gkkq,Uid:b521f003-45f2-4036-a5ef-199d49bdddda,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396\"" Sep 11 00:36:15.170956 systemd-networkd[1538]: calib861f52641d: Link UP Sep 11 00:36:15.171406 systemd-networkd[1538]: calib861f52641d: Gained carrier Sep 11 00:36:15.196129 containerd[1625]: 2025-09-11 00:36:14.832 [INFO][4353] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:36:15.196129 containerd[1625]: 2025-09-11 00:36:14.847 [INFO][4353] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--s2sct-eth0 goldmane-7988f88666- calico-system ed7757b9-0398-4e77-bb67-06d7580738b3 825 0 2025-09-11 00:35:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-s2sct eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib861f52641d [] [] }} ContainerID="9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" Namespace="calico-system" Pod="goldmane-7988f88666-s2sct" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--s2sct-" Sep 11 00:36:15.196129 containerd[1625]: 2025-09-11 00:36:14.847 [INFO][4353] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" Namespace="calico-system" Pod="goldmane-7988f88666-s2sct" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--s2sct-eth0" Sep 11 00:36:15.196129 containerd[1625]: 2025-09-11 00:36:14.892 [INFO][4390] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" HandleID="k8s-pod-network.9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" Workload="localhost-k8s-goldmane--7988f88666--s2sct-eth0" Sep 11 00:36:15.196284 containerd[1625]: 2025-09-11 00:36:14.892 [INFO][4390] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" HandleID="k8s-pod-network.9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" Workload="localhost-k8s-goldmane--7988f88666--s2sct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-s2sct", "timestamp":"2025-09-11 00:36:14.892533963 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:36:15.196284 containerd[1625]: 2025-09-11 00:36:14.892 [INFO][4390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:36:15.196284 containerd[1625]: 2025-09-11 00:36:15.021 [INFO][4390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:36:15.196284 containerd[1625]: 2025-09-11 00:36:15.022 [INFO][4390] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:36:15.196284 containerd[1625]: 2025-09-11 00:36:15.077 [INFO][4390] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" host="localhost" Sep 11 00:36:15.196284 containerd[1625]: 2025-09-11 00:36:15.085 [INFO][4390] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:36:15.196284 containerd[1625]: 2025-09-11 00:36:15.092 [INFO][4390] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:36:15.196284 containerd[1625]: 2025-09-11 00:36:15.095 [INFO][4390] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:36:15.196284 containerd[1625]: 2025-09-11 00:36:15.101 [INFO][4390] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:36:15.196284 containerd[1625]: 2025-09-11 00:36:15.101 [INFO][4390] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" host="localhost" Sep 11 00:36:15.198933 containerd[1625]: 2025-09-11 00:36:15.120 [INFO][4390] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b Sep 11 00:36:15.198933 containerd[1625]: 2025-09-11 00:36:15.139 [INFO][4390] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" host="localhost" Sep 11 00:36:15.198933 containerd[1625]: 2025-09-11 00:36:15.167 [INFO][4390] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" host="localhost" Sep 11 00:36:15.198933 containerd[1625]: 2025-09-11 00:36:15.167 [INFO][4390] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" host="localhost" Sep 11 00:36:15.198933 containerd[1625]: 2025-09-11 00:36:15.167 [INFO][4390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:36:15.198933 containerd[1625]: 2025-09-11 00:36:15.167 [INFO][4390] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" HandleID="k8s-pod-network.9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" Workload="localhost-k8s-goldmane--7988f88666--s2sct-eth0" Sep 11 00:36:15.199052 containerd[1625]: 2025-09-11 00:36:15.168 [INFO][4353] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" Namespace="calico-system" Pod="goldmane-7988f88666-s2sct" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--s2sct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--s2sct-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"ed7757b9-0398-4e77-bb67-06d7580738b3", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 35, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-s2sct", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib861f52641d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:36:15.199052 containerd[1625]: 2025-09-11 00:36:15.169 [INFO][4353] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" Namespace="calico-system" Pod="goldmane-7988f88666-s2sct" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--s2sct-eth0" Sep 11 00:36:15.199132 containerd[1625]: 2025-09-11 00:36:15.169 [INFO][4353] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib861f52641d ContainerID="9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" Namespace="calico-system" Pod="goldmane-7988f88666-s2sct" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--s2sct-eth0" Sep 11 00:36:15.199132 containerd[1625]: 2025-09-11 00:36:15.171 [INFO][4353] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" Namespace="calico-system" Pod="goldmane-7988f88666-s2sct" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--s2sct-eth0" Sep 11 00:36:15.199166 containerd[1625]: 2025-09-11 00:36:15.171 [INFO][4353] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" Namespace="calico-system" Pod="goldmane-7988f88666-s2sct" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--s2sct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--s2sct-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"ed7757b9-0398-4e77-bb67-06d7580738b3", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 35, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b", Pod:"goldmane-7988f88666-s2sct", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib861f52641d", MAC:"66:3b:81:0b:43:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:36:15.199207 containerd[1625]: 2025-09-11 00:36:15.195 [INFO][4353] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" Namespace="calico-system" Pod="goldmane-7988f88666-s2sct" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--s2sct-eth0" Sep 11 00:36:15.483190 containerd[1625]: time="2025-09-11T00:36:15.482241366Z" level=info msg="connecting to shim 
9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b" address="unix:///run/containerd/s/766d379ecd37b187bbfdac084015818b27135988b7d101acd08e8e2aa1c47c14" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:36:15.505292 systemd[1]: Started cri-containerd-9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b.scope - libcontainer container 9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b. Sep 11 00:36:15.521615 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:36:15.565511 containerd[1625]: time="2025-09-11T00:36:15.565428277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-s2sct,Uid:ed7757b9-0398-4e77-bb67-06d7580738b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b\"" Sep 11 00:36:15.703661 containerd[1625]: time="2025-09-11T00:36:15.703574352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bh6ht,Uid:45d522ab-bc07-4bcb-aee3-7fc8b8a95e06,Namespace:kube-system,Attempt:0,}" Sep 11 00:36:15.782022 systemd-networkd[1538]: calic3aa5539531: Link UP Sep 11 00:36:15.782357 systemd-networkd[1538]: calic3aa5539531: Gained carrier Sep 11 00:36:15.793853 containerd[1625]: 2025-09-11 00:36:15.736 [INFO][4599] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:36:15.793853 containerd[1625]: 2025-09-11 00:36:15.742 [INFO][4599] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--bh6ht-eth0 coredns-7c65d6cfc9- kube-system 45d522ab-bc07-4bcb-aee3-7fc8b8a95e06 826 0 2025-09-11 00:35:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-bh6ht eth0 
coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic3aa5539531 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bh6ht" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bh6ht-" Sep 11 00:36:15.793853 containerd[1625]: 2025-09-11 00:36:15.742 [INFO][4599] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bh6ht" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bh6ht-eth0" Sep 11 00:36:15.793853 containerd[1625]: 2025-09-11 00:36:15.759 [INFO][4611] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" HandleID="k8s-pod-network.656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" Workload="localhost-k8s-coredns--7c65d6cfc9--bh6ht-eth0" Sep 11 00:36:15.794228 containerd[1625]: 2025-09-11 00:36:15.759 [INFO][4611] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" HandleID="k8s-pod-network.656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" Workload="localhost-k8s-coredns--7c65d6cfc9--bh6ht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f640), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-bh6ht", "timestamp":"2025-09-11 00:36:15.759596788 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:36:15.794228 containerd[1625]: 2025-09-11 00:36:15.759 [INFO][4611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 11 00:36:15.794228 containerd[1625]: 2025-09-11 00:36:15.759 [INFO][4611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:36:15.794228 containerd[1625]: 2025-09-11 00:36:15.759 [INFO][4611] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:36:15.794228 containerd[1625]: 2025-09-11 00:36:15.763 [INFO][4611] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" host="localhost" Sep 11 00:36:15.794228 containerd[1625]: 2025-09-11 00:36:15.766 [INFO][4611] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:36:15.794228 containerd[1625]: 2025-09-11 00:36:15.768 [INFO][4611] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:36:15.794228 containerd[1625]: 2025-09-11 00:36:15.769 [INFO][4611] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:36:15.794228 containerd[1625]: 2025-09-11 00:36:15.771 [INFO][4611] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:36:15.794228 containerd[1625]: 2025-09-11 00:36:15.771 [INFO][4611] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" host="localhost" Sep 11 00:36:15.794804 containerd[1625]: 2025-09-11 00:36:15.772 [INFO][4611] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43 Sep 11 00:36:15.794804 containerd[1625]: 2025-09-11 00:36:15.774 [INFO][4611] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" host="localhost" Sep 11 00:36:15.794804 containerd[1625]: 2025-09-11 00:36:15.778 [INFO][4611] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" host="localhost" Sep 11 00:36:15.794804 containerd[1625]: 2025-09-11 00:36:15.778 [INFO][4611] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" host="localhost" Sep 11 00:36:15.794804 containerd[1625]: 2025-09-11 00:36:15.778 [INFO][4611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:36:15.794804 containerd[1625]: 2025-09-11 00:36:15.778 [INFO][4611] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" HandleID="k8s-pod-network.656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" Workload="localhost-k8s-coredns--7c65d6cfc9--bh6ht-eth0" Sep 11 00:36:15.794940 containerd[1625]: 2025-09-11 00:36:15.779 [INFO][4599] cni-plugin/k8s.go 418: Populated endpoint ContainerID="656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bh6ht" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bh6ht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--bh6ht-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"45d522ab-bc07-4bcb-aee3-7fc8b8a95e06", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 35, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-bh6ht", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3aa5539531", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:36:15.795010 containerd[1625]: 2025-09-11 00:36:15.779 [INFO][4599] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bh6ht" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bh6ht-eth0" Sep 11 00:36:15.795010 containerd[1625]: 2025-09-11 00:36:15.779 [INFO][4599] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3aa5539531 ContainerID="656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bh6ht" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bh6ht-eth0" Sep 11 00:36:15.795010 containerd[1625]: 2025-09-11 00:36:15.782 [INFO][4599] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bh6ht" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bh6ht-eth0" Sep 11 00:36:15.795079 containerd[1625]: 2025-09-11 00:36:15.782 [INFO][4599] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bh6ht" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bh6ht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--bh6ht-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"45d522ab-bc07-4bcb-aee3-7fc8b8a95e06", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 35, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43", Pod:"coredns-7c65d6cfc9-bh6ht", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3aa5539531", MAC:"16:3a:43:7a:30:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:36:15.795079 containerd[1625]: 2025-09-11 00:36:15.788 [INFO][4599] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bh6ht" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bh6ht-eth0" Sep 11 00:36:15.823650 containerd[1625]: time="2025-09-11T00:36:15.823576139Z" level=info msg="connecting to shim 656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43" address="unix:///run/containerd/s/bbe3fb536204be9a02a2a44bfd176657191cbbddf083cfb4af196febb992ca70" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:36:15.842203 systemd[1]: Started cri-containerd-656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43.scope - libcontainer container 656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43. 
Sep 11 00:36:15.851595 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:36:15.893340 containerd[1625]: time="2025-09-11T00:36:15.893314176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bh6ht,Uid:45d522ab-bc07-4bcb-aee3-7fc8b8a95e06,Namespace:kube-system,Attempt:0,} returns sandbox id \"656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43\"" Sep 11 00:36:15.920862 containerd[1625]: time="2025-09-11T00:36:15.920816302Z" level=info msg="CreateContainer within sandbox \"656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:36:15.939267 containerd[1625]: time="2025-09-11T00:36:15.939234282Z" level=info msg="Container 46d52aa1eaefa4f5fc1402f89e5a7a779f249c40c6dcc1216f6256ab743d7caa: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:36:15.941935 containerd[1625]: time="2025-09-11T00:36:15.941919045Z" level=info msg="CreateContainer within sandbox \"656cac70935748520b40ddba4d725d3d7daf2c488063f666768375dc8d200d43\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"46d52aa1eaefa4f5fc1402f89e5a7a779f249c40c6dcc1216f6256ab743d7caa\"" Sep 11 00:36:15.942493 containerd[1625]: time="2025-09-11T00:36:15.942452636Z" level=info msg="StartContainer for \"46d52aa1eaefa4f5fc1402f89e5a7a779f249c40c6dcc1216f6256ab743d7caa\"" Sep 11 00:36:15.943097 containerd[1625]: time="2025-09-11T00:36:15.943069262Z" level=info msg="connecting to shim 46d52aa1eaefa4f5fc1402f89e5a7a779f249c40c6dcc1216f6256ab743d7caa" address="unix:///run/containerd/s/bbe3fb536204be9a02a2a44bfd176657191cbbddf083cfb4af196febb992ca70" protocol=ttrpc version=3 Sep 11 00:36:15.963232 systemd[1]: Started cri-containerd-46d52aa1eaefa4f5fc1402f89e5a7a779f249c40c6dcc1216f6256ab743d7caa.scope - libcontainer container 46d52aa1eaefa4f5fc1402f89e5a7a779f249c40c6dcc1216f6256ab743d7caa. 
Sep 11 00:36:15.985936 containerd[1625]: time="2025-09-11T00:36:15.985904469Z" level=info msg="StartContainer for \"46d52aa1eaefa4f5fc1402f89e5a7a779f249c40c6dcc1216f6256ab743d7caa\" returns successfully" Sep 11 00:36:16.107238 systemd-networkd[1538]: cali25b982db4ef: Gained IPv6LL Sep 11 00:36:16.491171 systemd-networkd[1538]: cali9a3d4097921: Gained IPv6LL Sep 11 00:36:16.702767 containerd[1625]: time="2025-09-11T00:36:16.702732348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lxfrt,Uid:009ce57a-9c56-41ee-87a7-edcfd6755687,Namespace:kube-system,Attempt:0,}" Sep 11 00:36:16.770922 systemd-networkd[1538]: cali66d7d7057d5: Link UP Sep 11 00:36:16.771388 systemd-networkd[1538]: cali66d7d7057d5: Gained carrier Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.723 [INFO][4721] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.731 [INFO][4721] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--lxfrt-eth0 coredns-7c65d6cfc9- kube-system 009ce57a-9c56-41ee-87a7-edcfd6755687 819 0 2025-09-11 00:35:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-lxfrt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali66d7d7057d5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lxfrt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lxfrt-" Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.731 [INFO][4721] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lxfrt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lxfrt-eth0" Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.749 [INFO][4733] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" HandleID="k8s-pod-network.c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" Workload="localhost-k8s-coredns--7c65d6cfc9--lxfrt-eth0" Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.749 [INFO][4733] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" HandleID="k8s-pod-network.c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" Workload="localhost-k8s-coredns--7c65d6cfc9--lxfrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-lxfrt", "timestamp":"2025-09-11 00:36:16.749218247 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.749 [INFO][4733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.749 [INFO][4733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.749 [INFO][4733] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.753 [INFO][4733] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" host="localhost" Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.755 [INFO][4733] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.756 [INFO][4733] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.757 [INFO][4733] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.759 [INFO][4733] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.759 [INFO][4733] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" host="localhost" Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.759 [INFO][4733] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38 Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.761 [INFO][4733] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" host="localhost" Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.765 [INFO][4733] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" host="localhost" Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.765 [INFO][4733] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" host="localhost" Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.765 [INFO][4733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:36:16.782939 containerd[1625]: 2025-09-11 00:36:16.765 [INFO][4733] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" HandleID="k8s-pod-network.c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" Workload="localhost-k8s-coredns--7c65d6cfc9--lxfrt-eth0" Sep 11 00:36:16.784038 containerd[1625]: 2025-09-11 00:36:16.767 [INFO][4721] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lxfrt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lxfrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--lxfrt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"009ce57a-9c56-41ee-87a7-edcfd6755687", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 35, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-lxfrt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66d7d7057d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:36:16.784038 containerd[1625]: 2025-09-11 00:36:16.767 [INFO][4721] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lxfrt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lxfrt-eth0" Sep 11 00:36:16.784038 containerd[1625]: 2025-09-11 00:36:16.767 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66d7d7057d5 ContainerID="c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lxfrt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lxfrt-eth0" Sep 11 00:36:16.784038 containerd[1625]: 2025-09-11 00:36:16.771 [INFO][4721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lxfrt" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lxfrt-eth0" Sep 11 00:36:16.784038 containerd[1625]: 2025-09-11 00:36:16.772 [INFO][4721] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lxfrt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lxfrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--lxfrt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"009ce57a-9c56-41ee-87a7-edcfd6755687", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 35, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38", Pod:"coredns-7c65d6cfc9-lxfrt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66d7d7057d5", MAC:"52:b9:01:67:35:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:36:16.784038 containerd[1625]: 2025-09-11 00:36:16.781 [INFO][4721] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lxfrt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lxfrt-eth0" Sep 11 00:36:16.797255 containerd[1625]: time="2025-09-11T00:36:16.797204566Z" level=info msg="connecting to shim c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38" address="unix:///run/containerd/s/7c11067fc70e1a96da5658d2ffdfae0c7a4cbbc1ba33c7e9e3afbd76497d1cfd" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:36:16.811292 systemd-networkd[1538]: calib861f52641d: Gained IPv6LL Sep 11 00:36:16.819183 systemd[1]: Started cri-containerd-c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38.scope - libcontainer container c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38. 
Sep 11 00:36:16.827017 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:36:16.856404 containerd[1625]: time="2025-09-11T00:36:16.856371919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lxfrt,Uid:009ce57a-9c56-41ee-87a7-edcfd6755687,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38\"" Sep 11 00:36:16.887761 containerd[1625]: time="2025-09-11T00:36:16.887720933Z" level=info msg="CreateContainer within sandbox \"c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:36:16.898624 containerd[1625]: time="2025-09-11T00:36:16.898550074Z" level=info msg="Container 3833de85c99d545d31bca6950ed019f16431155552e060da8fc88b71a9eefef9: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:36:16.901297 containerd[1625]: time="2025-09-11T00:36:16.901230104Z" level=info msg="CreateContainer within sandbox \"c0a4c80c43eb07e87e4dcdca01782a32a14cc494b97a20d0f3cc9c4c140d3d38\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3833de85c99d545d31bca6950ed019f16431155552e060da8fc88b71a9eefef9\"" Sep 11 00:36:16.901723 containerd[1625]: time="2025-09-11T00:36:16.901711586Z" level=info msg="StartContainer for \"3833de85c99d545d31bca6950ed019f16431155552e060da8fc88b71a9eefef9\"" Sep 11 00:36:16.902421 containerd[1625]: time="2025-09-11T00:36:16.902396094Z" level=info msg="connecting to shim 3833de85c99d545d31bca6950ed019f16431155552e060da8fc88b71a9eefef9" address="unix:///run/containerd/s/7c11067fc70e1a96da5658d2ffdfae0c7a4cbbc1ba33c7e9e3afbd76497d1cfd" protocol=ttrpc version=3 Sep 11 00:36:16.919226 systemd[1]: Started cri-containerd-3833de85c99d545d31bca6950ed019f16431155552e060da8fc88b71a9eefef9.scope - libcontainer container 3833de85c99d545d31bca6950ed019f16431155552e060da8fc88b71a9eefef9. 
Sep 11 00:36:16.944568 containerd[1625]: time="2025-09-11T00:36:16.944520014Z" level=info msg="StartContainer for \"3833de85c99d545d31bca6950ed019f16431155552e060da8fc88b71a9eefef9\" returns successfully" Sep 11 00:36:17.001894 kubelet[2920]: I0911 00:36:17.001797 2920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bh6ht" podStartSLOduration=39.001395355 podStartE2EDuration="39.001395355s" podCreationTimestamp="2025-09-11 00:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:36:16.993585398 +0000 UTC m=+45.380569410" watchObservedRunningTime="2025-09-11 00:36:17.001395355 +0000 UTC m=+45.388379367" Sep 11 00:36:17.002848 kubelet[2920]: I0911 00:36:17.002070 2920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lxfrt" podStartSLOduration=39.002061213 podStartE2EDuration="39.002061213s" podCreationTimestamp="2025-09-11 00:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:36:17.001932841 +0000 UTC m=+45.388916844" watchObservedRunningTime="2025-09-11 00:36:17.002061213 +0000 UTC m=+45.389045221" Sep 11 00:36:17.188493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1533438792.mount: Deactivated successfully. 
Sep 11 00:36:17.358803 containerd[1625]: time="2025-09-11T00:36:17.358778529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:36:17.359436 containerd[1625]: time="2025-09-11T00:36:17.359425040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 11 00:36:17.359794 containerd[1625]: time="2025-09-11T00:36:17.359771884Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:36:17.360981 containerd[1625]: time="2025-09-11T00:36:17.360960027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:36:17.361641 containerd[1625]: time="2025-09-11T00:36:17.361579711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.560366054s" Sep 11 00:36:17.361641 containerd[1625]: time="2025-09-11T00:36:17.361595723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 11 00:36:17.362307 containerd[1625]: time="2025-09-11T00:36:17.362287805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:36:17.363981 containerd[1625]: time="2025-09-11T00:36:17.363955173Z" level=info msg="CreateContainer within sandbox \"a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 11 00:36:17.371839 containerd[1625]: time="2025-09-11T00:36:17.371816520Z" level=info msg="Container 9b75b93b82cabe24f8457a51e33b452a1b5968b1a77403a93bc65bc0554abf4c: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:36:17.387273 systemd-networkd[1538]: calic3aa5539531: Gained IPv6LL Sep 11 00:36:17.388268 containerd[1625]: time="2025-09-11T00:36:17.388249297Z" level=info msg="CreateContainer within sandbox \"a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9b75b93b82cabe24f8457a51e33b452a1b5968b1a77403a93bc65bc0554abf4c\"" Sep 11 00:36:17.388720 containerd[1625]: time="2025-09-11T00:36:17.388697676Z" level=info msg="StartContainer for \"9b75b93b82cabe24f8457a51e33b452a1b5968b1a77403a93bc65bc0554abf4c\"" Sep 11 00:36:17.389802 containerd[1625]: time="2025-09-11T00:36:17.389783986Z" level=info msg="connecting to shim 9b75b93b82cabe24f8457a51e33b452a1b5968b1a77403a93bc65bc0554abf4c" address="unix:///run/containerd/s/7d58e659a982e3113bc229ed9a987545c62ce671b766a37e9e8ecfeafb0de5c1" protocol=ttrpc version=3 Sep 11 00:36:17.411285 systemd[1]: Started cri-containerd-9b75b93b82cabe24f8457a51e33b452a1b5968b1a77403a93bc65bc0554abf4c.scope - libcontainer container 9b75b93b82cabe24f8457a51e33b452a1b5968b1a77403a93bc65bc0554abf4c. 
Sep 11 00:36:17.462675 containerd[1625]: time="2025-09-11T00:36:17.462635783Z" level=info msg="StartContainer for \"9b75b93b82cabe24f8457a51e33b452a1b5968b1a77403a93bc65bc0554abf4c\" returns successfully" Sep 11 00:36:17.703262 containerd[1625]: time="2025-09-11T00:36:17.703203971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9574cb5b-hfwsx,Uid:38d8df35-ee9c-4504-81fd-f83175c767ed,Namespace:calico-system,Attempt:0,}" Sep 11 00:36:17.805054 systemd-networkd[1538]: cali7474d435398: Link UP Sep 11 00:36:17.805827 systemd-networkd[1538]: cali7474d435398: Gained carrier Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.723 [INFO][4881] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.730 [INFO][4881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6f9574cb5b--hfwsx-eth0 calico-kube-controllers-6f9574cb5b- calico-system 38d8df35-ee9c-4504-81fd-f83175c767ed 822 0 2025-09-11 00:35:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f9574cb5b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6f9574cb5b-hfwsx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7474d435398 [] [] }} ContainerID="d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" Namespace="calico-system" Pod="calico-kube-controllers-6f9574cb5b-hfwsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f9574cb5b--hfwsx-" Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.730 [INFO][4881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" Namespace="calico-system" Pod="calico-kube-controllers-6f9574cb5b-hfwsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f9574cb5b--hfwsx-eth0" Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.773 [INFO][4894] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" HandleID="k8s-pod-network.d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" Workload="localhost-k8s-calico--kube--controllers--6f9574cb5b--hfwsx-eth0" Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.773 [INFO][4894] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" HandleID="k8s-pod-network.d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" Workload="localhost-k8s-calico--kube--controllers--6f9574cb5b--hfwsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d7020), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6f9574cb5b-hfwsx", "timestamp":"2025-09-11 00:36:17.773781079 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.773 [INFO][4894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.773 [INFO][4894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.773 [INFO][4894] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.782 [INFO][4894] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" host="localhost"
Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.785 [INFO][4894] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.788 [INFO][4894] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.791 [INFO][4894] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.792 [INFO][4894] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.792 [INFO][4894] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" host="localhost"
Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.793 [INFO][4894] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db
Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.796 [INFO][4894] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" host="localhost"
Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.800 [INFO][4894] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" host="localhost"
Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.800 [INFO][4894] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" host="localhost"
Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.800 [INFO][4894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 11 00:36:17.819916 containerd[1625]: 2025-09-11 00:36:17.800 [INFO][4894] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" HandleID="k8s-pod-network.d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" Workload="localhost-k8s-calico--kube--controllers--6f9574cb5b--hfwsx-eth0"
Sep 11 00:36:17.831250 containerd[1625]: 2025-09-11 00:36:17.803 [INFO][4881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" Namespace="calico-system" Pod="calico-kube-controllers-6f9574cb5b-hfwsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f9574cb5b--hfwsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f9574cb5b--hfwsx-eth0", GenerateName:"calico-kube-controllers-6f9574cb5b-", Namespace:"calico-system", SelfLink:"", UID:"38d8df35-ee9c-4504-81fd-f83175c767ed", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 35, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f9574cb5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6f9574cb5b-hfwsx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7474d435398", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 11 00:36:17.831250 containerd[1625]: 2025-09-11 00:36:17.803 [INFO][4881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" Namespace="calico-system" Pod="calico-kube-controllers-6f9574cb5b-hfwsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f9574cb5b--hfwsx-eth0"
Sep 11 00:36:17.831250 containerd[1625]: 2025-09-11 00:36:17.803 [INFO][4881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7474d435398 ContainerID="d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" Namespace="calico-system" Pod="calico-kube-controllers-6f9574cb5b-hfwsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f9574cb5b--hfwsx-eth0"
Sep 11 00:36:17.831250 containerd[1625]: 2025-09-11 00:36:17.806 [INFO][4881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" Namespace="calico-system" Pod="calico-kube-controllers-6f9574cb5b-hfwsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f9574cb5b--hfwsx-eth0"
Sep 11 00:36:17.831250 containerd[1625]: 2025-09-11 00:36:17.806 [INFO][4881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" Namespace="calico-system" Pod="calico-kube-controllers-6f9574cb5b-hfwsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f9574cb5b--hfwsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f9574cb5b--hfwsx-eth0", GenerateName:"calico-kube-controllers-6f9574cb5b-", Namespace:"calico-system", SelfLink:"", UID:"38d8df35-ee9c-4504-81fd-f83175c767ed", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 35, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f9574cb5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db", Pod:"calico-kube-controllers-6f9574cb5b-hfwsx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7474d435398", MAC:"2e:c9:e9:6d:00:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 11 00:36:17.831250 containerd[1625]: 2025-09-11 00:36:17.818 [INFO][4881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" Namespace="calico-system" Pod="calico-kube-controllers-6f9574cb5b-hfwsx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f9574cb5b--hfwsx-eth0"
Sep 11 00:36:17.856972 kubelet[2920]: I0911 00:36:17.856948 2920 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:36:17.868886 containerd[1625]: time="2025-09-11T00:36:17.868853790Z" level=info msg="connecting to shim d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db" address="unix:///run/containerd/s/35afd1392791e56670231f0a69c2a8abdc7ce84a7ad5478701d93599affe9779" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:36:17.893422 systemd[1]: Started cri-containerd-d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db.scope - libcontainer container d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db.
Sep 11 00:36:17.904860 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 11 00:36:17.937189 containerd[1625]: time="2025-09-11T00:36:17.937169021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f9574cb5b-hfwsx,Uid:38d8df35-ee9c-4504-81fd-f83175c767ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db\""
Sep 11 00:36:17.963236 systemd-networkd[1538]: cali66d7d7057d5: Gained IPv6LL
Sep 11 00:36:18.753024 systemd-networkd[1538]: vxlan.calico: Link UP
Sep 11 00:36:18.753029 systemd-networkd[1538]: vxlan.calico: Gained carrier
Sep 11 00:36:18.755109 kubelet[2920]: I0911 00:36:18.754286 2920 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:36:19.051291 systemd-networkd[1538]: cali7474d435398: Gained IPv6LL
Sep 11 00:36:19.192274 containerd[1625]: time="2025-09-11T00:36:19.192247601Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18e216242c2bdffa39ca6cd8a9e2cdf3c7bfe5bae836ebd0c541c596f9a773de\" id:\"0b6baed59a0565f3169bd1efa71c585c722f6a62abe2519b61e937044d0a55ec\" pid:5049 exit_status:1 exited_at:{seconds:1757550979 nanos:187125970}"
Sep 11 00:36:19.378506 containerd[1625]: time="2025-09-11T00:36:19.378404974Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18e216242c2bdffa39ca6cd8a9e2cdf3c7bfe5bae836ebd0c541c596f9a773de\" id:\"f9e1dfa13129b75b20aaa95d716cbae99f7603982694877a6612658a2d8dc48c\" pid:5115 exit_status:1 exited_at:{seconds:1757550979 nanos:377967529}"
Sep 11 00:36:20.011201 systemd-networkd[1538]: vxlan.calico: Gained IPv6LL
Sep 11 00:36:21.605202 containerd[1625]: time="2025-09-11T00:36:21.605173702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:36:21.610147 containerd[1625]: time="2025-09-11T00:36:21.610120904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 11 00:36:21.612727 containerd[1625]: time="2025-09-11T00:36:21.612675741Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:36:21.614794 containerd[1625]: time="2025-09-11T00:36:21.614313645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:36:21.614794 containerd[1625]: time="2025-09-11T00:36:21.614700323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.25239557s"
Sep 11 00:36:21.614794 containerd[1625]: time="2025-09-11T00:36:21.614722557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 11 00:36:21.616580 containerd[1625]: time="2025-09-11T00:36:21.615992575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 11 00:36:21.618403 containerd[1625]: time="2025-09-11T00:36:21.618372791Z" level=info msg="CreateContainer within sandbox \"b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 11 00:36:21.626304 containerd[1625]: time="2025-09-11T00:36:21.626275844Z" level=info msg="Container a633e5a1ca19363b147360e3d568514a976b174624f5b371d9c330ae6a4772f5: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:36:21.632795 containerd[1625]: time="2025-09-11T00:36:21.632727557Z" level=info msg="CreateContainer within sandbox \"b32a285149bb4f4d7371f91c0d757aa496e0b652fdf86d83965232db482008ed\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a633e5a1ca19363b147360e3d568514a976b174624f5b371d9c330ae6a4772f5\""
Sep 11 00:36:21.633487 containerd[1625]: time="2025-09-11T00:36:21.633239092Z" level=info msg="StartContainer for \"a633e5a1ca19363b147360e3d568514a976b174624f5b371d9c330ae6a4772f5\""
Sep 11 00:36:21.634248 containerd[1625]: time="2025-09-11T00:36:21.634224148Z" level=info msg="connecting to shim a633e5a1ca19363b147360e3d568514a976b174624f5b371d9c330ae6a4772f5" address="unix:///run/containerd/s/03430d08c583a2bb071691cc97273984cfc3b67a696af22c177457f5206d4568" protocol=ttrpc version=3
Sep 11 00:36:21.653221 systemd[1]: Started cri-containerd-a633e5a1ca19363b147360e3d568514a976b174624f5b371d9c330ae6a4772f5.scope - libcontainer container a633e5a1ca19363b147360e3d568514a976b174624f5b371d9c330ae6a4772f5.
Sep 11 00:36:21.738786 containerd[1625]: time="2025-09-11T00:36:21.738752081Z" level=info msg="StartContainer for \"a633e5a1ca19363b147360e3d568514a976b174624f5b371d9c330ae6a4772f5\" returns successfully"
Sep 11 00:36:22.067648 kubelet[2920]: I0911 00:36:22.067605 2920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b8ff485d9-mlghw" podStartSLOduration=28.522115281 podStartE2EDuration="35.036668907s" podCreationTimestamp="2025-09-11 00:35:47 +0000 UTC" firstStartedPulling="2025-09-11 00:36:15.101354612 +0000 UTC m=+43.488338614" lastFinishedPulling="2025-09-11 00:36:21.615908232 +0000 UTC m=+50.002892240" observedRunningTime="2025-09-11 00:36:22.035571701 +0000 UTC m=+50.422555715" watchObservedRunningTime="2025-09-11 00:36:22.036668907 +0000 UTC m=+50.423653086"
Sep 11 00:36:22.182835 containerd[1625]: time="2025-09-11T00:36:22.182467381Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:36:22.189332 containerd[1625]: time="2025-09-11T00:36:22.189316463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 11 00:36:22.196379 containerd[1625]: time="2025-09-11T00:36:22.196358056Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 580.347343ms"
Sep 11 00:36:22.196476 containerd[1625]: time="2025-09-11T00:36:22.196467207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 11 00:36:22.197765 containerd[1625]: time="2025-09-11T00:36:22.197624996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 11 00:36:22.202133 containerd[1625]: time="2025-09-11T00:36:22.201966761Z" level=info msg="CreateContainer within sandbox \"f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 11 00:36:22.208182 containerd[1625]: time="2025-09-11T00:36:22.208158161Z" level=info msg="Container f10af2d6178f5b629cf8a28036ed3c5d8ca5df199f679f2b10aa295ee426f26e: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:36:22.221478 containerd[1625]: time="2025-09-11T00:36:22.221441715Z" level=info msg="CreateContainer within sandbox \"f59c5e8636d8155e57687e6e283978073888995811e31a958d61ad0a7ed7b396\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f10af2d6178f5b629cf8a28036ed3c5d8ca5df199f679f2b10aa295ee426f26e\""
Sep 11 00:36:22.221894 containerd[1625]: time="2025-09-11T00:36:22.221786782Z" level=info msg="StartContainer for \"f10af2d6178f5b629cf8a28036ed3c5d8ca5df199f679f2b10aa295ee426f26e\""
Sep 11 00:36:22.223802 containerd[1625]: time="2025-09-11T00:36:22.223781502Z" level=info msg="connecting to shim f10af2d6178f5b629cf8a28036ed3c5d8ca5df199f679f2b10aa295ee426f26e" address="unix:///run/containerd/s/1969e6e43a5ad6662ac5d548c4db2ba03483c10709eb484da7502d81a6f67164" protocol=ttrpc version=3
Sep 11 00:36:22.253647 systemd[1]: Started cri-containerd-f10af2d6178f5b629cf8a28036ed3c5d8ca5df199f679f2b10aa295ee426f26e.scope - libcontainer container f10af2d6178f5b629cf8a28036ed3c5d8ca5df199f679f2b10aa295ee426f26e.
Sep 11 00:36:22.335451 containerd[1625]: time="2025-09-11T00:36:22.335384452Z" level=info msg="StartContainer for \"f10af2d6178f5b629cf8a28036ed3c5d8ca5df199f679f2b10aa295ee426f26e\" returns successfully"
Sep 11 00:36:24.016838 kubelet[2920]: I0911 00:36:24.016399 2920 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:36:27.073074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount955957113.mount: Deactivated successfully.
Sep 11 00:36:28.080950 containerd[1625]: time="2025-09-11T00:36:28.080906590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:36:28.082313 containerd[1625]: time="2025-09-11T00:36:28.082290277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 11 00:36:28.088423 containerd[1625]: time="2025-09-11T00:36:28.088392963Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:36:28.090448 containerd[1625]: time="2025-09-11T00:36:28.090355185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:36:28.090773 containerd[1625]: time="2025-09-11T00:36:28.090753745Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.893109089s"
Sep 11 00:36:28.090831 containerd[1625]: time="2025-09-11T00:36:28.090822143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 11 00:36:28.222178 containerd[1625]: time="2025-09-11T00:36:28.221888587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 11 00:36:28.232828 containerd[1625]: time="2025-09-11T00:36:28.232795301Z" level=info msg="CreateContainer within sandbox \"9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 11 00:36:28.381235 containerd[1625]: time="2025-09-11T00:36:28.381178382Z" level=info msg="Container d9dd9e992f2d0a15a31402fe0118d7a63f076ba9660480360c6573cc5f9b1dd0: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:36:28.384102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2760698638.mount: Deactivated successfully.
Sep 11 00:36:28.437996 containerd[1625]: time="2025-09-11T00:36:28.437971590Z" level=info msg="CreateContainer within sandbox \"9e17cec10980c80e2e3b92a77a9f25e6a97b7912513eaa16ab1c9b30aee4421b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d9dd9e992f2d0a15a31402fe0118d7a63f076ba9660480360c6573cc5f9b1dd0\""
Sep 11 00:36:28.440839 containerd[1625]: time="2025-09-11T00:36:28.440820113Z" level=info msg="StartContainer for \"d9dd9e992f2d0a15a31402fe0118d7a63f076ba9660480360c6573cc5f9b1dd0\""
Sep 11 00:36:28.441695 containerd[1625]: time="2025-09-11T00:36:28.441669690Z" level=info msg="connecting to shim d9dd9e992f2d0a15a31402fe0118d7a63f076ba9660480360c6573cc5f9b1dd0" address="unix:///run/containerd/s/766d379ecd37b187bbfdac084015818b27135988b7d101acd08e8e2aa1c47c14" protocol=ttrpc version=3
Sep 11 00:36:28.548178 systemd[1]: Started cri-containerd-d9dd9e992f2d0a15a31402fe0118d7a63f076ba9660480360c6573cc5f9b1dd0.scope - libcontainer container d9dd9e992f2d0a15a31402fe0118d7a63f076ba9660480360c6573cc5f9b1dd0.
Sep 11 00:36:28.625175 containerd[1625]: time="2025-09-11T00:36:28.625134531Z" level=info msg="StartContainer for \"d9dd9e992f2d0a15a31402fe0118d7a63f076ba9660480360c6573cc5f9b1dd0\" returns successfully"
Sep 11 00:36:28.929641 kubelet[2920]: I0911 00:36:28.929579 2920 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:36:28.994666 kubelet[2920]: I0911 00:36:28.994348 2920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b8ff485d9-9gkkq" podStartSLOduration=34.933385013 podStartE2EDuration="41.99310953s" podCreationTimestamp="2025-09-11 00:35:47 +0000 UTC" firstStartedPulling="2025-09-11 00:36:15.137767202 +0000 UTC m=+43.524751205" lastFinishedPulling="2025-09-11 00:36:22.197491718 +0000 UTC m=+50.584475722" observedRunningTime="2025-09-11 00:36:23.02683437 +0000 UTC m=+51.413818382" watchObservedRunningTime="2025-09-11 00:36:28.99310953 +0000 UTC m=+57.380093542"
Sep 11 00:36:29.545863 containerd[1625]: time="2025-09-11T00:36:29.545836682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9dd9e992f2d0a15a31402fe0118d7a63f076ba9660480360c6573cc5f9b1dd0\" id:\"36341223d4c421eb9254648f5576c3a028984256bad8326327d2081dd88e4f08\" pid:5331 exit_status:1 exited_at:{seconds:1757550989 nanos:538237461}"
Sep 11 00:36:29.640888 containerd[1625]: time="2025-09-11T00:36:29.640847600Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9dd9e992f2d0a15a31402fe0118d7a63f076ba9660480360c6573cc5f9b1dd0\" id:\"d6707adf0bd15b9df3e8b379bd477ee248d2a4edb33a2a7a3c4e6e945e24a87b\" pid:5355 exit_status:1 exited_at:{seconds:1757550989 nanos:640284894}"
Sep 11 00:36:30.124076 containerd[1625]: time="2025-09-11T00:36:30.124011057Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9dd9e992f2d0a15a31402fe0118d7a63f076ba9660480360c6573cc5f9b1dd0\" id:\"87f98c697cb5c12df65f2737e1fdeb0eb0e09f9f29a817a4cf3508e08e5a0f4f\" pid:5377 exit_status:1 exited_at:{seconds:1757550990 nanos:123752173}"
Sep 11 00:36:31.164166 containerd[1625]: time="2025-09-11T00:36:31.163522890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:36:31.169867 containerd[1625]: time="2025-09-11T00:36:31.169836407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 11 00:36:31.171833 containerd[1625]: time="2025-09-11T00:36:31.171233985Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:36:31.173614 containerd[1625]: time="2025-09-11T00:36:31.173439432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:36:31.175274 containerd[1625]: time="2025-09-11T00:36:31.174474638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.952542216s"
Sep 11 00:36:31.175274 containerd[1625]: time="2025-09-11T00:36:31.174497005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 11 00:36:31.195738 containerd[1625]: time="2025-09-11T00:36:31.195709717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 11 00:36:31.198823 containerd[1625]: time="2025-09-11T00:36:31.198795538Z" level=info msg="CreateContainer within sandbox \"a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 11 00:36:31.260765 containerd[1625]: time="2025-09-11T00:36:31.260738312Z" level=info msg="Container b8f2f48086ffeebe40f22913d87ff3eaf45af82d9342c2a4f98a4c6cea5cad65: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:36:31.267183 containerd[1625]: time="2025-09-11T00:36:31.267163667Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9dd9e992f2d0a15a31402fe0118d7a63f076ba9660480360c6573cc5f9b1dd0\" id:\"a06ee1e2fa9eff3ada22f661805208a648b21ee3fa2187ead0c8fd5307d083f9\" pid:5402 exited_at:{seconds:1757550991 nanos:266964005}"
Sep 11 00:36:31.287663 containerd[1625]: time="2025-09-11T00:36:31.287639958Z" level=info msg="CreateContainer within sandbox \"a4d3f63e1c894b93c806375e058854a13d3b50530f13207153158bd6a5858ab7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b8f2f48086ffeebe40f22913d87ff3eaf45af82d9342c2a4f98a4c6cea5cad65\""
Sep 11 00:36:31.288778 containerd[1625]: time="2025-09-11T00:36:31.288000553Z" level=info msg="StartContainer for \"b8f2f48086ffeebe40f22913d87ff3eaf45af82d9342c2a4f98a4c6cea5cad65\""
Sep 11 00:36:31.294867 containerd[1625]: time="2025-09-11T00:36:31.289633759Z" level=info msg="connecting to shim b8f2f48086ffeebe40f22913d87ff3eaf45af82d9342c2a4f98a4c6cea5cad65" address="unix:///run/containerd/s/7d58e659a982e3113bc229ed9a987545c62ce671b766a37e9e8ecfeafb0de5c1" protocol=ttrpc version=3
Sep 11 00:36:31.317241 systemd[1]: Started cri-containerd-b8f2f48086ffeebe40f22913d87ff3eaf45af82d9342c2a4f98a4c6cea5cad65.scope - libcontainer container b8f2f48086ffeebe40f22913d87ff3eaf45af82d9342c2a4f98a4c6cea5cad65.
Sep 11 00:36:31.386234 containerd[1625]: time="2025-09-11T00:36:31.386208912Z" level=info msg="StartContainer for \"b8f2f48086ffeebe40f22913d87ff3eaf45af82d9342c2a4f98a4c6cea5cad65\" returns successfully"
Sep 11 00:36:31.412482 kubelet[2920]: I0911 00:36:31.359895 2920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-s2sct" podStartSLOduration=29.791705407 podStartE2EDuration="42.341884332s" podCreationTimestamp="2025-09-11 00:35:49 +0000 UTC" firstStartedPulling="2025-09-11 00:36:15.566831442 +0000 UTC m=+43.953815444" lastFinishedPulling="2025-09-11 00:36:28.117010366 +0000 UTC m=+56.503994369" observedRunningTime="2025-09-11 00:36:29.04476387 +0000 UTC m=+57.431747881" watchObservedRunningTime="2025-09-11 00:36:31.341884332 +0000 UTC m=+59.728868340"
Sep 11 00:36:32.219634 kubelet[2920]: I0911 00:36:32.219354 2920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bzm2v" podStartSLOduration=23.876181156 podStartE2EDuration="42.210904355s" podCreationTimestamp="2025-09-11 00:35:50 +0000 UTC" firstStartedPulling="2025-09-11 00:36:12.861836347 +0000 UTC m=+41.248820349" lastFinishedPulling="2025-09-11 00:36:31.196559543 +0000 UTC m=+59.583543548" observedRunningTime="2025-09-11 00:36:32.21065267 +0000 UTC m=+60.597636678" watchObservedRunningTime="2025-09-11 00:36:32.210904355 +0000 UTC m=+60.597888356"
Sep 11 00:36:32.340722 kubelet[2920]: I0911 00:36:32.334911 2920 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 11 00:36:32.342874 kubelet[2920]: I0911 00:36:32.342852 2920 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 11 00:36:34.615037 containerd[1625]: time="2025-09-11T00:36:34.614969845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:36:34.622062 containerd[1625]: time="2025-09-11T00:36:34.622045211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 11 00:36:34.629489 containerd[1625]: time="2025-09-11T00:36:34.624675168Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:36:34.634121 containerd[1625]: time="2025-09-11T00:36:34.632560049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.436822239s"
Sep 11 00:36:34.634121 containerd[1625]: time="2025-09-11T00:36:34.632580710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 11 00:36:34.634121 containerd[1625]: time="2025-09-11T00:36:34.633803615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:36:34.797736 containerd[1625]: time="2025-09-11T00:36:34.797707654Z" level=info msg="CreateContainer within sandbox \"d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 11 00:36:34.848994 containerd[1625]: time="2025-09-11T00:36:34.848960673Z" level=info msg="Container 74e1aea0fb2d7bea0fd9976b23235934f4fa17aee02748691ddcefdd04de97c3: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:36:34.874472 containerd[1625]: time="2025-09-11T00:36:34.874398460Z" level=info msg="CreateContainer within sandbox \"d0bc3d1c0d09bdd2653b8e0029b821c4ce4b32d5e0d4c4d5c807849a9c1950db\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"74e1aea0fb2d7bea0fd9976b23235934f4fa17aee02748691ddcefdd04de97c3\""
Sep 11 00:36:34.875239 containerd[1625]: time="2025-09-11T00:36:34.875215874Z" level=info msg="StartContainer for \"74e1aea0fb2d7bea0fd9976b23235934f4fa17aee02748691ddcefdd04de97c3\""
Sep 11 00:36:34.881948 containerd[1625]: time="2025-09-11T00:36:34.881909070Z" level=info msg="connecting to shim 74e1aea0fb2d7bea0fd9976b23235934f4fa17aee02748691ddcefdd04de97c3" address="unix:///run/containerd/s/35afd1392791e56670231f0a69c2a8abdc7ce84a7ad5478701d93599affe9779" protocol=ttrpc version=3
Sep 11 00:36:34.914229 systemd[1]: Started cri-containerd-74e1aea0fb2d7bea0fd9976b23235934f4fa17aee02748691ddcefdd04de97c3.scope - libcontainer container 74e1aea0fb2d7bea0fd9976b23235934f4fa17aee02748691ddcefdd04de97c3.
Sep 11 00:36:34.970189 containerd[1625]: time="2025-09-11T00:36:34.970155076Z" level=info msg="StartContainer for \"74e1aea0fb2d7bea0fd9976b23235934f4fa17aee02748691ddcefdd04de97c3\" returns successfully"
Sep 11 00:36:35.279444 kubelet[2920]: I0911 00:36:35.279389 2920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f9574cb5b-hfwsx" podStartSLOduration=28.557374087 podStartE2EDuration="45.279373004s" podCreationTimestamp="2025-09-11 00:35:50 +0000 UTC" firstStartedPulling="2025-09-11 00:36:17.93796081 +0000 UTC m=+46.324944812" lastFinishedPulling="2025-09-11 00:36:34.659959724 +0000 UTC m=+63.046943729" observedRunningTime="2025-09-11 00:36:35.278792021 +0000 UTC m=+63.665776035" watchObservedRunningTime="2025-09-11 00:36:35.279373004 +0000 UTC m=+63.666357016"
Sep 11 00:36:35.336763 containerd[1625]: time="2025-09-11T00:36:35.336729042Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74e1aea0fb2d7bea0fd9976b23235934f4fa17aee02748691ddcefdd04de97c3\" id:\"750a2f34ca23f187f4791c4ba89f7055aee10402935c593fe33899036e8b712a\" pid:5509 exited_at:{seconds:1757550995 nanos:322852200}"
Sep 11 00:36:43.898602 containerd[1625]: time="2025-09-11T00:36:43.898534707Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74e1aea0fb2d7bea0fd9976b23235934f4fa17aee02748691ddcefdd04de97c3\" id:\"cb4dbc370212d76b31ab593aeda57d13b436095ebc9aea2c41cf6d99f2878006\" pid:5555 exited_at:{seconds:1757551003 nanos:898227093}"
Sep 11 00:36:47.011847 systemd[1]: Started sshd@7-139.178.70.101:22-139.178.89.65:59470.service - OpenSSH per-connection server daemon (139.178.89.65:59470).
Sep 11 00:36:47.154951 sshd[5570]: Accepted publickey for core from 139.178.89.65 port 59470 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:36:47.157599 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:36:47.164096 systemd-logind[1580]: New session 10 of user core.
Sep 11 00:36:47.173253 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 11 00:36:47.669106 sshd[5572]: Connection closed by 139.178.89.65 port 59470
Sep 11 00:36:47.669237 sshd-session[5570]: pam_unix(sshd:session): session closed for user core
Sep 11 00:36:47.673601 systemd[1]: sshd@7-139.178.70.101:22-139.178.89.65:59470.service: Deactivated successfully.
Sep 11 00:36:47.675242 systemd[1]: session-10.scope: Deactivated successfully.
Sep 11 00:36:47.675972 systemd-logind[1580]: Session 10 logged out. Waiting for processes to exit.
Sep 11 00:36:47.677846 systemd-logind[1580]: Removed session 10.
Sep 11 00:36:49.207933 containerd[1625]: time="2025-09-11T00:36:49.207903149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18e216242c2bdffa39ca6cd8a9e2cdf3c7bfe5bae836ebd0c541c596f9a773de\" id:\"e7367da9b8404485c8d081fe4df76ee2d53c7235c57e5f25acd944d99ae6369f\" pid:5596 exited_at:{seconds:1757551009 nanos:206441661}"
Sep 11 00:36:52.686373 systemd[1]: Started sshd@8-139.178.70.101:22-139.178.89.65:40026.service - OpenSSH per-connection server daemon (139.178.89.65:40026).
Sep 11 00:36:52.823898 sshd[5608]: Accepted publickey for core from 139.178.89.65 port 40026 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:36:52.835129 sshd-session[5608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:36:52.844268 systemd-logind[1580]: New session 11 of user core.
Sep 11 00:36:52.850256 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 11 00:36:53.793715 sshd[5610]: Connection closed by 139.178.89.65 port 40026
Sep 11 00:36:53.792842 sshd-session[5608]: pam_unix(sshd:session): session closed for user core
Sep 11 00:36:53.799223 systemd[1]: sshd@8-139.178.70.101:22-139.178.89.65:40026.service: Deactivated successfully.
Sep 11 00:36:53.804268 systemd[1]: session-11.scope: Deactivated successfully.
Sep 11 00:36:53.805723 systemd-logind[1580]: Session 11 logged out. Waiting for processes to exit.
Sep 11 00:36:53.807207 systemd-logind[1580]: Removed session 11.
Sep 11 00:36:58.804066 systemd[1]: Started sshd@9-139.178.70.101:22-139.178.89.65:40034.service - OpenSSH per-connection server daemon (139.178.89.65:40034).
Sep 11 00:36:58.878796 sshd[5625]: Accepted publickey for core from 139.178.89.65 port 40034 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:36:58.879983 sshd-session[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:36:58.883046 systemd-logind[1580]: New session 12 of user core.
Sep 11 00:36:58.888167 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 11 00:36:59.028931 sshd[5627]: Connection closed by 139.178.89.65 port 40034
Sep 11 00:36:59.029311 sshd-session[5625]: pam_unix(sshd:session): session closed for user core
Sep 11 00:36:59.035523 systemd[1]: sshd@9-139.178.70.101:22-139.178.89.65:40034.service: Deactivated successfully.
Sep 11 00:36:59.037371 systemd[1]: session-12.scope: Deactivated successfully.
Sep 11 00:36:59.038113 systemd-logind[1580]: Session 12 logged out. Waiting for processes to exit.
Sep 11 00:36:59.040452 systemd[1]: Started sshd@10-139.178.70.101:22-139.178.89.65:40042.service - OpenSSH per-connection server daemon (139.178.89.65:40042).
Sep 11 00:36:59.041450 systemd-logind[1580]: Removed session 12.
Sep 11 00:36:59.096338 sshd[5639]: Accepted publickey for core from 139.178.89.65 port 40042 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:36:59.097153 sshd-session[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:36:59.100148 systemd-logind[1580]: New session 13 of user core.
Sep 11 00:36:59.105169 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 11 00:36:59.228968 sshd[5641]: Connection closed by 139.178.89.65 port 40042
Sep 11 00:36:59.229992 sshd-session[5639]: pam_unix(sshd:session): session closed for user core
Sep 11 00:36:59.237053 systemd[1]: sshd@10-139.178.70.101:22-139.178.89.65:40042.service: Deactivated successfully.
Sep 11 00:36:59.239226 systemd[1]: session-13.scope: Deactivated successfully.
Sep 11 00:36:59.240572 systemd-logind[1580]: Session 13 logged out. Waiting for processes to exit.
Sep 11 00:36:59.245304 systemd[1]: Started sshd@11-139.178.70.101:22-139.178.89.65:40048.service - OpenSSH per-connection server daemon (139.178.89.65:40048).
Sep 11 00:36:59.247315 systemd-logind[1580]: Removed session 13.
Sep 11 00:36:59.296312 sshd[5650]: Accepted publickey for core from 139.178.89.65 port 40048 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:36:59.297098 sshd-session[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:36:59.299929 systemd-logind[1580]: New session 14 of user core.
Sep 11 00:36:59.308177 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 11 00:36:59.426156 sshd[5652]: Connection closed by 139.178.89.65 port 40048
Sep 11 00:36:59.425227 sshd-session[5650]: pam_unix(sshd:session): session closed for user core
Sep 11 00:36:59.432592 systemd[1]: sshd@11-139.178.70.101:22-139.178.89.65:40048.service: Deactivated successfully.
Sep 11 00:36:59.432808 systemd-logind[1580]: Session 14 logged out. Waiting for processes to exit.
Sep 11 00:36:59.434152 systemd[1]: session-14.scope: Deactivated successfully.
Sep 11 00:36:59.436951 systemd-logind[1580]: Removed session 14.
Sep 11 00:36:59.761069 containerd[1625]: time="2025-09-11T00:36:59.760907975Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9dd9e992f2d0a15a31402fe0118d7a63f076ba9660480360c6573cc5f9b1dd0\" id:\"ce33bce0fd2cdbc3cd1fe9329c59f96184707759d53febe7c0d5b49c48614fff\" pid:5684 exited_at:{seconds:1757551019 nanos:751616910}"
Sep 11 00:37:04.436264 systemd[1]: Started sshd@12-139.178.70.101:22-139.178.89.65:47322.service - OpenSSH per-connection server daemon (139.178.89.65:47322).
Sep 11 00:37:04.550364 sshd[5700]: Accepted publickey for core from 139.178.89.65 port 47322 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:37:04.552647 sshd-session[5700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:37:04.555706 systemd-logind[1580]: New session 15 of user core.
Sep 11 00:37:04.563296 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 11 00:37:05.001459 sshd[5702]: Connection closed by 139.178.89.65 port 47322
Sep 11 00:37:05.001795 sshd-session[5700]: pam_unix(sshd:session): session closed for user core
Sep 11 00:37:05.006144 systemd-logind[1580]: Session 15 logged out. Waiting for processes to exit.
Sep 11 00:37:05.006506 systemd[1]: sshd@12-139.178.70.101:22-139.178.89.65:47322.service: Deactivated successfully.
Sep 11 00:37:05.007846 systemd[1]: session-15.scope: Deactivated successfully.
Sep 11 00:37:05.009029 systemd-logind[1580]: Removed session 15.
Sep 11 00:37:10.012639 systemd[1]: Started sshd@13-139.178.70.101:22-139.178.89.65:40308.service - OpenSSH per-connection server daemon (139.178.89.65:40308).
Sep 11 00:37:10.080018 sshd[5719]: Accepted publickey for core from 139.178.89.65 port 40308 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:37:10.080979 sshd-session[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:37:10.085054 systemd-logind[1580]: New session 16 of user core.
Sep 11 00:37:10.090246 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 11 00:37:10.292604 sshd[5721]: Connection closed by 139.178.89.65 port 40308
Sep 11 00:37:10.292915 sshd-session[5719]: pam_unix(sshd:session): session closed for user core
Sep 11 00:37:10.295535 systemd[1]: sshd@13-139.178.70.101:22-139.178.89.65:40308.service: Deactivated successfully.
Sep 11 00:37:10.296928 systemd[1]: session-16.scope: Deactivated successfully.
Sep 11 00:37:10.297754 systemd-logind[1580]: Session 16 logged out. Waiting for processes to exit.
Sep 11 00:37:10.299793 systemd-logind[1580]: Removed session 16.
Sep 11 00:37:10.723759 containerd[1625]: time="2025-09-11T00:37:10.723643137Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9dd9e992f2d0a15a31402fe0118d7a63f076ba9660480360c6573cc5f9b1dd0\" id:\"fd4893eae4b5ef98c812803ac97c831b8f185f430153bcd13d829397e14f79cf\" pid:5745 exited_at:{seconds:1757551030 nanos:722734567}"
Sep 11 00:37:13.927020 containerd[1625]: time="2025-09-11T00:37:13.926984866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74e1aea0fb2d7bea0fd9976b23235934f4fa17aee02748691ddcefdd04de97c3\" id:\"6fc008433bdb7dd0384fd6b031ab731134ce485bfd271f33fbe23e89f5e90e42\" pid:5765 exited_at:{seconds:1757551033 nanos:926642404}"
Sep 11 00:37:15.302932 systemd[1]: Started sshd@14-139.178.70.101:22-139.178.89.65:40322.service - OpenSSH per-connection server daemon (139.178.89.65:40322).
Sep 11 00:37:15.466082 sshd[5777]: Accepted publickey for core from 139.178.89.65 port 40322 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:37:15.468601 sshd-session[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:37:15.475125 systemd-logind[1580]: New session 17 of user core.
Sep 11 00:37:15.481219 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 11 00:37:16.719389 sshd[5779]: Connection closed by 139.178.89.65 port 40322
Sep 11 00:37:16.746398 sshd-session[5777]: pam_unix(sshd:session): session closed for user core
Sep 11 00:37:16.781958 systemd[1]: sshd@14-139.178.70.101:22-139.178.89.65:40322.service: Deactivated successfully.
Sep 11 00:37:16.785370 systemd[1]: session-17.scope: Deactivated successfully.
Sep 11 00:37:16.789213 systemd-logind[1580]: Session 17 logged out. Waiting for processes to exit.
Sep 11 00:37:16.792600 systemd-logind[1580]: Removed session 17.
Sep 11 00:37:17.597300 containerd[1625]: time="2025-09-11T00:37:17.597269528Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74e1aea0fb2d7bea0fd9976b23235934f4fa17aee02748691ddcefdd04de97c3\" id:\"6a8fa6b0a520ad169d6a66d6a1108953fbc76bbad93d71d038637c6152e98347\" pid:5803 exited_at:{seconds:1757551037 nanos:596701821}"
Sep 11 00:37:19.550253 containerd[1625]: time="2025-09-11T00:37:19.550222449Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18e216242c2bdffa39ca6cd8a9e2cdf3c7bfe5bae836ebd0c541c596f9a773de\" id:\"09763ab1226d19ca2e0ad1b5d0d46058bf4c78c36fd3602b84243058d0ff3b2b\" pid:5824 exited_at:{seconds:1757551039 nanos:549948151}"
Sep 11 00:37:21.747725 systemd[1]: Started sshd@15-139.178.70.101:22-139.178.89.65:45246.service - OpenSSH per-connection server daemon (139.178.89.65:45246).
Sep 11 00:37:21.955094 sshd[5838]: Accepted publickey for core from 139.178.89.65 port 45246 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:37:21.958577 sshd-session[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:37:21.963793 systemd-logind[1580]: New session 18 of user core.
Sep 11 00:37:21.970178 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 11 00:37:22.515614 sshd[5840]: Connection closed by 139.178.89.65 port 45246
Sep 11 00:37:22.515048 sshd-session[5838]: pam_unix(sshd:session): session closed for user core
Sep 11 00:37:22.522581 systemd[1]: sshd@15-139.178.70.101:22-139.178.89.65:45246.service: Deactivated successfully.
Sep 11 00:37:22.524853 systemd[1]: session-18.scope: Deactivated successfully.
Sep 11 00:37:22.526788 systemd-logind[1580]: Session 18 logged out. Waiting for processes to exit.
Sep 11 00:37:22.530371 systemd[1]: Started sshd@16-139.178.70.101:22-139.178.89.65:45262.service - OpenSSH per-connection server daemon (139.178.89.65:45262).
Sep 11 00:37:22.532468 systemd-logind[1580]: Removed session 18.
Sep 11 00:37:22.582469 sshd[5851]: Accepted publickey for core from 139.178.89.65 port 45262 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:37:22.583765 sshd-session[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:37:22.586860 systemd-logind[1580]: New session 19 of user core.
Sep 11 00:37:22.589165 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 11 00:37:23.024099 sshd[5853]: Connection closed by 139.178.89.65 port 45262
Sep 11 00:37:23.024388 sshd-session[5851]: pam_unix(sshd:session): session closed for user core
Sep 11 00:37:23.033560 systemd[1]: sshd@16-139.178.70.101:22-139.178.89.65:45262.service: Deactivated successfully.
Sep 11 00:37:23.034855 systemd[1]: session-19.scope: Deactivated successfully.
Sep 11 00:37:23.038577 systemd-logind[1580]: Session 19 logged out. Waiting for processes to exit.
Sep 11 00:37:23.043733 systemd[1]: Started sshd@17-139.178.70.101:22-139.178.89.65:45276.service - OpenSSH per-connection server daemon (139.178.89.65:45276).
Sep 11 00:37:23.046117 systemd-logind[1580]: Removed session 19.
Sep 11 00:37:23.130832 sshd[5863]: Accepted publickey for core from 139.178.89.65 port 45276 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:37:23.132124 sshd-session[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:37:23.144111 systemd-logind[1580]: New session 20 of user core.
Sep 11 00:37:23.149616 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 11 00:37:25.450796 sshd[5865]: Connection closed by 139.178.89.65 port 45276
Sep 11 00:37:25.509153 systemd[1]: sshd@17-139.178.70.101:22-139.178.89.65:45276.service: Deactivated successfully.
Sep 11 00:37:25.449688 sshd-session[5863]: pam_unix(sshd:session): session closed for user core
Sep 11 00:37:25.511359 systemd[1]: session-20.scope: Deactivated successfully.
Sep 11 00:37:25.511490 systemd[1]: session-20.scope: Consumed 349ms CPU time, 78.2M memory peak.
Sep 11 00:37:25.511889 systemd-logind[1580]: Session 20 logged out. Waiting for processes to exit.
Sep 11 00:37:25.539539 systemd[1]: Started sshd@18-139.178.70.101:22-139.178.89.65:45286.service - OpenSSH per-connection server daemon (139.178.89.65:45286).
Sep 11 00:37:25.540178 systemd-logind[1580]: Removed session 20.
Sep 11 00:37:25.725690 sshd[5881]: Accepted publickey for core from 139.178.89.65 port 45286 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:37:25.727609 sshd-session[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:37:25.736547 systemd-logind[1580]: New session 21 of user core.
Sep 11 00:37:25.740192 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 11 00:37:26.621288 sshd[5886]: Connection closed by 139.178.89.65 port 45286
Sep 11 00:37:26.623530 sshd-session[5881]: pam_unix(sshd:session): session closed for user core
Sep 11 00:37:26.636640 systemd[1]: sshd@18-139.178.70.101:22-139.178.89.65:45286.service: Deactivated successfully.
Sep 11 00:37:26.640572 systemd[1]: session-21.scope: Deactivated successfully.
Sep 11 00:37:26.640746 systemd[1]: session-21.scope: Consumed 330ms CPU time, 70.6M memory peak.
Sep 11 00:37:26.642336 systemd-logind[1580]: Session 21 logged out. Waiting for processes to exit.
Sep 11 00:37:26.644032 systemd-logind[1580]: Removed session 21.
Sep 11 00:37:26.646225 systemd[1]: Started sshd@19-139.178.70.101:22-139.178.89.65:45296.service - OpenSSH per-connection server daemon (139.178.89.65:45296).
Sep 11 00:37:26.856019 sshd[5896]: Accepted publickey for core from 139.178.89.65 port 45296 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:37:26.859562 sshd-session[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:37:26.864526 systemd-logind[1580]: New session 22 of user core.
Sep 11 00:37:26.868762 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 11 00:37:28.641332 sshd[5898]: Connection closed by 139.178.89.65 port 45296
Sep 11 00:37:28.739225 sshd-session[5896]: pam_unix(sshd:session): session closed for user core
Sep 11 00:37:29.441390 systemd[1]: sshd@19-139.178.70.101:22-139.178.89.65:45296.service: Deactivated successfully.
Sep 11 00:37:29.444247 systemd[1]: session-22.scope: Deactivated successfully.
Sep 11 00:37:29.445535 systemd-logind[1580]: Session 22 logged out. Waiting for processes to exit.
Sep 11 00:37:29.482478 systemd-logind[1580]: Removed session 22.
Sep 11 00:37:30.674210 systemd-journald[1239]: Under memory pressure, flushing caches.
Sep 11 00:37:31.731702 containerd[1625]: time="2025-09-11T00:37:31.728971193Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9dd9e992f2d0a15a31402fe0118d7a63f076ba9660480360c6573cc5f9b1dd0\" id:\"60158048da9ae5ad9e6930d78689127ca4db89a165556156df5049b7d7d217bc\" pid:5925 exited_at:{seconds:1757551051 nanos:681666108}"
Sep 11 00:37:33.760615 systemd[1]: Started sshd@20-139.178.70.101:22-139.178.89.65:60898.service - OpenSSH per-connection server daemon (139.178.89.65:60898).
Sep 11 00:37:34.167319 sshd[5941]: Accepted publickey for core from 139.178.89.65 port 60898 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:37:34.198451 sshd-session[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:37:34.214298 systemd-logind[1580]: New session 23 of user core.
Sep 11 00:37:34.224504 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 11 00:37:39.131392 sshd[5943]: Connection closed by 139.178.89.65 port 60898
Sep 11 00:37:39.110333 sshd-session[5941]: pam_unix(sshd:session): session closed for user core
Sep 11 00:37:39.220668 systemd[1]: sshd@20-139.178.70.101:22-139.178.89.65:60898.service: Deactivated successfully.
Sep 11 00:37:39.221854 systemd[1]: session-23.scope: Deactivated successfully.
Sep 11 00:37:39.223309 systemd-logind[1580]: Session 23 logged out. Waiting for processes to exit.
Sep 11 00:37:39.223938 systemd-logind[1580]: Removed session 23.
Sep 11 00:37:44.050017 containerd[1625]: time="2025-09-11T00:37:44.047812185Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74e1aea0fb2d7bea0fd9976b23235934f4fa17aee02748691ddcefdd04de97c3\" id:\"cd2386bde444c86499393a191d549aaf797d8cebab45cb8c5900cfabd8bd7730\" pid:5979 exited_at:{seconds:1757551064 nanos:25452149}"
Sep 11 00:37:44.142259 systemd[1]: Started sshd@21-139.178.70.101:22-139.178.89.65:50258.service - OpenSSH per-connection server daemon (139.178.89.65:50258).
Sep 11 00:37:44.332046 sshd[5990]: Accepted publickey for core from 139.178.89.65 port 50258 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:37:44.336305 sshd-session[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:37:44.347120 systemd-logind[1580]: New session 24 of user core.
Sep 11 00:37:44.355264 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 11 00:37:45.098562 sshd[5992]: Connection closed by 139.178.89.65 port 50258
Sep 11 00:37:45.100689 sshd-session[5990]: pam_unix(sshd:session): session closed for user core
Sep 11 00:37:45.107743 systemd[1]: sshd@21-139.178.70.101:22-139.178.89.65:50258.service: Deactivated successfully.
Sep 11 00:37:45.109672 systemd[1]: session-24.scope: Deactivated successfully.
Sep 11 00:37:45.110866 systemd-logind[1580]: Session 24 logged out. Waiting for processes to exit.
Sep 11 00:37:45.111686 systemd-logind[1580]: Removed session 24.