Jul 10 00:14:54.721072 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 22:15:30 -00 2025 Jul 10 00:14:54.721097 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:14:54.721107 kernel: Disabled fast string operations Jul 10 00:14:54.721113 kernel: BIOS-provided physical RAM map: Jul 10 00:14:54.721120 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jul 10 00:14:54.721124 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jul 10 00:14:54.721133 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jul 10 00:14:54.721139 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jul 10 00:14:54.721146 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jul 10 00:14:54.721154 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jul 10 00:14:54.721162 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jul 10 00:14:54.721170 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jul 10 00:14:54.721176 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jul 10 00:14:54.721184 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 10 00:14:54.721195 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jul 10 00:14:54.721204 kernel: NX (Execute Disable) protection: active Jul 10 00:14:54.721212 kernel: APIC: Static calls initialized Jul 10 00:14:54.721218 kernel: 
SMBIOS 2.7 present. Jul 10 00:14:54.721223 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jul 10 00:14:54.721228 kernel: DMI: Memory slots populated: 1/128 Jul 10 00:14:54.721234 kernel: vmware: hypercall mode: 0x00 Jul 10 00:14:54.721239 kernel: Hypervisor detected: VMware Jul 10 00:14:54.721244 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jul 10 00:14:54.721249 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jul 10 00:14:54.721254 kernel: vmware: using clock offset of 4189449069 ns Jul 10 00:14:54.721259 kernel: tsc: Detected 3408.000 MHz processor Jul 10 00:14:54.721264 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 10 00:14:54.721270 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 10 00:14:54.721275 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jul 10 00:14:54.721280 kernel: total RAM covered: 3072M Jul 10 00:14:54.721286 kernel: Found optimal setting for mtrr clean up Jul 10 00:14:54.721292 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jul 10 00:14:54.721297 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jul 10 00:14:54.721302 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 10 00:14:54.721307 kernel: Using GB pages for direct mapping Jul 10 00:14:54.721312 kernel: ACPI: Early table checksum verification disabled Jul 10 00:14:54.721317 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jul 10 00:14:54.721322 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jul 10 00:14:54.721327 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jul 10 00:14:54.721333 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jul 10 00:14:54.721340 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 10 00:14:54.721345 kernel: ACPI: FACS 
0x000000007FEFFFC0 000040 Jul 10 00:14:54.721353 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jul 10 00:14:54.721359 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Jul 10 00:14:54.721364 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jul 10 00:14:54.721370 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jul 10 00:14:54.721376 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jul 10 00:14:54.721384 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jul 10 00:14:54.721390 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jul 10 00:14:54.721395 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jul 10 00:14:54.721401 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 10 00:14:54.721406 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 10 00:14:54.721411 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jul 10 00:14:54.721419 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jul 10 00:14:54.721426 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jul 10 00:14:54.721431 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jul 10 00:14:54.721439 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jul 10 00:14:54.721444 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jul 10 00:14:54.721458 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 10 00:14:54.721468 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 10 00:14:54.721476 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jul 10 00:14:54.721481 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 
0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] Jul 10 00:14:54.721489 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Jul 10 00:14:54.721497 kernel: Zone ranges: Jul 10 00:14:54.721505 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 10 00:14:54.721511 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jul 10 00:14:54.721516 kernel: Normal empty Jul 10 00:14:54.721521 kernel: Device empty Jul 10 00:14:54.721527 kernel: Movable zone start for each node Jul 10 00:14:54.721536 kernel: Early memory node ranges Jul 10 00:14:54.721542 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jul 10 00:14:54.721548 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jul 10 00:14:54.721557 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jul 10 00:14:54.721569 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jul 10 00:14:54.721578 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 10 00:14:54.721587 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jul 10 00:14:54.721596 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jul 10 00:14:54.721605 kernel: ACPI: PM-Timer IO Port: 0x1008 Jul 10 00:14:54.721611 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jul 10 00:14:54.721616 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 10 00:14:54.721621 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 10 00:14:54.721629 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 10 00:14:54.721639 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 10 00:14:54.721645 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jul 10 00:14:54.721651 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jul 10 00:14:54.721656 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jul 10 00:14:54.721661 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 10 00:14:54.721666 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x09] high edge lint[0x1]) Jul 10 00:14:54.721671 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 10 00:14:54.721676 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jul 10 00:14:54.721681 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 10 00:14:54.721686 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 10 00:14:54.721693 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 10 00:14:54.721698 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 10 00:14:54.721703 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 10 00:14:54.721708 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jul 10 00:14:54.721713 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jul 10 00:14:54.721718 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jul 10 00:14:54.721723 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jul 10 00:14:54.721728 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jul 10 00:14:54.721733 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jul 10 00:14:54.721738 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jul 10 00:14:54.721745 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jul 10 00:14:54.721750 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jul 10 00:14:54.721755 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jul 10 00:14:54.721760 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jul 10 00:14:54.721765 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jul 10 00:14:54.721770 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jul 10 00:14:54.721775 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jul 10 00:14:54.721781 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jul 10 00:14:54.721786 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jul 10 00:14:54.721791 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x21] high edge lint[0x1]) Jul 10 00:14:54.721797 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jul 10 00:14:54.721802 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jul 10 00:14:54.721807 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jul 10 00:14:54.721812 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jul 10 00:14:54.721817 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jul 10 00:14:54.721824 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jul 10 00:14:54.721832 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jul 10 00:14:54.721838 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jul 10 00:14:54.721843 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jul 10 00:14:54.721849 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jul 10 00:14:54.721857 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jul 10 00:14:54.721864 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jul 10 00:14:54.721870 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jul 10 00:14:54.721876 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jul 10 00:14:54.721881 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jul 10 00:14:54.721886 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jul 10 00:14:54.721893 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jul 10 00:14:54.721902 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jul 10 00:14:54.721913 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jul 10 00:14:54.721923 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jul 10 00:14:54.721933 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jul 10 00:14:54.721939 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jul 10 00:14:54.721945 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jul 10 00:14:54.721951 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x39] high edge lint[0x1]) Jul 10 00:14:54.721956 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jul 10 00:14:54.721966 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jul 10 00:14:54.721973 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jul 10 00:14:54.721980 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jul 10 00:14:54.721986 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jul 10 00:14:54.721991 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jul 10 00:14:54.721996 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jul 10 00:14:54.722002 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jul 10 00:14:54.722007 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jul 10 00:14:54.722013 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jul 10 00:14:54.722018 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jul 10 00:14:54.722024 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jul 10 00:14:54.722029 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jul 10 00:14:54.722035 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jul 10 00:14:54.722041 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jul 10 00:14:54.722047 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jul 10 00:14:54.722052 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jul 10 00:14:54.722057 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jul 10 00:14:54.722063 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jul 10 00:14:54.722068 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jul 10 00:14:54.722073 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jul 10 00:14:54.722079 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jul 10 00:14:54.722084 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jul 10 00:14:54.722091 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x51] high edge lint[0x1]) Jul 10 00:14:54.722096 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jul 10 00:14:54.722101 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jul 10 00:14:54.722107 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jul 10 00:14:54.722112 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jul 10 00:14:54.722118 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jul 10 00:14:54.722123 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jul 10 00:14:54.722128 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jul 10 00:14:54.722134 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jul 10 00:14:54.722139 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jul 10 00:14:54.722146 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jul 10 00:14:54.722151 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jul 10 00:14:54.722156 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jul 10 00:14:54.722162 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jul 10 00:14:54.722167 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jul 10 00:14:54.722173 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jul 10 00:14:54.722179 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jul 10 00:14:54.722189 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jul 10 00:14:54.722199 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jul 10 00:14:54.722210 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jul 10 00:14:54.722215 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jul 10 00:14:54.722221 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jul 10 00:14:54.722226 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jul 10 00:14:54.722232 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jul 10 00:14:54.722237 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x69] high edge lint[0x1]) Jul 10 00:14:54.722242 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jul 10 00:14:54.722248 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jul 10 00:14:54.722253 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jul 10 00:14:54.722259 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jul 10 00:14:54.722265 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jul 10 00:14:54.722271 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jul 10 00:14:54.722277 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jul 10 00:14:54.722286 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jul 10 00:14:54.722296 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jul 10 00:14:54.722306 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jul 10 00:14:54.722316 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jul 10 00:14:54.722322 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jul 10 00:14:54.722330 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jul 10 00:14:54.722337 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jul 10 00:14:54.722346 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jul 10 00:14:54.722355 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jul 10 00:14:54.722361 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jul 10 00:14:54.722367 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jul 10 00:14:54.722372 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jul 10 00:14:54.722378 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jul 10 00:14:54.722383 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jul 10 00:14:54.722388 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jul 10 00:14:54.722394 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jul 10 00:14:54.722399 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jul 10 00:14:54.722407 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 10 00:14:54.722412 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jul 10 00:14:54.722418 kernel: TSC deadline timer available Jul 10 00:14:54.722423 kernel: CPU topo: Max. logical packages: 128 Jul 10 00:14:54.722429 kernel: CPU topo: Max. logical dies: 128 Jul 10 00:14:54.722434 kernel: CPU topo: Max. dies per package: 1 Jul 10 00:14:54.722440 kernel: CPU topo: Max. threads per core: 1 Jul 10 00:14:54.722448 kernel: CPU topo: Num. cores per package: 1 Jul 10 00:14:54.722463 kernel: CPU topo: Num. threads per package: 1 Jul 10 00:14:54.722471 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Jul 10 00:14:54.722478 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jul 10 00:14:54.722486 kernel: Booting paravirtualized kernel on VMware hypervisor Jul 10 00:14:54.722492 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 10 00:14:54.722497 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jul 10 00:14:54.722504 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jul 10 00:14:54.722514 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jul 10 00:14:54.722524 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jul 10 00:14:54.722533 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jul 10 00:14:54.722545 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jul 10 00:14:54.722555 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jul 10 00:14:54.722563 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jul 10 00:14:54.722569 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jul 10 00:14:54.722577 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jul 10 00:14:54.722586 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jul 10 
00:14:54.722595 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jul 10 00:14:54.722605 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jul 10 00:14:54.722615 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jul 10 00:14:54.722627 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jul 10 00:14:54.722637 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jul 10 00:14:54.722642 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jul 10 00:14:54.722648 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jul 10 00:14:54.722653 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jul 10 00:14:54.722660 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:14:54.722666 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 10 00:14:54.722671 kernel: random: crng init done Jul 10 00:14:54.722678 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jul 10 00:14:54.722684 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jul 10 00:14:54.722689 kernel: printk: log_buf_len min size: 262144 bytes Jul 10 00:14:54.722695 kernel: printk: log_buf_len: 1048576 bytes Jul 10 00:14:54.722700 kernel: printk: early log buf free: 245576(93%) Jul 10 00:14:54.722706 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 10 00:14:54.722711 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 10 00:14:54.722717 kernel: Fallback order for Node 0: 0 Jul 10 00:14:54.722722 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Jul 10 00:14:54.722729 kernel: Policy zone: DMA32 Jul 10 00:14:54.722735 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 00:14:54.722740 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jul 10 00:14:54.722746 kernel: ftrace: allocating 40095 entries in 157 pages Jul 10 00:14:54.722751 kernel: ftrace: allocated 157 pages with 5 groups Jul 10 00:14:54.722757 kernel: Dynamic Preempt: voluntary Jul 10 00:14:54.722762 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 10 00:14:54.722769 kernel: rcu: RCU event tracing is enabled. Jul 10 00:14:54.722774 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jul 10 00:14:54.722780 kernel: Trampoline variant of Tasks RCU enabled. Jul 10 00:14:54.722786 kernel: Rude variant of Tasks RCU enabled. Jul 10 00:14:54.722792 kernel: Tracing variant of Tasks RCU enabled. Jul 10 00:14:54.722797 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 10 00:14:54.722803 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jul 10 00:14:54.722808 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 10 00:14:54.722817 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 10 00:14:54.722826 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 10 00:14:54.722832 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jul 10 00:14:54.722838 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. 
Jul 10 00:14:54.722845 kernel: Console: colour VGA+ 80x25 Jul 10 00:14:54.722850 kernel: printk: legacy console [tty0] enabled Jul 10 00:14:54.722856 kernel: printk: legacy console [ttyS0] enabled Jul 10 00:14:54.722861 kernel: ACPI: Core revision 20240827 Jul 10 00:14:54.722870 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jul 10 00:14:54.722878 kernel: APIC: Switch to symmetric I/O mode setup Jul 10 00:14:54.722884 kernel: x2apic enabled Jul 10 00:14:54.722889 kernel: APIC: Switched APIC routing to: physical x2apic Jul 10 00:14:54.722897 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 10 00:14:54.722906 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 10 00:14:54.722911 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) Jul 10 00:14:54.722920 kernel: Disabled fast string operations Jul 10 00:14:54.722929 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 10 00:14:54.722939 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 10 00:14:54.722949 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 10 00:14:54.722959 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jul 10 00:14:54.722966 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 10 00:14:54.722972 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 10 00:14:54.722979 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 10 00:14:54.722984 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 10 00:14:54.722990 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 10 00:14:54.722996 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 10 00:14:54.723001 kernel: SRBDS: Unknown: Dependent on hypervisor 
status Jul 10 00:14:54.723007 kernel: GDS: Unknown: Dependent on hypervisor status Jul 10 00:14:54.723012 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 10 00:14:54.723018 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 10 00:14:54.723023 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 10 00:14:54.723030 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 10 00:14:54.723036 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 10 00:14:54.723041 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 10 00:14:54.723047 kernel: Freeing SMP alternatives memory: 32K Jul 10 00:14:54.723053 kernel: pid_max: default: 131072 minimum: 1024 Jul 10 00:14:54.723059 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 10 00:14:54.723064 kernel: landlock: Up and running. Jul 10 00:14:54.723069 kernel: SELinux: Initializing. Jul 10 00:14:54.723075 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 10 00:14:54.723082 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 10 00:14:54.723088 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 10 00:14:54.723093 kernel: Performance Events: Skylake events, core PMU driver. 
Jul 10 00:14:54.723099 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jul 10 00:14:54.723104 kernel: core: CPUID marked event: 'instructions' unavailable Jul 10 00:14:54.723111 kernel: core: CPUID marked event: 'bus cycles' unavailable Jul 10 00:14:54.723116 kernel: core: CPUID marked event: 'cache references' unavailable Jul 10 00:14:54.723122 kernel: core: CPUID marked event: 'cache misses' unavailable Jul 10 00:14:54.723132 kernel: core: CPUID marked event: 'branch instructions' unavailable Jul 10 00:14:54.723140 kernel: core: CPUID marked event: 'branch misses' unavailable Jul 10 00:14:54.723145 kernel: ... version: 1 Jul 10 00:14:54.723151 kernel: ... bit width: 48 Jul 10 00:14:54.723157 kernel: ... generic registers: 4 Jul 10 00:14:54.723163 kernel: ... value mask: 0000ffffffffffff Jul 10 00:14:54.723168 kernel: ... max period: 000000007fffffff Jul 10 00:14:54.723173 kernel: ... fixed-purpose events: 0 Jul 10 00:14:54.723179 kernel: ... event mask: 000000000000000f Jul 10 00:14:54.723190 kernel: signal: max sigframe size: 1776 Jul 10 00:14:54.723200 kernel: rcu: Hierarchical SRCU implementation. Jul 10 00:14:54.723210 kernel: rcu: Max phase no-delay instances is 400. Jul 10 00:14:54.723215 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Jul 10 00:14:54.723221 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 10 00:14:54.723227 kernel: smp: Bringing up secondary CPUs ... Jul 10 00:14:54.723233 kernel: smpboot: x86: Booting SMP configuration: Jul 10 00:14:54.723242 kernel: .... 
node #0, CPUs: #1 Jul 10 00:14:54.723252 kernel: Disabled fast string operations Jul 10 00:14:54.723262 kernel: smp: Brought up 1 node, 2 CPUs Jul 10 00:14:54.723274 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 10 00:14:54.723284 kernel: Memory: 1924252K/2096628K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 160992K reserved, 0K cma-reserved) Jul 10 00:14:54.723292 kernel: devtmpfs: initialized Jul 10 00:14:54.723297 kernel: x86/mm: Memory block size: 128MB Jul 10 00:14:54.723303 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 10 00:14:54.723309 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 00:14:54.723314 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 10 00:14:54.723320 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 00:14:54.723327 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 00:14:54.723333 kernel: audit: initializing netlink subsys (disabled) Jul 10 00:14:54.723338 kernel: audit: type=2000 audit(1752106491.304:1): state=initialized audit_enabled=0 res=1 Jul 10 00:14:54.723344 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 00:14:54.723349 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 10 00:14:54.723355 kernel: cpuidle: using governor menu Jul 10 00:14:54.723360 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 10 00:14:54.723366 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 00:14:54.723371 kernel: dca service started, version 1.12.1 Jul 10 00:14:54.723378 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] Jul 10 00:14:54.723390 kernel: PCI: Using configuration type 1 for base access Jul 10 00:14:54.723397 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 10 00:14:54.723403 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 00:14:54.723409 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 10 00:14:54.723415 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 00:14:54.723421 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 10 00:14:54.723426 kernel: ACPI: Added _OSI(Module Device) Jul 10 00:14:54.723432 kernel: ACPI: Added _OSI(Processor Device) Jul 10 00:14:54.723439 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 00:14:54.723445 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 00:14:54.723458 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 10 00:14:54.723464 kernel: ACPI: Interpreter enabled Jul 10 00:14:54.723469 kernel: ACPI: PM: (supports S0 S1 S5) Jul 10 00:14:54.723475 kernel: ACPI: Using IOAPIC for interrupt routing Jul 10 00:14:54.723481 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 10 00:14:54.723487 kernel: PCI: Using E820 reservations for host bridge windows Jul 10 00:14:54.723493 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 10 00:14:54.723500 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 10 00:14:54.723597 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 10 00:14:54.723676 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 10 00:14:54.723727 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 10 00:14:54.723736 kernel: PCI host bridge to bus 0000:00 Jul 10 00:14:54.723789 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 10 00:14:54.723848 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jul 10 00:14:54.723896 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 10 00:14:54.723944 
kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 10 00:14:54.723989 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 10 00:14:54.724050 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 10 00:14:54.724114 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint Jul 10 00:14:54.724172 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge Jul 10 00:14:54.724227 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 10 00:14:54.724284 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Jul 10 00:14:54.724353 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint Jul 10 00:14:54.724419 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] Jul 10 00:14:54.724497 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Jul 10 00:14:54.724549 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk Jul 10 00:14:54.724599 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Jul 10 00:14:54.724664 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk Jul 10 00:14:54.724726 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jul 10 00:14:54.724792 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 10 00:14:54.724845 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jul 10 00:14:54.724900 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint Jul 10 00:14:54.724961 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] Jul 10 00:14:54.725027 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] Jul 10 00:14:54.725094 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint Jul 10 00:14:54.725147 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] Jul 10 00:14:54.725215 kernel: pci 0000:00:0f.0: BAR 1 [mem 
0xe8000000-0xefffffff pref] Jul 10 00:14:54.725277 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] Jul 10 00:14:54.725335 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] Jul 10 00:14:54.725387 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 10 00:14:54.725443 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge Jul 10 00:14:54.725520 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 10 00:14:54.725575 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 10 00:14:54.725641 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 10 00:14:54.725705 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 10 00:14:54.725774 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.725838 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 10 00:14:54.725890 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 10 00:14:54.725943 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 10 00:14:54.725998 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.726063 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.726119 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 10 00:14:54.726172 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 10 00:14:54.726233 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 10 00:14:54.726288 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 10 00:14:54.726337 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.726392 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.726443 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 10 00:14:54.726683 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] 
Jul 10 00:14:54.726735 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 10 00:14:54.726785 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 10 00:14:54.726839 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.727123 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.727178 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 10 00:14:54.727232 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 10 00:14:54.727282 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 10 00:14:54.727332 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.727388 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.727440 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 10 00:14:54.727524 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 10 00:14:54.727575 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 10 00:14:54.727628 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.727681 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.727731 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 10 00:14:54.727780 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 10 00:14:54.727829 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 10 00:14:54.727878 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.727933 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.727986 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 10 00:14:54.728036 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 10 00:14:54.728085 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 10 
00:14:54.728135 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.728188 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.728238 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 10 00:14:54.728288 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 10 00:14:54.728340 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 10 00:14:54.728389 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.728444 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.728514 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 10 00:14:54.728565 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 10 00:14:54.729737 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 10 00:14:54.729801 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.729860 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.729917 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 10 00:14:54.729968 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 10 00:14:54.730018 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 10 00:14:54.730068 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 10 00:14:54.730117 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.730171 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.730221 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 10 00:14:54.730274 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 10 00:14:54.730323 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 10 00:14:54.730373 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 10 00:14:54.730422 kernel: pci 0000:00:16.2: PME# supported 
from D0 D3hot D3cold Jul 10 00:14:54.730508 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.730590 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 10 00:14:54.730643 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 10 00:14:54.730695 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 10 00:14:54.730745 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.730805 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.730857 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 10 00:14:54.730911 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 10 00:14:54.730961 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 10 00:14:54.731011 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.731532 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.731590 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 10 00:14:54.731643 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 10 00:14:54.731694 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 10 00:14:54.731744 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.731799 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.731853 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 10 00:14:54.731906 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 10 00:14:54.731955 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 10 00:14:54.732004 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.732057 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.732107 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 10 
00:14:54.732156 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 10 00:14:54.732205 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 10 00:14:54.732257 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.732313 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.732364 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 10 00:14:54.732413 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 10 00:14:54.734488 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 10 00:14:54.734559 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 10 00:14:54.734614 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.734674 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.734726 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 10 00:14:54.734776 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 10 00:14:54.734831 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 10 00:14:54.734883 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 10 00:14:54.734933 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.734988 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.735040 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 10 00:14:54.735089 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 10 00:14:54.735139 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 10 00:14:54.735188 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 10 00:14:54.735241 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.735295 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.735346 kernel: pci 
0000:00:17.3: PCI bridge to [bus 16] Jul 10 00:14:54.735396 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 10 00:14:54.735446 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 10 00:14:54.737049 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.737110 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.737169 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 10 00:14:54.737221 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 10 00:14:54.737272 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 10 00:14:54.737323 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.737381 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.737432 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 10 00:14:54.737507 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 10 00:14:54.737562 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 10 00:14:54.737613 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.737668 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.737720 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 10 00:14:54.737769 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 10 00:14:54.737826 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 10 00:14:54.737877 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.737931 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.737984 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 10 00:14:54.738035 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 10 00:14:54.738084 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Jul 10 00:14:54.738133 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.738188 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.738238 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 10 00:14:54.738287 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 10 00:14:54.738339 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 10 00:14:54.738398 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 10 00:14:54.740244 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.740329 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.740386 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 10 00:14:54.740438 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 10 00:14:54.740512 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 10 00:14:54.740567 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 10 00:14:54.740618 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.740674 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.740726 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 10 00:14:54.740777 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 10 00:14:54.740839 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 10 00:14:54.740891 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.740948 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.740999 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 10 00:14:54.741050 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 10 00:14:54.741100 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 
10 00:14:54.741150 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.741205 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.741256 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 10 00:14:54.741309 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 10 00:14:54.741359 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 10 00:14:54.741408 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.741480 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.741533 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 10 00:14:54.741584 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 10 00:14:54.741633 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 10 00:14:54.741685 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.741740 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.741792 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 10 00:14:54.741850 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 10 00:14:54.741931 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 10 00:14:54.741992 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.742048 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 10 00:14:54.742114 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 10 00:14:54.742169 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 10 00:14:54.742810 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 10 00:14:54.742873 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.742929 kernel: pci_bus 0000:01: extended config space not accessible Jul 10 00:14:54.742984 kernel: pci 
0000:00:01.0: PCI bridge to [bus 01] Jul 10 00:14:54.743038 kernel: pci_bus 0000:02: extended config space not accessible Jul 10 00:14:54.743050 kernel: acpiphp: Slot [32] registered Jul 10 00:14:54.743056 kernel: acpiphp: Slot [33] registered Jul 10 00:14:54.743062 kernel: acpiphp: Slot [34] registered Jul 10 00:14:54.743068 kernel: acpiphp: Slot [35] registered Jul 10 00:14:54.743074 kernel: acpiphp: Slot [36] registered Jul 10 00:14:54.743080 kernel: acpiphp: Slot [37] registered Jul 10 00:14:54.743085 kernel: acpiphp: Slot [38] registered Jul 10 00:14:54.743091 kernel: acpiphp: Slot [39] registered Jul 10 00:14:54.743101 kernel: acpiphp: Slot [40] registered Jul 10 00:14:54.743108 kernel: acpiphp: Slot [41] registered Jul 10 00:14:54.743115 kernel: acpiphp: Slot [42] registered Jul 10 00:14:54.743121 kernel: acpiphp: Slot [43] registered Jul 10 00:14:54.743127 kernel: acpiphp: Slot [44] registered Jul 10 00:14:54.743132 kernel: acpiphp: Slot [45] registered Jul 10 00:14:54.743138 kernel: acpiphp: Slot [46] registered Jul 10 00:14:54.743144 kernel: acpiphp: Slot [47] registered Jul 10 00:14:54.743150 kernel: acpiphp: Slot [48] registered Jul 10 00:14:54.743156 kernel: acpiphp: Slot [49] registered Jul 10 00:14:54.743161 kernel: acpiphp: Slot [50] registered Jul 10 00:14:54.743168 kernel: acpiphp: Slot [51] registered Jul 10 00:14:54.743174 kernel: acpiphp: Slot [52] registered Jul 10 00:14:54.743180 kernel: acpiphp: Slot [53] registered Jul 10 00:14:54.743186 kernel: acpiphp: Slot [54] registered Jul 10 00:14:54.743192 kernel: acpiphp: Slot [55] registered Jul 10 00:14:54.743198 kernel: acpiphp: Slot [56] registered Jul 10 00:14:54.743204 kernel: acpiphp: Slot [57] registered Jul 10 00:14:54.743209 kernel: acpiphp: Slot [58] registered Jul 10 00:14:54.743219 kernel: acpiphp: Slot [59] registered Jul 10 00:14:54.743225 kernel: acpiphp: Slot [60] registered Jul 10 00:14:54.743232 kernel: acpiphp: Slot [61] registered Jul 10 00:14:54.743238 kernel: acpiphp: Slot 
[62] registered Jul 10 00:14:54.743244 kernel: acpiphp: Slot [63] registered Jul 10 00:14:54.743305 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 10 00:14:54.743358 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 10 00:14:54.743408 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 10 00:14:54.744488 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 10 00:14:54.744554 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 10 00:14:54.744611 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 10 00:14:54.744673 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Jul 10 00:14:54.744727 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Jul 10 00:14:54.744779 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 10 00:14:54.744830 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Jul 10 00:14:54.744881 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 10 00:14:54.744932 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 10 00:14:54.744988 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 10 00:14:54.745041 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 10 00:14:54.745093 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 10 00:14:54.745146 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 10 00:14:54.745199 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 10 00:14:54.745251 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 10 00:14:54.745303 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 10 00:14:54.745359 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 10 00:14:54.745417 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Jul 10 00:14:54.746498 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Jul 10 00:14:54.746554 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Jul 10 00:14:54.746606 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Jul 10 00:14:54.746657 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Jul 10 00:14:54.746709 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Jul 10 00:14:54.746764 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 10 00:14:54.746819 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 10 00:14:54.746870 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 10 00:14:54.746922 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 10 00:14:54.746975 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 10 00:14:54.747027 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 10 00:14:54.747078 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 10 00:14:54.747132 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 10 00:14:54.747187 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 10 00:14:54.747239 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 10 00:14:54.747290 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 10 00:14:54.747342 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 10 00:14:54.747395 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 10 00:14:54.747446 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 10 00:14:54.748546 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 10 00:14:54.748634 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 10 00:14:54.748695 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 10 00:14:54.748750 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 10 00:14:54.748810 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 10 00:14:54.748869 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 10 00:14:54.748922 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 10 00:14:54.748977 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 10 00:14:54.749031 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 10 00:14:54.749086 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 10 00:14:54.749139 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 10 00:14:54.749193 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 10 00:14:54.749245 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 10 00:14:54.749254 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 10 00:14:54.749261 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 10 00:14:54.749267 kernel: ACPI: PCI: Interrupt link LNKB 
disabled Jul 10 00:14:54.749273 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 10 00:14:54.749281 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 10 00:14:54.749287 kernel: iommu: Default domain type: Translated Jul 10 00:14:54.749293 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 10 00:14:54.749300 kernel: PCI: Using ACPI for IRQ routing Jul 10 00:14:54.749306 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 10 00:14:54.749312 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 10 00:14:54.749318 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 10 00:14:54.749369 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 10 00:14:54.749419 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 10 00:14:54.750521 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 10 00:14:54.750534 kernel: vgaarb: loaded Jul 10 00:14:54.750541 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 10 00:14:54.750547 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 10 00:14:54.750553 kernel: clocksource: Switched to clocksource tsc-early Jul 10 00:14:54.750559 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 00:14:54.750565 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 00:14:54.750571 kernel: pnp: PnP ACPI init Jul 10 00:14:54.750626 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 10 00:14:54.750677 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 10 00:14:54.750723 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 10 00:14:54.750775 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 10 00:14:54.750834 kernel: pnp 00:06: [dma 2] Jul 10 00:14:54.750884 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 10 00:14:54.750930 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 10 
00:14:54.750987 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 10 00:14:54.750995 kernel: pnp: PnP ACPI: found 8 devices Jul 10 00:14:54.751002 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 10 00:14:54.751008 kernel: NET: Registered PF_INET protocol family Jul 10 00:14:54.751014 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 00:14:54.751020 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 10 00:14:54.751026 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 00:14:54.751032 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 10 00:14:54.751040 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 10 00:14:54.751046 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 10 00:14:54.751052 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 10 00:14:54.751059 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 10 00:14:54.751065 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 00:14:54.751071 kernel: NET: Registered PF_XDP protocol family Jul 10 00:14:54.751131 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 10 00:14:54.751184 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 10 00:14:54.751240 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 10 00:14:54.751293 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 10 00:14:54.751345 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 10 00:14:54.751396 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 10 00:14:54.751448 
kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 10 00:14:54.751518 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 10 00:14:54.751570 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 10 00:14:54.751621 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 10 00:14:54.751676 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 10 00:14:54.751727 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 10 00:14:54.751778 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 10 00:14:54.751830 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 10 00:14:54.751881 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 10 00:14:54.751933 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 10 00:14:54.751984 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 10 00:14:54.752036 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 10 00:14:54.752089 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 10 00:14:54.752141 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 10 00:14:54.752193 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 10 00:14:54.752245 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 10 00:14:54.752297 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 10 00:14:54.752347 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Jul 10 00:14:54.752397 kernel: pci 
0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Jul 10 00:14:54.752447 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.756575 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.756635 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.756688 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.756742 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.756792 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.756852 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.756903 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.756958 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.757009 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.757060 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.757110 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.757162 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.757213 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.757264 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.757313 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.757367 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.757417 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.758516 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space 
Jul 10 00:14:54.758578 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.758634 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.758686 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.758738 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.758789 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.758845 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.758896 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.758947 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.758997 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.759048 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.759098 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.759148 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.759198 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.759252 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.759302 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.759353 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.759402 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.759981 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.760045 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.760102 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: 
can't assign; no space Jul 10 00:14:54.760154 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.760222 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.760275 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.760325 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.760375 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.760425 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.760493 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.760546 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.760602 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.760654 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.760707 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.760757 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.760815 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.760867 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.760917 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.760968 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.761018 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.761068 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.761118 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.761170 kernel: pci 0000:00:17.5: bridge 
window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.761220 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.761270 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.761321 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.761370 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.761420 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.761484 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.761537 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.761589 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.761639 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.761693 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.761743 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.761795 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.761845 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.761897 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.761947 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.762001 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.762052 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.762103 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.762152 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.762204 kernel: pci 
0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.762255 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.762308 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.762378 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.762431 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Jul 10 00:14:54.762505 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Jul 10 00:14:54.762564 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 10 00:14:54.762653 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 10 00:14:54.762707 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 10 00:14:54.762757 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 10 00:14:54.762806 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 10 00:14:54.762861 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Jul 10 00:14:54.762912 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 10 00:14:54.762966 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 10 00:14:54.763015 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 10 00:14:54.763065 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 10 00:14:54.763116 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 10 00:14:54.763166 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 10 00:14:54.763215 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 10 00:14:54.763265 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 10 00:14:54.763315 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 10 00:14:54.763365 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 10 00:14:54.763414 kernel: pci 0000:00:15.2: bridge window 
[mem 0xfcd00000-0xfcdfffff] Jul 10 00:14:54.763490 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 10 00:14:54.763542 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 10 00:14:54.763591 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 10 00:14:54.763642 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 10 00:14:54.763693 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 10 00:14:54.763742 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 10 00:14:54.763792 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 10 00:14:54.763851 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 10 00:14:54.763901 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 10 00:14:54.763951 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 10 00:14:54.764001 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 10 00:14:54.765484 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 10 00:14:54.765555 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 10 00:14:54.765611 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 10 00:14:54.765664 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 10 00:14:54.765719 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 10 00:14:54.765774 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Jul 10 00:14:54.765834 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 10 00:14:54.765897 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 10 00:14:54.765950 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 10 00:14:54.766011 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 10 00:14:54.766915 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 10 00:14:54.766977 
kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 10 00:14:54.767034 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 10 00:14:54.767085 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 10 00:14:54.767138 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 10 00:14:54.767188 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 10 00:14:54.767238 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 10 00:14:54.767287 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 10 00:14:54.767339 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 10 00:14:54.767390 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 10 00:14:54.767440 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 10 00:14:54.768466 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 10 00:14:54.768542 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 10 00:14:54.768599 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 10 00:14:54.768653 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 10 00:14:54.768705 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 10 00:14:54.768755 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 10 00:14:54.768813 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 10 00:14:54.768864 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 10 00:14:54.768917 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 10 00:14:54.768968 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 10 00:14:54.769018 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 10 00:14:54.769067 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 10 00:14:54.769117 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 
10 00:14:54.769168 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 10 00:14:54.769218 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 10 00:14:54.769270 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 10 00:14:54.769323 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 10 00:14:54.769372 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 10 00:14:54.769422 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 10 00:14:54.769806 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 10 00:14:54.769864 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 10 00:14:54.769915 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 10 00:14:54.769965 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 10 00:14:54.770014 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 10 00:14:54.770066 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 10 00:14:54.770120 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 10 00:14:54.770169 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 10 00:14:54.770220 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 10 00:14:54.770270 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 10 00:14:54.770320 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 10 00:14:54.770370 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 10 00:14:54.770419 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 10 00:14:54.770485 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 10 00:14:54.770836 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 10 00:14:54.770896 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 10 00:14:54.770949 kernel: pci 0000:00:17.6: bridge window [mem 
0xe6200000-0xe62fffff 64bit pref] Jul 10 00:14:54.771003 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 10 00:14:54.771055 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 10 00:14:54.771106 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 10 00:14:54.771161 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 10 00:14:54.771212 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 10 00:14:54.771262 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 10 00:14:54.771312 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 10 00:14:54.771364 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 10 00:14:54.771415 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 10 00:14:54.771801 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 10 00:14:54.771858 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 10 00:14:54.771911 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 10 00:14:54.771962 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 10 00:14:54.772016 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 10 00:14:54.772067 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 10 00:14:54.772117 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 10 00:14:54.772167 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 10 00:14:54.772219 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 10 00:14:54.772269 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 10 00:14:54.772319 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 10 00:14:54.772375 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 10 00:14:54.772426 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 10 00:14:54.773893 kernel: pci 0000:00:18.5: 
bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 10 00:14:54.773961 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 10 00:14:54.774015 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 10 00:14:54.774068 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 10 00:14:54.774121 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 10 00:14:54.774177 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 10 00:14:54.774228 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 10 00:14:54.774280 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 10 00:14:54.774325 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 10 00:14:54.774369 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 10 00:14:54.774416 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 10 00:14:54.774477 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 10 00:14:54.774529 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 10 00:14:54.774580 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 10 00:14:54.774627 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 10 00:14:54.774674 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 10 00:14:54.774719 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 10 00:14:54.774769 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 10 00:14:54.774815 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 10 00:14:54.774862 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 10 00:14:54.774917 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 10 00:14:54.774965 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 10 00:14:54.775011 kernel: pci_bus 0000:03: resource 2 [mem 
0xc0000000-0xc01fffff 64bit pref] Jul 10 00:14:54.775063 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 10 00:14:54.775111 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jul 10 00:14:54.775157 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 10 00:14:54.775208 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 10 00:14:54.775258 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 10 00:14:54.775304 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 10 00:14:54.775354 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 10 00:14:54.775402 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 10 00:14:54.775819 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 10 00:14:54.775881 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 10 00:14:54.775937 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 10 00:14:54.776031 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 10 00:14:54.776086 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 10 00:14:54.776133 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 10 00:14:54.776529 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 10 00:14:54.776582 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 10 00:14:54.776637 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 10 00:14:54.776685 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 10 00:14:54.776731 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 10 00:14:54.776786 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 10 00:14:54.776833 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 10 00:14:54.776878 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 
64bit pref] Jul 10 00:14:54.776932 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 10 00:14:54.776979 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 10 00:14:54.777025 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 10 00:14:54.777077 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 10 00:14:54.777124 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 10 00:14:54.777175 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 10 00:14:54.777220 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 10 00:14:54.777276 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 10 00:14:54.777323 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 10 00:14:54.777373 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 10 00:14:54.777419 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 10 00:14:54.777481 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 10 00:14:54.777529 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 10 00:14:54.777582 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 10 00:14:54.777628 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 10 00:14:54.777674 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 10 00:14:54.777724 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 10 00:14:54.777771 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 10 00:14:54.777816 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 10 00:14:54.777868 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jul 10 00:14:54.777917 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 10 00:14:54.777963 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 10 
00:14:54.778013 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 10 00:14:54.778060 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 10 00:14:54.778110 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 10 00:14:54.778156 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 10 00:14:54.778209 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 10 00:14:54.778256 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 10 00:14:54.778309 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 10 00:14:54.778355 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 10 00:14:54.778407 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 10 00:14:54.779507 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 10 00:14:54.779578 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 10 00:14:54.779627 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 10 00:14:54.779674 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 10 00:14:54.779726 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 10 00:14:54.779772 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 10 00:14:54.779819 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 10 00:14:54.779873 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 10 00:14:54.779923 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 10 00:14:54.779974 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 10 00:14:54.780020 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 10 00:14:54.780071 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 10 00:14:54.780119 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] 
Jul 10 00:14:54.780170 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 10 00:14:54.780219 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 10 00:14:54.780269 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 10 00:14:54.780315 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 10 00:14:54.780371 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 10 00:14:54.780417 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 10 00:14:54.781514 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 10 00:14:54.781537 kernel: PCI: CLS 32 bytes, default 64 Jul 10 00:14:54.781544 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 10 00:14:54.781551 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 10 00:14:54.781557 kernel: clocksource: Switched to clocksource tsc Jul 10 00:14:54.781563 kernel: Initialise system trusted keyrings Jul 10 00:14:54.781569 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 10 00:14:54.781575 kernel: Key type asymmetric registered Jul 10 00:14:54.781581 kernel: Asymmetric key parser 'x509' registered Jul 10 00:14:54.781587 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 10 00:14:54.781594 kernel: io scheduler mq-deadline registered Jul 10 00:14:54.781600 kernel: io scheduler kyber registered Jul 10 00:14:54.781606 kernel: io scheduler bfq registered Jul 10 00:14:54.781667 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 10 00:14:54.781721 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.781778 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 10 00:14:54.781829 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.781883 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 10 00:14:54.781937 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.781990 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 10 00:14:54.782042 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.782094 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 10 00:14:54.782146 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.782200 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 10 00:14:54.782251 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.782305 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 10 00:14:54.782356 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.782410 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 10 00:14:54.782496 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.782551 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 10 00:14:54.782602 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.782657 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 10 00:14:54.782713 kernel: pcieport 
0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.782764 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 10 00:14:54.782817 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.782869 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 10 00:14:54.782920 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.782982 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 10 00:14:54.783036 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.783090 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 10 00:14:54.783144 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.783197 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 10 00:14:54.783248 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.783300 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 10 00:14:54.783352 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.783404 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 10 00:14:54.783469 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.783531 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 10 
00:14:54.783582 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.783635 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 10 00:14:54.783686 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.783738 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 10 00:14:54.783790 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.783848 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 10 00:14:54.783899 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.783955 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 10 00:14:54.784006 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.784058 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 10 00:14:54.784110 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.784162 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 10 00:14:54.784214 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.784266 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 10 00:14:54.784319 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.784371 kernel: pcieport 0000:00:18.1: PME: 
Signaling with IRQ 49 Jul 10 00:14:54.784422 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.784728 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 10 00:14:54.784786 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.784841 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 10 00:14:54.784895 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.784952 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 10 00:14:54.785004 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.785057 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 10 00:14:54.785108 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.785160 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 10 00:14:54.785211 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.785263 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 10 00:14:54.785314 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 00:14:54.785325 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 10 00:14:54.785334 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 00:14:54.785341 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 10 
00:14:54.785347 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 10 00:14:54.785354 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 10 00:14:54.785360 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 10 00:14:54.785413 kernel: rtc_cmos 00:01: registered as rtc0 Jul 10 00:14:54.785491 kernel: rtc_cmos 00:01: setting system clock to 2025-07-10T00:14:54 UTC (1752106494) Jul 10 00:14:54.785501 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 10 00:14:54.785548 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 10 00:14:54.785557 kernel: intel_pstate: CPU model not supported Jul 10 00:14:54.785564 kernel: NET: Registered PF_INET6 protocol family Jul 10 00:14:54.785570 kernel: Segment Routing with IPv6 Jul 10 00:14:54.785577 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 00:14:54.785583 kernel: NET: Registered PF_PACKET protocol family Jul 10 00:14:54.785590 kernel: Key type dns_resolver registered Jul 10 00:14:54.785598 kernel: IPI shorthand broadcast: enabled Jul 10 00:14:54.785605 kernel: sched_clock: Marking stable (2859140903, 175511998)->(3050434176, -15781275) Jul 10 00:14:54.785611 kernel: registered taskstats version 1 Jul 10 00:14:54.785617 kernel: Loading compiled-in X.509 certificates Jul 10 00:14:54.785624 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f515550de55d4e43b2ea11ae212aa0cb3a4e55cf' Jul 10 00:14:54.785630 kernel: Demotion targets for Node 0: null Jul 10 00:14:54.785636 kernel: Key type .fscrypt registered Jul 10 00:14:54.785643 kernel: Key type fscrypt-provisioning registered Jul 10 00:14:54.785650 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 10 00:14:54.785656 kernel: ima: Allocated hash algorithm: sha1 Jul 10 00:14:54.785663 kernel: ima: No architecture policies found Jul 10 00:14:54.785669 kernel: clk: Disabling unused clocks Jul 10 00:14:54.785676 kernel: Warning: unable to open an initial console. Jul 10 00:14:54.785682 kernel: Freeing unused kernel image (initmem) memory: 54420K Jul 10 00:14:54.785689 kernel: Write protecting the kernel read-only data: 24576k Jul 10 00:14:54.785695 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 10 00:14:54.785701 kernel: Run /init as init process Jul 10 00:14:54.785709 kernel: with arguments: Jul 10 00:14:54.785716 kernel: /init Jul 10 00:14:54.785722 kernel: with environment: Jul 10 00:14:54.785728 kernel: HOME=/ Jul 10 00:14:54.785734 kernel: TERM=linux Jul 10 00:14:54.785740 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 00:14:54.785747 systemd[1]: Successfully made /usr/ read-only. Jul 10 00:14:54.785756 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 00:14:54.785764 systemd[1]: Detected virtualization vmware. Jul 10 00:14:54.785770 systemd[1]: Detected architecture x86-64. Jul 10 00:14:54.785776 systemd[1]: Running in initrd. Jul 10 00:14:54.785783 systemd[1]: No hostname configured, using default hostname. Jul 10 00:14:54.785789 systemd[1]: Hostname set to . Jul 10 00:14:54.785795 systemd[1]: Initializing machine ID from random generator. Jul 10 00:14:54.785802 systemd[1]: Queued start job for default target initrd.target. Jul 10 00:14:54.785808 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 10 00:14:54.785816 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:14:54.785824 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 10 00:14:54.785830 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 00:14:54.785837 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 10 00:14:54.785844 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 10 00:14:54.785851 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 10 00:14:54.785858 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 10 00:14:54.785865 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:14:54.785872 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:14:54.785879 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:14:54.785885 systemd[1]: Reached target slices.target - Slice Units. Jul 10 00:14:54.785892 systemd[1]: Reached target swap.target - Swaps. Jul 10 00:14:54.785898 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:14:54.785905 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:14:54.785911 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 00:14:54.785918 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 10 00:14:54.785926 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 10 00:14:54.785933 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 10 00:14:54.785939 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:14:54.785946 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:14:54.785952 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:14:54.785959 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 00:14:54.785966 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:14:54.785972 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 10 00:14:54.785980 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 10 00:14:54.785987 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 00:14:54.785993 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 00:14:54.786000 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 00:14:54.786006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:14:54.786013 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 00:14:54.786033 systemd-journald[244]: Collecting audit messages is disabled. Jul 10 00:14:54.786050 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:14:54.786059 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 00:14:54.786066 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 00:14:54.786073 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 00:14:54.786080 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 10 00:14:54.786086 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:14:54.786093 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 00:14:54.786099 kernel: Bridge firewalling registered Jul 10 00:14:54.786106 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:14:54.786112 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 00:14:54.786120 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:14:54.786127 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:14:54.786134 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:14:54.786141 systemd-journald[244]: Journal started Jul 10 00:14:54.786157 systemd-journald[244]: Runtime Journal (/run/log/journal/b146ac6780b04295b12a1556c801e01d) is 4.8M, max 38.8M, 34M free. Jul 10 00:14:54.740056 systemd-modules-load[245]: Inserted module 'overlay' Jul 10 00:14:54.768811 systemd-modules-load[245]: Inserted module 'br_netfilter' Jul 10 00:14:54.788469 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:14:54.790391 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:14:54.790635 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:14:54.791549 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 10 00:14:54.804328 systemd-tmpfiles[280]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 10 00:14:54.806366 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:14:54.807482 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 10 00:14:54.813006 dracut-cmdline[281]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:14:54.839397 systemd-resolved[290]: Positive Trust Anchors: Jul 10 00:14:54.839405 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:14:54.839428 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:14:54.841735 systemd-resolved[290]: Defaulting to hostname 'linux'. Jul 10 00:14:54.842965 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:14:54.843519 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:14:54.873470 kernel: SCSI subsystem initialized Jul 10 00:14:54.890467 kernel: Loading iSCSI transport class v2.0-870. 
Jul 10 00:14:54.899470 kernel: iscsi: registered transport (tcp) Jul 10 00:14:54.923717 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:14:54.923785 kernel: QLogic iSCSI HBA Driver Jul 10 00:14:54.937977 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 00:14:54.950433 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:14:54.951936 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:14:54.979672 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 00:14:54.981138 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 10 00:14:55.021472 kernel: raid6: avx2x4 gen() 39505 MB/s Jul 10 00:14:55.038466 kernel: raid6: avx2x2 gen() 43659 MB/s Jul 10 00:14:55.055705 kernel: raid6: avx2x1 gen() 43880 MB/s Jul 10 00:14:55.055762 kernel: raid6: using algorithm avx2x1 gen() 43880 MB/s Jul 10 00:14:55.073706 kernel: raid6: .... xor() 23727 MB/s, rmw enabled Jul 10 00:14:55.073756 kernel: raid6: using avx2x2 recovery algorithm Jul 10 00:14:55.088483 kernel: xor: automatically using best checksumming function avx Jul 10 00:14:55.195471 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 00:14:55.198905 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:14:55.200047 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:14:55.223140 systemd-udevd[495]: Using default interface naming scheme 'v255'. Jul 10 00:14:55.227812 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:14:55.229491 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 00:14:55.248706 dracut-pre-trigger[501]: rd.md=0: removing MD RAID activation Jul 10 00:14:55.263728 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 10 00:14:55.264910 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:14:55.339044 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:14:55.340529 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 00:14:55.409471 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 10 00:14:55.415948 kernel: vmw_pvscsi: using 64bit dma Jul 10 00:14:55.415980 kernel: vmw_pvscsi: max_id: 16 Jul 10 00:14:55.415988 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 10 00:14:55.425476 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 10 00:14:55.425509 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 10 00:14:55.425518 kernel: vmw_pvscsi: using MSI-X Jul 10 00:14:55.429481 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 10 00:14:55.434878 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 10 00:14:55.435009 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 10 00:14:55.451674 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Jul 10 00:14:55.451713 kernel: libata version 3.00 loaded. Jul 10 00:14:55.451728 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 10 00:14:55.453619 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 10 00:14:55.454461 kernel: scsi host1: ata_piix Jul 10 00:14:55.457837 kernel: scsi host2: ata_piix Jul 10 00:14:55.457979 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Jul 10 00:14:55.457990 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Jul 10 00:14:55.461466 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 10 00:14:55.468594 (udev-worker)[539]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. 
Jul 10 00:14:55.471524 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:14:55.475470 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 10 00:14:55.475595 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Jul 10 00:14:55.475606 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 10 00:14:55.475670 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 10 00:14:55.477515 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 10 00:14:55.477612 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 10 00:14:55.478558 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:14:55.478679 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:14:55.478982 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:14:55.480315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:14:55.506608 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:14:55.539483 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 10 00:14:55.539527 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 10 00:14:55.627473 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 10 00:14:55.630480 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 10 00:14:55.637476 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 10 00:14:55.640481 kernel: AES CTR mode by8 optimization enabled Jul 10 00:14:55.661703 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 10 00:14:55.661871 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 00:14:55.685470 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 10 00:14:55.725225 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. 
Jul 10 00:14:55.730881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 10 00:14:55.736292 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jul 10 00:14:55.740851 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jul 10 00:14:55.741155 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jul 10 00:14:55.741940 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 00:14:55.796480 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 10 00:14:55.978649 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 00:14:55.978988 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:14:55.979122 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:14:55.979328 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:14:55.979988 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 00:14:55.990597 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 00:14:56.869846 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 10 00:14:56.869881 disk-uuid[647]: The operation has completed successfully. Jul 10 00:14:56.952083 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:14:56.952167 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 00:14:56.969719 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 00:14:56.987520 sh[676]: Success Jul 10 00:14:57.012869 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 10 00:14:57.012930 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:14:57.012946 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 00:14:57.020463 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 10 00:14:57.251872 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 00:14:57.253152 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 00:14:57.256923 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 00:14:57.331478 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 00:14:57.333474 kernel: BTRFS: device fsid c4cb30b0-bb74-4f98-aab6-7a1c6f47edee devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (688) Jul 10 00:14:57.337483 kernel: BTRFS info (device dm-0): first mount of filesystem c4cb30b0-bb74-4f98-aab6-7a1c6f47edee Jul 10 00:14:57.337524 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:14:57.337533 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 00:14:57.351356 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 00:14:57.351717 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 00:14:57.352458 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jul 10 00:14:57.353517 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jul 10 00:14:57.425476 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (711) Jul 10 00:14:57.429084 kernel: BTRFS info (device sda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:14:57.429122 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:14:57.429134 kernel: BTRFS info (device sda6): using free-space-tree Jul 10 00:14:57.443468 kernel: BTRFS info (device sda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:14:57.445566 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 00:14:57.447792 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 00:14:57.526694 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 10 00:14:57.527364 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 00:14:57.580522 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:14:57.581570 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:14:57.604862 systemd-networkd[863]: lo: Link UP Jul 10 00:14:57.605088 systemd-networkd[863]: lo: Gained carrier Jul 10 00:14:57.605924 systemd-networkd[863]: Enumeration completed Jul 10 00:14:57.606127 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:14:57.606276 systemd[1]: Reached target network.target - Network. Jul 10 00:14:57.606398 systemd-networkd[863]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. 
Jul 10 00:14:57.608494 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 10 00:14:57.608590 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 10 00:14:57.609526 systemd-networkd[863]: ens192: Link UP Jul 10 00:14:57.609528 systemd-networkd[863]: ens192: Gained carrier Jul 10 00:14:57.758393 ignition[731]: Ignition 2.21.0 Jul 10 00:14:57.758405 ignition[731]: Stage: fetch-offline Jul 10 00:14:57.758432 ignition[731]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:14:57.758437 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 10 00:14:57.758510 ignition[731]: parsed url from cmdline: "" Jul 10 00:14:57.758512 ignition[731]: no config URL provided Jul 10 00:14:57.758515 ignition[731]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:14:57.758519 ignition[731]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:14:57.758884 ignition[731]: config successfully fetched Jul 10 00:14:57.758902 ignition[731]: parsing config with SHA512: 0b9d93258bd1f9f7978c0cd5b1dfb478805cc03ffdd316ba04163ae0307a9ce30c3dff7116371f2c068848156fa6deb6f0fe62212ba98ea5340d54301782c15e Jul 10 00:14:57.762544 unknown[731]: fetched base config from "system" Jul 10 00:14:57.762551 unknown[731]: fetched user config from "vmware" Jul 10 00:14:57.762792 ignition[731]: fetch-offline: fetch-offline passed Jul 10 00:14:57.762824 ignition[731]: Ignition finished successfully Jul 10 00:14:57.763843 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:14:57.764051 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 00:14:57.764517 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 10 00:14:57.779654 ignition[872]: Ignition 2.21.0 Jul 10 00:14:57.779927 ignition[872]: Stage: kargs Jul 10 00:14:57.780123 ignition[872]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:14:57.780250 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 10 00:14:57.780932 ignition[872]: kargs: kargs passed Jul 10 00:14:57.781071 ignition[872]: Ignition finished successfully Jul 10 00:14:57.782302 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 00:14:57.783108 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 10 00:14:57.794900 ignition[879]: Ignition 2.21.0 Jul 10 00:14:57.794907 ignition[879]: Stage: disks Jul 10 00:14:57.794985 ignition[879]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:14:57.794991 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 10 00:14:57.795469 ignition[879]: disks: disks passed Jul 10 00:14:57.795494 ignition[879]: Ignition finished successfully Jul 10 00:14:57.796243 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 00:14:57.796595 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 00:14:57.796737 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 00:14:57.796920 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:14:57.797112 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:14:57.797297 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:14:57.798042 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 00:14:57.969895 systemd-fsck[887]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jul 10 00:14:57.980137 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 00:14:57.980875 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 10 00:14:58.464480 kernel: EXT4-fs (sda9): mounted filesystem a310c019-7915-47f5-9fce-db4a09ac26c2 r/w with ordered data mode. Quota mode: none. Jul 10 00:14:58.464869 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 00:14:58.465398 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 00:14:58.467228 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:14:58.469513 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 00:14:58.469825 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 10 00:14:58.469855 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:14:58.469871 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:14:58.479438 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 00:14:58.480702 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 00:14:58.527756 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (895) Jul 10 00:14:58.529478 kernel: BTRFS info (device sda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:14:58.529505 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:14:58.530486 kernel: BTRFS info (device sda6): using free-space-tree Jul 10 00:14:58.535129 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 10 00:14:58.548193 initrd-setup-root[919]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:14:58.551087 initrd-setup-root[926]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:14:58.555617 initrd-setup-root[933]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:14:58.558793 initrd-setup-root[940]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:14:58.662921 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 00:14:58.664275 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 00:14:58.665554 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 00:14:58.679208 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 00:14:58.681476 kernel: BTRFS info (device sda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:14:58.698363 ignition[1007]: INFO : Ignition 2.21.0 Jul 10 00:14:58.698363 ignition[1007]: INFO : Stage: mount Jul 10 00:14:58.698781 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:14:58.698781 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 10 00:14:58.699411 ignition[1007]: INFO : mount: mount passed Jul 10 00:14:58.699594 ignition[1007]: INFO : Ignition finished successfully Jul 10 00:14:58.700254 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 00:14:58.701204 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 00:14:58.713429 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:14:58.731287 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1016) Jul 10 00:14:58.731344 kernel: BTRFS info (device sda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:14:58.731886 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:14:58.731867 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 10 00:14:58.733473 kernel: BTRFS info (device sda6): using free-space-tree
Jul 10 00:14:58.736376 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:14:58.750274 ignition[1037]: INFO : Ignition 2.21.0
Jul 10 00:14:58.750274 ignition[1037]: INFO : Stage: files
Jul 10 00:14:58.750673 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:14:58.750673 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 10 00:14:58.751276 ignition[1037]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 00:14:58.751744 ignition[1037]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 00:14:58.751744 ignition[1037]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 00:14:58.753897 ignition[1037]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 00:14:58.754064 ignition[1037]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 00:14:58.754218 ignition[1037]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 00:14:58.754153 unknown[1037]: wrote ssh authorized keys file for user: core
Jul 10 00:14:58.774999 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 10 00:14:58.775381 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 10 00:14:58.825777 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 10 00:14:59.185984 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 10 00:14:59.185984 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:14:59.185984 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:14:59.185984 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:14:59.185984 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:14:59.185984 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:14:59.185984 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:14:59.185984 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:14:59.185984 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:14:59.187869 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:14:59.187869 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:14:59.187869 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 10 00:14:59.192070 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 10 00:14:59.192070 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 10 00:14:59.192070 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 10 00:14:59.452688 systemd-networkd[863]: ens192: Gained IPv6LL
Jul 10 00:14:59.875166 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 10 00:15:00.098069 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 10 00:15:00.098332 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 10 00:15:00.098847 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 10 00:15:00.099029 ignition[1037]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 10 00:15:00.099180 ignition[1037]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:15:00.099633 ignition[1037]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:15:00.099633 ignition[1037]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 10 00:15:00.099633 ignition[1037]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 10 00:15:00.099633 ignition[1037]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:15:00.099633 ignition[1037]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:15:00.099633 ignition[1037]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 10 00:15:00.099633 ignition[1037]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:15:00.124192 ignition[1037]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:15:00.126626 ignition[1037]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:15:00.126863 ignition[1037]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:15:00.126863 ignition[1037]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:15:00.126863 ignition[1037]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:15:00.126863 ignition[1037]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:15:00.127998 ignition[1037]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:15:00.127998 ignition[1037]: INFO : files: files passed
Jul 10 00:15:00.127998 ignition[1037]: INFO : Ignition finished successfully
Jul 10 00:15:00.127745 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 00:15:00.128676 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 00:15:00.129158 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 00:15:00.136030 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:15:00.136476 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 00:15:00.138908 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:15:00.138908 initrd-setup-root-after-ignition[1069]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:15:00.139951 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:15:00.140718 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:15:00.141226 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 00:15:00.141950 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 00:15:00.177603 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:15:00.177677 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 00:15:00.178117 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 00:15:00.178253 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 00:15:00.178630 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 00:15:00.179137 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 00:15:00.194206 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:15:00.195103 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 00:15:00.209301 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:15:00.209497 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:15:00.209669 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 00:15:00.209820 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:15:00.209895 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:15:00.210132 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 00:15:00.210286 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 00:15:00.210424 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 00:15:00.210604 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:15:00.210825 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 00:15:00.211037 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 10 00:15:00.211234 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 00:15:00.211431 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:15:00.211663 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 00:15:00.211858 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 00:15:00.212033 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 00:15:00.212217 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 00:15:00.212280 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:15:00.212563 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:15:00.212821 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:15:00.212999 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 00:15:00.213046 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:15:00.213226 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 00:15:00.213292 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:15:00.213615 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 00:15:00.213681 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:15:00.213919 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 00:15:00.214071 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 00:15:00.217480 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:15:00.217667 systemd[1]: Stopped target slices.target - Slice Units.
Jul 10 00:15:00.217873 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 10 00:15:00.218042 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 00:15:00.218111 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:15:00.218341 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 00:15:00.218386 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:15:00.218644 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 00:15:00.218738 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:15:00.218941 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 00:15:00.218998 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 00:15:00.219784 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 00:15:00.219897 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 00:15:00.219982 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:15:00.221557 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 00:15:00.221693 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 00:15:00.221786 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:15:00.222068 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 00:15:00.222151 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:15:00.225165 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 00:15:00.226725 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 10 00:15:00.234923 ignition[1093]: INFO : Ignition 2.21.0
Jul 10 00:15:00.235203 ignition[1093]: INFO : Stage: umount
Jul 10 00:15:00.235392 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:15:00.235519 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 10 00:15:00.236161 ignition[1093]: INFO : umount: umount passed
Jul 10 00:15:00.236287 ignition[1093]: INFO : Ignition finished successfully
Jul 10 00:15:00.237195 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 00:15:00.237387 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 00:15:00.237718 systemd[1]: Stopped target network.target - Network.
Jul 10 00:15:00.237914 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 00:15:00.238034 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 00:15:00.238282 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 00:15:00.238401 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 00:15:00.238636 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 00:15:00.238751 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 10 00:15:00.238975 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 10 00:15:00.239091 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 10 00:15:00.239388 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 10 00:15:00.239650 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 10 00:15:00.241015 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 00:15:00.241175 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 10 00:15:00.242752 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 10 00:15:00.243099 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 10 00:15:00.243140 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:15:00.244256 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:15:00.247302 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 00:15:00.247561 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 10 00:15:00.248793 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 10 00:15:00.249038 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 10 00:15:00.249215 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 00:15:00.249243 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:15:00.250533 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 10 00:15:00.250651 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 00:15:00.250685 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:15:00.250910 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jul 10 00:15:00.250937 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jul 10 00:15:00.251134 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 00:15:00.251159 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:15:00.251693 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 00:15:00.251721 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:15:00.253683 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:15:00.255538 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 10 00:15:00.262396 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 00:15:00.262577 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 00:15:00.267014 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 00:15:00.267088 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:15:00.267435 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 00:15:00.267479 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:15:00.267600 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 00:15:00.267615 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:15:00.267720 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 00:15:00.267744 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:15:00.267897 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 00:15:00.267921 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:15:00.268075 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:15:00.268101 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:15:00.269509 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 10 00:15:00.269671 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 10 00:15:00.269699 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:15:00.270439 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 00:15:00.270558 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:15:00.271781 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:15:00.271809 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:15:00.278526 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 00:15:00.278740 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 10 00:15:00.464545 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 00:15:00.464815 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 10 00:15:00.464842 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 10 00:15:00.464869 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:15:00.558895 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 10 00:15:00.558977 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 10 00:15:00.559405 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 10 00:15:00.559600 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 10 00:15:00.559641 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 10 00:15:00.560549 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 10 00:15:00.590501 systemd[1]: Switching root.
Jul 10 00:15:00.620361 systemd-journald[244]: Journal stopped
Jul 10 00:15:01.935609 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Jul 10 00:15:01.935639 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 00:15:01.935648 kernel: SELinux: policy capability open_perms=1
Jul 10 00:15:01.935654 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 00:15:01.935660 kernel: SELinux: policy capability always_check_network=0
Jul 10 00:15:01.935667 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 00:15:01.935673 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 00:15:01.935679 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 00:15:01.935685 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 00:15:01.935693 kernel: SELinux: policy capability userspace_initial_context=0
Jul 10 00:15:01.935699 systemd[1]: Successfully loaded SELinux policy in 45.379ms.
Jul 10 00:15:01.935707 kernel: audit: type=1403 audit(1752106501.349:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 10 00:15:01.935714 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.324ms.
Jul 10 00:15:01.935721 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:15:01.935728 systemd[1]: Detected virtualization vmware.
Jul 10 00:15:01.935735 systemd[1]: Detected architecture x86-64.
Jul 10 00:15:01.935742 systemd[1]: Detected first boot.
Jul 10 00:15:01.935750 systemd[1]: Initializing machine ID from random generator.
Jul 10 00:15:01.935756 zram_generator::config[1137]: No configuration found.
Jul 10 00:15:01.935852 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Jul 10 00:15:01.935863 kernel: Guest personality initialized and is active
Jul 10 00:15:01.935871 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 10 00:15:01.935881 kernel: Initialized host personality
Jul 10 00:15:01.935894 kernel: NET: Registered PF_VSOCK protocol family
Jul 10 00:15:01.935905 systemd[1]: Populated /etc with preset unit settings.
Jul 10 00:15:01.935917 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jul 10 00:15:01.935928 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Jul 10 00:15:01.935935 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 10 00:15:01.935944 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 10 00:15:01.935954 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 10 00:15:01.935965 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:15:01.935973 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 00:15:01.935980 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 00:15:01.935987 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 00:15:01.935993 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 00:15:01.936000 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 00:15:01.936007 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 00:15:01.936015 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 00:15:01.936022 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 00:15:01.936029 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:15:01.936038 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:15:01.936046 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 00:15:01.936053 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 00:15:01.936060 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 00:15:01.936067 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:15:01.936075 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 10 00:15:01.936082 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:15:01.936089 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:15:01.936096 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 10 00:15:01.936103 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 10 00:15:01.936110 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:15:01.936117 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 00:15:01.936126 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:15:01.936135 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:15:01.936142 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:15:01.936149 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:15:01.936156 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 00:15:01.936165 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 00:15:01.936179 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 10 00:15:01.936188 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:15:01.936195 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:15:01.936202 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:15:01.936209 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 00:15:01.936216 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 00:15:01.936223 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 00:15:01.936230 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 00:15:01.936238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:15:01.936245 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 00:15:01.936252 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 10 00:15:01.936259 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 10 00:15:01.936268 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 10 00:15:01.936275 systemd[1]: Reached target machines.target - Containers.
Jul 10 00:15:01.936282 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 10 00:15:01.936289 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Jul 10 00:15:01.936297 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:15:01.936304 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 10 00:15:01.936311 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:15:01.936318 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:15:01.936325 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:15:01.936332 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 10 00:15:01.936339 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:15:01.936346 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 00:15:01.936354 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 10 00:15:01.936362 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 10 00:15:01.936368 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 10 00:15:01.936375 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 10 00:15:01.936383 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:15:01.936389 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:15:01.936397 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:15:01.936403 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:15:01.936410 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 10 00:15:01.936419 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 10 00:15:01.936426 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:15:01.936433 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 10 00:15:01.936440 systemd[1]: Stopped verity-setup.service.
Jul 10 00:15:01.936447 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:15:01.936467 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 10 00:15:01.936475 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 10 00:15:01.936483 systemd[1]: Mounted media.mount - External Media Directory.
Jul 10 00:15:01.936491 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 10 00:15:01.936499 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 10 00:15:01.936506 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 10 00:15:01.936516 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:15:01.936523 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 10 00:15:01.936530 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 10 00:15:01.936536 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:15:01.936546 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:15:01.936558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:15:01.936570 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:15:01.936578 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:15:01.936586 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:15:01.936597 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 10 00:15:01.936608 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:15:01.936616 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 10 00:15:01.936623 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 10 00:15:01.936632 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:15:01.936641 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 10 00:15:01.936665 systemd-journald[1227]: Collecting audit messages is disabled.
Jul 10 00:15:01.936687 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 10 00:15:01.936699 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:15:01.936710 systemd-journald[1227]: Journal started
Jul 10 00:15:01.936726 systemd-journald[1227]: Runtime Journal (/run/log/journal/a8c8b24bb24543d0841ea6ee1f5c628c) is 4.8M, max 38.8M, 34M free.
Jul 10 00:15:01.940647 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 10 00:15:01.940691 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:15:01.940708 kernel: fuse: init (API version 7.41)
Jul 10 00:15:01.731417 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 00:15:01.738621 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 10 00:15:01.738887 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 10 00:15:01.941208 jq[1207]: true
Jul 10 00:15:01.941742 jq[1242]: true
Jul 10 00:15:01.943502 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 10 00:15:01.952482 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:15:01.952519 kernel: loop: module loaded
Jul 10 00:15:01.955488 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 10 00:15:01.955516 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:15:01.957032 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 10 00:15:01.957166 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 10 00:15:01.957398 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:15:01.957514 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:15:01.957792 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 10 00:15:01.958051 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 10 00:15:01.958234 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 10 00:15:01.969574 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 10 00:15:01.974688 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 10 00:15:01.974838 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:15:01.976530 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 10 00:15:01.983217 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 10 00:15:01.985962 ignition[1247]: Ignition 2.21.0
Jul 10 00:15:01.986133 ignition[1247]: deleting config from guestinfo properties
Jul 10 00:15:02.007999 kernel: loop0: detected capacity change from 0 to 224512
Jul 10 00:15:02.008148 systemd-journald[1227]: Time spent on flushing to /var/log/journal/a8c8b24bb24543d0841ea6ee1f5c628c is 70.294ms for 1757 entries.
Jul 10 00:15:02.008148 systemd-journald[1227]: System Journal (/var/log/journal/a8c8b24bb24543d0841ea6ee1f5c628c) is 8M, max 584.8M, 576.8M free.
Jul 10 00:15:02.093175 systemd-journald[1227]: Received client request to flush runtime journal.
Jul 10 00:15:02.093209 kernel: ACPI: bus type drm_connector registered
Jul 10 00:15:02.093224 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 10 00:15:02.000754 ignition[1247]: Successfully deleted config
Jul 10 00:15:02.003915 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 10 00:15:02.004471 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Jul 10 00:15:02.005112 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 10 00:15:02.007697 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 10 00:15:02.034043 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:15:02.055892 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:15:02.056011 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:15:02.060944 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 10 00:15:02.084144 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 10 00:15:02.087516 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:15:02.094000 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 10 00:15:02.101158 kernel: loop1: detected capacity change from 0 to 113872
Jul 10 00:15:02.127369 systemd-tmpfiles[1303]: ACLs are not supported, ignoring.
Jul 10 00:15:02.127599 systemd-tmpfiles[1303]: ACLs are not supported, ignoring.
Jul 10 00:15:02.135500 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:15:02.152471 kernel: loop2: detected capacity change from 0 to 2960
Jul 10 00:15:02.153287 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:15:02.208465 kernel: loop3: detected capacity change from 0 to 146240
Jul 10 00:15:02.264466 kernel: loop4: detected capacity change from 0 to 224512
Jul 10 00:15:02.304471 kernel: loop5: detected capacity change from 0 to 113872
Jul 10 00:15:02.330464 kernel: loop6: detected capacity change from 0 to 2960
Jul 10 00:15:02.356472 kernel: loop7: detected capacity change from 0 to 146240
Jul 10 00:15:02.383200 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Jul 10 00:15:02.383507 (sd-merge)[1313]: Merged extensions into '/usr'.
Jul 10 00:15:02.391126 systemd[1]: Reload requested from client PID 1264 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 10 00:15:02.391136 systemd[1]: Reloading...
Jul 10 00:15:02.458468 zram_generator::config[1339]: No configuration found.
Jul 10 00:15:02.543643 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:15:02.553125 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jul 10 00:15:02.599630 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 10 00:15:02.599836 systemd[1]: Reloading finished in 208 ms.
Jul 10 00:15:02.620781 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 10 00:15:02.628619 systemd[1]: Starting ensure-sysext.service...
Jul 10 00:15:02.630635 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:15:02.646796 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 10 00:15:02.646987 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 10 00:15:02.647178 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:15:02.647372 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 00:15:02.647914 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:15:02.648126 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Jul 10 00:15:02.648195 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Jul 10 00:15:02.669068 systemd[1]: Reload requested from client PID 1394 ('systemctl') (unit ensure-sysext.service)... Jul 10 00:15:02.669080 systemd[1]: Reloading... Jul 10 00:15:02.693305 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:15:02.693312 systemd-tmpfiles[1395]: Skipping /boot Jul 10 00:15:02.705495 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:15:02.706543 systemd-tmpfiles[1395]: Skipping /boot Jul 10 00:15:02.714756 zram_generator::config[1419]: No configuration found. Jul 10 00:15:02.791622 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:15:02.799846 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." 
| grep -Po "inet \K[\d.]+") Jul 10 00:15:02.802594 ldconfig[1253]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:15:02.845616 systemd[1]: Reloading finished in 176 ms. Jul 10 00:15:02.856075 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 00:15:02.856493 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 00:15:02.859517 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:15:02.864769 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:15:02.866896 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 00:15:02.874608 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 00:15:02.879544 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:15:02.880734 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:15:02.882747 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 00:15:02.888212 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 00:15:02.889329 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:15:02.891199 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:15:02.894696 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:15:02.899724 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:15:02.900018 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 10 00:15:02.900094 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:15:02.900162 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:15:02.904379 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:15:02.905624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:15:02.905684 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:15:02.905742 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:15:02.906143 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 00:15:02.909926 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:15:02.912585 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:15:02.912791 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:15:02.912876 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jul 10 00:15:02.912969 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:15:02.914761 systemd[1]: Finished ensure-sysext.service. Jul 10 00:15:02.919312 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 10 00:15:02.926360 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 00:15:02.927686 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:15:02.927809 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:15:02.928242 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:15:02.930619 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:15:02.930849 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:15:02.943235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:15:02.943912 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:15:02.944655 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:15:02.945646 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:15:02.945913 systemd-udevd[1486]: Using default interface naming scheme 'v255'. Jul 10 00:15:02.946116 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:15:02.946567 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:15:02.953246 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jul 10 00:15:02.955086 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 00:15:02.966494 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 00:15:02.967430 augenrules[1524]: No rules Jul 10 00:15:02.968663 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:15:02.969225 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:15:02.978587 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 00:15:02.978802 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:15:02.983953 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:15:03.062839 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 10 00:15:03.105281 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 10 00:15:03.106237 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 00:15:03.122799 systemd-networkd[1540]: lo: Link UP Jul 10 00:15:03.122805 systemd-networkd[1540]: lo: Gained carrier Jul 10 00:15:03.124732 systemd-networkd[1540]: Enumeration completed Jul 10 00:15:03.124820 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:15:03.125343 systemd-networkd[1540]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jul 10 00:15:03.127465 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 10 00:15:03.127634 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 10 00:15:03.126673 systemd-resolved[1485]: Positive Trust Anchors: Jul 10 00:15:03.126684 systemd-resolved[1485]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:15:03.126712 systemd-resolved[1485]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:15:03.129332 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 00:15:03.130892 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 00:15:03.131648 systemd-networkd[1540]: ens192: Link UP Jul 10 00:15:03.131733 systemd-networkd[1540]: ens192: Gained carrier Jul 10 00:15:03.135680 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Jul 10 00:15:03.137933 systemd-resolved[1485]: Defaulting to hostname 'linux'. Jul 10 00:15:03.139893 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:15:03.140312 systemd[1]: Reached target network.target - Network. Jul 10 00:15:03.140420 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:15:03.140941 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:15:03.141123 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 00:15:03.141265 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 00:15:03.141380 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
Jul 10 00:15:03.141681 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 00:15:03.142169 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 00:15:03.142294 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 00:15:03.142423 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:15:03.142443 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:15:03.142553 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:15:03.148427 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 00:15:03.151098 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 00:15:03.153356 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 00:15:03.153980 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 00:15:03.154265 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 10 00:15:03.158067 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 00:15:03.158581 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 00:15:03.160050 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 00:15:03.160326 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 00:15:03.165720 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:15:03.165859 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:15:03.165993 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jul 10 00:15:03.166008 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:15:03.167390 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 00:15:03.169482 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 00:15:03.169587 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 00:15:03.170723 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 00:15:03.174567 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 00:15:03.180847 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 10 00:15:03.178611 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 00:15:03.178756 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 00:15:03.182628 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 10 00:15:03.187930 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 00:15:03.189457 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 00:15:03.192673 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 00:15:03.193484 kernel: ACPI: button: Power Button [PWRF] Jul 10 00:15:03.200976 jq[1581]: false Jul 10 00:15:03.203065 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 00:15:03.209606 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 00:15:03.210217 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jul 10 00:15:03.213100 google_oslogin_nss_cache[1583]: oslogin_cache_refresh[1583]: Refreshing passwd entry cache Jul 10 00:15:03.213780 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:15:03.216570 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 00:15:03.218534 oslogin_cache_refresh[1583]: Refreshing passwd entry cache Jul 10 00:15:03.220668 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 00:15:03.228340 google_oslogin_nss_cache[1583]: oslogin_cache_refresh[1583]: Failure getting users, quitting Jul 10 00:15:03.228384 oslogin_cache_refresh[1583]: Failure getting users, quitting Jul 10 00:15:03.228501 google_oslogin_nss_cache[1583]: oslogin_cache_refresh[1583]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:15:03.228501 google_oslogin_nss_cache[1583]: oslogin_cache_refresh[1583]: Refreshing group entry cache Jul 10 00:15:03.228398 oslogin_cache_refresh[1583]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:15:03.228422 oslogin_cache_refresh[1583]: Refreshing group entry cache Jul 10 00:15:03.228995 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jul 10 00:15:03.231012 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 00:15:03.231641 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:15:03.231778 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:15:03.232522 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:15:03.232650 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 10 00:15:03.237272 google_oslogin_nss_cache[1583]: oslogin_cache_refresh[1583]: Failure getting groups, quitting Jul 10 00:15:03.237272 google_oslogin_nss_cache[1583]: oslogin_cache_refresh[1583]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:15:03.236142 oslogin_cache_refresh[1583]: Failure getting groups, quitting Jul 10 00:15:03.236149 oslogin_cache_refresh[1583]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:15:03.237596 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 10 00:15:03.238640 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 10 00:15:03.248252 extend-filesystems[1582]: Found /dev/sda6 Jul 10 00:15:03.248746 update_engine[1590]: I20250710 00:15:03.248051 1590 main.cc:92] Flatcar Update Engine starting Jul 10 00:15:03.256435 extend-filesystems[1582]: Found /dev/sda9 Jul 10 00:15:03.257194 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 10 00:15:03.260713 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 00:15:03.265833 jq[1592]: true Jul 10 00:15:03.277603 extend-filesystems[1582]: Checking size of /dev/sda9 Jul 10 00:15:03.280549 tar[1598]: linux-amd64/LICENSE Jul 10 00:15:03.280549 tar[1598]: linux-amd64/helm Jul 10 00:15:03.277770 systemd-logind[1589]: New seat seat0. Jul 10 00:15:03.281627 dbus-daemon[1579]: [system] SELinux support is enabled Jul 10 00:15:03.281717 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 00:15:03.285306 update_engine[1590]: I20250710 00:15:03.285277 1590 update_check_scheduler.cc:74] Next update check in 10m50s Jul 10 00:15:03.287070 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 10 00:15:03.288731 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:15:03.289608 dbus-daemon[1579]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 10 00:15:03.288753 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 00:15:03.289520 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:15:03.289532 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 00:15:03.289681 systemd[1]: Started update-engine.service - Update Engine. Jul 10 00:15:03.293622 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 00:15:03.294712 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:15:03.295283 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 00:15:03.302636 (ntainerd)[1618]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:15:03.314520 jq[1616]: true Jul 10 00:15:03.316787 extend-filesystems[1582]: Old size kept for /dev/sda9 Jul 10 00:15:03.317001 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:15:03.318324 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 00:15:03.331595 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Jul 10 00:15:03.332814 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 00:15:03.339255 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... 
Jul 10 00:15:03.401476 bash[1649]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:15:03.401784 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Jul 10 00:15:03.402551 unknown[1632]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jul 10 00:15:03.403140 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 00:15:03.404336 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 10 00:15:03.407419 unknown[1632]: Core dump limit set to -1 Jul 10 00:15:03.522668 sshd_keygen[1626]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:15:03.568664 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 00:15:03.570956 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:15:03.579777 locksmithd[1622]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:15:03.596486 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:15:03.596629 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:15:03.600892 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:15:03.627801 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:15:03.630269 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:15:03.632385 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 10 00:15:03.632600 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 10 00:15:03.659241 containerd[1618]: time="2025-07-10T00:15:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 10 00:15:03.660365 containerd[1618]: time="2025-07-10T00:15:03.660339742Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 10 00:15:03.676535 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jul 10 00:15:03.682106 containerd[1618]: time="2025-07-10T00:15:03.682013102Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.524µs" Jul 10 00:15:03.682106 containerd[1618]: time="2025-07-10T00:15:03.682037148Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 10 00:15:03.682106 containerd[1618]: time="2025-07-10T00:15:03.682051162Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 10 00:15:03.682189 containerd[1618]: time="2025-07-10T00:15:03.682137230Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 10 00:15:03.682189 containerd[1618]: time="2025-07-10T00:15:03.682147202Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 10 00:15:03.682189 containerd[1618]: time="2025-07-10T00:15:03.682161989Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:15:03.682228 containerd[1618]: time="2025-07-10T00:15:03.682193529Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:15:03.682228 containerd[1618]: time="2025-07-10T00:15:03.682200500Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:15:03.682339 containerd[1618]: time="2025-07-10T00:15:03.682324985Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:15:03.682339 containerd[1618]: time="2025-07-10T00:15:03.682335785Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:15:03.682374 containerd[1618]: time="2025-07-10T00:15:03.682342251Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:15:03.682374 containerd[1618]: time="2025-07-10T00:15:03.682347193Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 10 00:15:03.682423 containerd[1618]: time="2025-07-10T00:15:03.682384442Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 10 00:15:03.685482 containerd[1618]: time="2025-07-10T00:15:03.685465400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:15:03.685509 containerd[1618]: time="2025-07-10T00:15:03.685490487Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:15:03.685509 containerd[1618]: time="2025-07-10T00:15:03.685497865Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 10 00:15:03.685509 containerd[1618]: time="2025-07-10T00:15:03.685512790Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 10 00:15:03.685678 containerd[1618]: time="2025-07-10T00:15:03.685628654Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 10 00:15:03.685678 containerd[1618]: time="2025-07-10T00:15:03.685661320Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:15:03.737091 containerd[1618]: time="2025-07-10T00:15:03.734308252Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 10 00:15:03.737091 containerd[1618]: time="2025-07-10T00:15:03.734353592Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 10 00:15:03.737091 containerd[1618]: time="2025-07-10T00:15:03.734364360Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 10 00:15:03.737091 containerd[1618]: time="2025-07-10T00:15:03.734374422Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 10 00:15:03.737091 containerd[1618]: time="2025-07-10T00:15:03.734384107Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 10 00:15:03.737091 containerd[1618]: time="2025-07-10T00:15:03.734390101Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 10 00:15:03.737091 containerd[1618]: time="2025-07-10T00:15:03.734397684Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 10 00:15:03.737091 containerd[1618]: time="2025-07-10T00:15:03.734404094Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 10 00:15:03.737091 containerd[1618]: time="2025-07-10T00:15:03.734409946Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 10 00:15:03.737091 containerd[1618]: time="2025-07-10T00:15:03.734415041Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 10 00:15:03.737091 containerd[1618]: time="2025-07-10T00:15:03.734420011Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 10 00:15:03.737091 containerd[1618]: time="2025-07-10T00:15:03.734431718Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 10 00:15:03.737091 containerd[1618]: time="2025-07-10T00:15:03.734603594Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 10 00:15:03.737091 containerd[1618]: time="2025-07-10T00:15:03.734616948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 10 00:15:03.737314 containerd[1618]: time="2025-07-10T00:15:03.734646029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 10 00:15:03.737314 containerd[1618]: time="2025-07-10T00:15:03.734656566Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 10 00:15:03.737314 containerd[1618]: time="2025-07-10T00:15:03.734664647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 10 00:15:03.737314 containerd[1618]: time="2025-07-10T00:15:03.734671014Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 10 00:15:03.737314 containerd[1618]: time="2025-07-10T00:15:03.734677208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 10 00:15:03.737314 containerd[1618]: time="2025-07-10T00:15:03.734682474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 10 
00:15:03.737314 containerd[1618]: time="2025-07-10T00:15:03.734688302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 10 00:15:03.737314 containerd[1618]: time="2025-07-10T00:15:03.734693801Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 10 00:15:03.737314 containerd[1618]: time="2025-07-10T00:15:03.734704288Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 10 00:15:03.737314 containerd[1618]: time="2025-07-10T00:15:03.734754081Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 10 00:15:03.737314 containerd[1618]: time="2025-07-10T00:15:03.734765413Z" level=info msg="Start snapshots syncer" Jul 10 00:15:03.737314 containerd[1618]: time="2025-07-10T00:15:03.734783293Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 00:15:03.737680 containerd[1618]: time="2025-07-10T00:15:03.737655909Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 00:15:03.739475 containerd[1618]: time="2025-07-10T00:15:03.738632871Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 00:15:03.746982 containerd[1618]: time="2025-07-10T00:15:03.745252346Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 00:15:03.746982 containerd[1618]: time="2025-07-10T00:15:03.746592453Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 00:15:03.748217 containerd[1618]: time="2025-07-10T00:15:03.747846903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 00:15:03.748217 containerd[1618]: time="2025-07-10T00:15:03.747862502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 00:15:03.748217 containerd[1618]: time="2025-07-10T00:15:03.747870332Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 00:15:03.748217 containerd[1618]: time="2025-07-10T00:15:03.747878511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 00:15:03.748217 containerd[1618]: time="2025-07-10T00:15:03.747884607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 00:15:03.748217 containerd[1618]: time="2025-07-10T00:15:03.747891407Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 00:15:03.748217 containerd[1618]: time="2025-07-10T00:15:03.747921680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 10 00:15:03.748217 containerd[1618]: time="2025-07-10T00:15:03.747931475Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 00:15:03.748217 containerd[1618]: time="2025-07-10T00:15:03.747938323Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 00:15:03.748217 containerd[1618]: time="2025-07-10T00:15:03.747957330Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:15:03.748217 containerd[1618]: time="2025-07-10T00:15:03.747967288Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:15:03.748217 containerd[1618]: time="2025-07-10T00:15:03.747971769Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:15:03.748217 containerd[1618]: time="2025-07-10T00:15:03.747976995Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:15:03.748217 containerd[1618]: time="2025-07-10T00:15:03.747993170Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 00:15:03.748422 containerd[1618]: time="2025-07-10T00:15:03.748000521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 00:15:03.748422 containerd[1618]: time="2025-07-10T00:15:03.748006586Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 00:15:03.748422 containerd[1618]: time="2025-07-10T00:15:03.748017635Z" level=info msg="runtime interface created" Jul 10 00:15:03.748422 containerd[1618]: time="2025-07-10T00:15:03.748023574Z" level=info msg="created NRI interface" Jul 10 00:15:03.748422 containerd[1618]: time="2025-07-10T00:15:03.748030105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 00:15:03.748422 containerd[1618]: time="2025-07-10T00:15:03.748040339Z" level=info msg="Connect containerd service" Jul 10 00:15:03.748422 containerd[1618]: time="2025-07-10T00:15:03.748069499Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:15:03.749227 
containerd[1618]: time="2025-07-10T00:15:03.749205161Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:15:03.751315 systemd-logind[1589]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 00:15:03.779956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:15:03.782975 systemd-logind[1589]: Watching system buttons on /dev/input/event2 (Power Button) Jul 10 00:15:03.808998 (udev-worker)[1539]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jul 10 00:15:03.934320 tar[1598]: linux-amd64/README.md Jul 10 00:15:03.951923 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 00:15:03.988459 containerd[1618]: time="2025-07-10T00:15:03.988425041Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:15:03.989545 containerd[1618]: time="2025-07-10T00:15:03.989475700Z" level=info msg="Start subscribing containerd event" Jul 10 00:15:03.989545 containerd[1618]: time="2025-07-10T00:15:03.989500582Z" level=info msg="Start recovering state" Jul 10 00:15:03.989636 containerd[1618]: time="2025-07-10T00:15:03.989627697Z" level=info msg="Start event monitor" Jul 10 00:15:03.989671 containerd[1618]: time="2025-07-10T00:15:03.989664544Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:15:03.989811 containerd[1618]: time="2025-07-10T00:15:03.989692488Z" level=info msg="Start streaming server" Jul 10 00:15:03.989811 containerd[1618]: time="2025-07-10T00:15:03.989700292Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 00:15:03.989811 containerd[1618]: time="2025-07-10T00:15:03.989705056Z" level=info msg="runtime interface starting up..." 
Jul 10 00:15:03.989811 containerd[1618]: time="2025-07-10T00:15:03.989708093Z" level=info msg="starting plugins..." Jul 10 00:15:03.989811 containerd[1618]: time="2025-07-10T00:15:03.989717086Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 00:15:03.989934 containerd[1618]: time="2025-07-10T00:15:03.989925525Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:15:03.990081 containerd[1618]: time="2025-07-10T00:15:03.990073886Z" level=info msg="containerd successfully booted in 0.331038s" Jul 10 00:15:03.990414 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 00:15:04.029598 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:15:04.572585 systemd-networkd[1540]: ens192: Gained IPv6LL Jul 10 00:15:04.572914 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Jul 10 00:15:04.573738 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:15:04.574645 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:15:04.575817 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jul 10 00:15:04.578569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:15:04.583424 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:15:04.609687 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:15:04.615066 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 00:15:04.615293 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Jul 10 00:15:04.615865 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:15:05.484019 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 00:15:05.485050 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:15:05.485551 systemd[1]: Startup finished in 2.921s (kernel) + 6.735s (initrd) + 4.180s (userspace) = 13.837s. Jul 10 00:15:05.497986 (kubelet)[1802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:15:05.530771 login[1696]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 10 00:15:05.533146 login[1697]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 10 00:15:05.542542 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:15:05.544928 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:15:05.549506 systemd-logind[1589]: New session 2 of user core. Jul 10 00:15:05.554862 systemd-logind[1589]: New session 1 of user core. Jul 10 00:15:05.564214 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:15:05.567941 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 00:15:05.578422 (systemd)[1809]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:15:05.580718 systemd-logind[1589]: New session c1 of user core. Jul 10 00:15:05.668935 systemd[1809]: Queued start job for default target default.target. Jul 10 00:15:05.674274 systemd[1809]: Created slice app.slice - User Application Slice. Jul 10 00:15:05.674291 systemd[1809]: Reached target paths.target - Paths. Jul 10 00:15:05.674329 systemd[1809]: Reached target timers.target - Timers. Jul 10 00:15:05.675072 systemd[1809]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:15:05.683817 systemd[1809]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:15:05.683855 systemd[1809]: Reached target sockets.target - Sockets. 
Jul 10 00:15:05.683881 systemd[1809]: Reached target basic.target - Basic System. Jul 10 00:15:05.683903 systemd[1809]: Reached target default.target - Main User Target. Jul 10 00:15:05.683920 systemd[1809]: Startup finished in 98ms. Jul 10 00:15:05.684061 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:15:05.687496 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:15:05.688350 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:15:05.833884 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Jul 10 00:15:06.017048 kubelet[1802]: E0710 00:15:06.017011 1802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:15:06.018488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:15:06.018572 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:15:06.019002 systemd[1]: kubelet.service: Consumed 614ms CPU time, 263M memory peak. Jul 10 00:15:16.263878 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:15:16.265487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:15:16.965585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 00:15:16.974732 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:15:17.059862 kubelet[1851]: E0710 00:15:17.059823 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:15:17.062235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:15:17.062321 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:15:17.062568 systemd[1]: kubelet.service: Consumed 118ms CPU time, 111.2M memory peak. Jul 10 00:15:27.263781 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 00:15:27.264895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:15:27.605294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:15:27.613627 (kubelet)[1865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:15:27.665665 kubelet[1865]: E0710 00:15:27.665631 1865 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:15:27.667111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:15:27.667248 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:15:27.667610 systemd[1]: kubelet.service: Consumed 90ms CPU time, 110.7M memory peak. 
Jul 10 00:15:33.524370 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 00:15:33.526591 systemd[1]: Started sshd@0-139.178.70.100:22-139.178.68.195:38860.service - OpenSSH per-connection server daemon (139.178.68.195:38860). Jul 10 00:15:33.571669 sshd[1873]: Accepted publickey for core from 139.178.68.195 port 38860 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw Jul 10 00:15:33.572414 sshd-session[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:33.575682 systemd-logind[1589]: New session 3 of user core. Jul 10 00:15:33.581531 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 00:15:33.635654 systemd[1]: Started sshd@1-139.178.70.100:22-139.178.68.195:38866.service - OpenSSH per-connection server daemon (139.178.68.195:38866). Jul 10 00:15:33.673267 sshd[1878]: Accepted publickey for core from 139.178.68.195 port 38866 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw Jul 10 00:15:33.674269 sshd-session[1878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:33.677484 systemd-logind[1589]: New session 4 of user core. Jul 10 00:15:33.686608 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 00:15:33.734337 sshd[1880]: Connection closed by 139.178.68.195 port 38866 Jul 10 00:15:33.734703 sshd-session[1878]: pam_unix(sshd:session): session closed for user core Jul 10 00:15:33.743930 systemd[1]: sshd@1-139.178.70.100:22-139.178.68.195:38866.service: Deactivated successfully. Jul 10 00:15:33.744975 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:15:33.745476 systemd-logind[1589]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:15:33.747444 systemd[1]: Started sshd@2-139.178.70.100:22-139.178.68.195:38880.service - OpenSSH per-connection server daemon (139.178.68.195:38880). Jul 10 00:15:33.748822 systemd-logind[1589]: Removed session 4. 
Jul 10 00:15:33.783044 sshd[1886]: Accepted publickey for core from 139.178.68.195 port 38880 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw Jul 10 00:15:33.783655 sshd-session[1886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:33.787495 systemd-logind[1589]: New session 5 of user core. Jul 10 00:15:33.793589 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 00:15:33.839917 sshd[1888]: Connection closed by 139.178.68.195 port 38880 Jul 10 00:15:33.840216 sshd-session[1886]: pam_unix(sshd:session): session closed for user core Jul 10 00:15:33.847379 systemd[1]: sshd@2-139.178.70.100:22-139.178.68.195:38880.service: Deactivated successfully. Jul 10 00:15:33.848751 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:15:33.849327 systemd-logind[1589]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:15:33.851206 systemd[1]: Started sshd@3-139.178.70.100:22-139.178.68.195:38886.service - OpenSSH per-connection server daemon (139.178.68.195:38886). Jul 10 00:15:33.852083 systemd-logind[1589]: Removed session 5. Jul 10 00:15:33.886604 sshd[1894]: Accepted publickey for core from 139.178.68.195 port 38886 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw Jul 10 00:15:33.887372 sshd-session[1894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:33.890278 systemd-logind[1589]: New session 6 of user core. Jul 10 00:15:33.896600 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 00:15:33.946051 sshd[1896]: Connection closed by 139.178.68.195 port 38886 Jul 10 00:15:33.946824 sshd-session[1894]: pam_unix(sshd:session): session closed for user core Jul 10 00:15:33.955573 systemd[1]: sshd@3-139.178.70.100:22-139.178.68.195:38886.service: Deactivated successfully. Jul 10 00:15:33.956487 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:15:33.956997 systemd-logind[1589]: Session 6 logged out. 
Waiting for processes to exit. Jul 10 00:15:33.958371 systemd[1]: Started sshd@4-139.178.70.100:22-139.178.68.195:38898.service - OpenSSH per-connection server daemon (139.178.68.195:38898). Jul 10 00:15:33.959496 systemd-logind[1589]: Removed session 6. Jul 10 00:15:34.000867 sshd[1902]: Accepted publickey for core from 139.178.68.195 port 38898 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw Jul 10 00:15:34.001909 sshd-session[1902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:34.005380 systemd-logind[1589]: New session 7 of user core. Jul 10 00:15:34.010586 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 00:15:34.069814 sudo[1905]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:15:34.069988 sudo[1905]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:15:34.078842 sudo[1905]: pam_unix(sudo:session): session closed for user root Jul 10 00:15:34.079599 sshd[1904]: Connection closed by 139.178.68.195 port 38898 Jul 10 00:15:34.079903 sshd-session[1902]: pam_unix(sshd:session): session closed for user core Jul 10 00:15:34.095165 systemd[1]: sshd@4-139.178.70.100:22-139.178.68.195:38898.service: Deactivated successfully. Jul 10 00:15:34.096353 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 00:15:34.097192 systemd-logind[1589]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:15:34.098738 systemd-logind[1589]: Removed session 7. Jul 10 00:15:34.099581 systemd[1]: Started sshd@5-139.178.70.100:22-139.178.68.195:38910.service - OpenSSH per-connection server daemon (139.178.68.195:38910). 
Jul 10 00:15:34.146239 sshd[1911]: Accepted publickey for core from 139.178.68.195 port 38910 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw Jul 10 00:15:34.147071 sshd-session[1911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:34.150374 systemd-logind[1589]: New session 8 of user core. Jul 10 00:15:34.158601 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 00:15:34.207104 sudo[1915]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:15:34.207258 sudo[1915]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:15:34.210004 sudo[1915]: pam_unix(sudo:session): session closed for user root Jul 10 00:15:34.213062 sudo[1914]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 00:15:34.213213 sudo[1914]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:15:34.219465 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:15:34.246123 augenrules[1937]: No rules Jul 10 00:15:34.246848 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:15:34.247065 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:15:34.248495 sudo[1914]: pam_unix(sudo:session): session closed for user root Jul 10 00:15:34.249186 sshd[1913]: Connection closed by 139.178.68.195 port 38910 Jul 10 00:15:34.249443 sshd-session[1911]: pam_unix(sshd:session): session closed for user core Jul 10 00:15:34.254835 systemd[1]: sshd@5-139.178.70.100:22-139.178.68.195:38910.service: Deactivated successfully. Jul 10 00:15:34.255954 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:15:34.256488 systemd-logind[1589]: Session 8 logged out. Waiting for processes to exit. 
Jul 10 00:15:34.258275 systemd[1]: Started sshd@6-139.178.70.100:22-139.178.68.195:38916.service - OpenSSH per-connection server daemon (139.178.68.195:38916). Jul 10 00:15:34.258842 systemd-logind[1589]: Removed session 8. Jul 10 00:15:34.297118 sshd[1946]: Accepted publickey for core from 139.178.68.195 port 38916 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw Jul 10 00:15:34.297901 sshd-session[1946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:34.300428 systemd-logind[1589]: New session 9 of user core. Jul 10 00:15:34.310553 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 00:15:34.358105 sudo[1949]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:15:34.358560 sudo[1949]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:15:34.717341 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 00:15:34.721779 (dockerd)[1968]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 00:15:35.068162 dockerd[1968]: time="2025-07-10T00:15:35.067994650Z" level=info msg="Starting up" Jul 10 00:15:35.069473 dockerd[1968]: time="2025-07-10T00:15:35.069336278Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 00:15:35.131250 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1274030045-merged.mount: Deactivated successfully. Jul 10 00:15:35.220982 systemd[1]: var-lib-docker-metacopy\x2dcheck832336079-merged.mount: Deactivated successfully. Jul 10 00:15:35.247799 dockerd[1968]: time="2025-07-10T00:15:35.247746209Z" level=info msg="Loading containers: start." Jul 10 00:15:35.295478 kernel: Initializing XFRM netlink socket Jul 10 00:15:35.680377 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. 
Jul 10 00:15:35.736381 systemd-networkd[1540]: docker0: Link UP Jul 10 00:15:35.738033 dockerd[1968]: time="2025-07-10T00:15:35.738007899Z" level=info msg="Loading containers: done." Jul 10 00:15:35.751666 dockerd[1968]: time="2025-07-10T00:15:35.751631123Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:15:35.751781 dockerd[1968]: time="2025-07-10T00:15:35.751691319Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 10 00:15:35.751781 dockerd[1968]: time="2025-07-10T00:15:35.751757001Z" level=info msg="Initializing buildkit" Jul 10 00:17:02.237916 systemd-resolved[1485]: Clock change detected. Flushing caches. Jul 10 00:17:02.238091 systemd-timesyncd[1501]: Contacted time server 142.202.190.19:123 (2.flatcar.pool.ntp.org). Jul 10 00:17:02.238123 systemd-timesyncd[1501]: Initial clock synchronization to Thu 2025-07-10 00:17:02.237836 UTC. Jul 10 00:17:02.249857 dockerd[1968]: time="2025-07-10T00:17:02.249820853Z" level=info msg="Completed buildkit initialization" Jul 10 00:17:02.255556 dockerd[1968]: time="2025-07-10T00:17:02.255523978Z" level=info msg="Daemon has completed initialization" Jul 10 00:17:02.255772 dockerd[1968]: time="2025-07-10T00:17:02.255607518Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:17:02.255916 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 00:17:02.614721 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3412972260-merged.mount: Deactivated successfully. Jul 10 00:17:02.996304 containerd[1618]: time="2025-07-10T00:17:02.996071692Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 10 00:17:03.461553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157576721.mount: Deactivated successfully. 
Jul 10 00:17:04.249657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 10 00:17:04.252137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:17:04.457087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:17:04.464768 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:17:04.475203 containerd[1618]: time="2025-07-10T00:17:04.475174602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:04.478752 containerd[1618]: time="2025-07-10T00:17:04.478737983Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045"
Jul 10 00:17:04.482398 containerd[1618]: time="2025-07-10T00:17:04.482382489Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:04.490452 kubelet[2231]: E0710 00:17:04.490416 2231 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:17:04.491867 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:17:04.491964 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:17:04.492202 systemd[1]: kubelet.service: Consumed 88ms CPU time, 110.1M memory peak.
Jul 10 00:17:04.492846 containerd[1618]: time="2025-07-10T00:17:04.492828145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:04.493244 containerd[1618]: time="2025-07-10T00:17:04.493153188Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.496885783s"
Jul 10 00:17:04.493294 containerd[1618]: time="2025-07-10T00:17:04.493286052Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\""
Jul 10 00:17:04.493680 containerd[1618]: time="2025-07-10T00:17:04.493665233Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 10 00:17:05.747642 containerd[1618]: time="2025-07-10T00:17:05.746986264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:05.749015 containerd[1618]: time="2025-07-10T00:17:05.748998837Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912"
Jul 10 00:17:05.753872 containerd[1618]: time="2025-07-10T00:17:05.753858910Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:05.761881 containerd[1618]: time="2025-07-10T00:17:05.761858332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:05.762227 containerd[1618]: time="2025-07-10T00:17:05.762130527Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.26844622s"
Jul 10 00:17:05.762227 containerd[1618]: time="2025-07-10T00:17:05.762147501Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\""
Jul 10 00:17:05.762405 containerd[1618]: time="2025-07-10T00:17:05.762391250Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 10 00:17:07.170045 containerd[1618]: time="2025-07-10T00:17:07.170015080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:07.170563 containerd[1618]: time="2025-07-10T00:17:07.170546675Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916"
Jul 10 00:17:07.170808 containerd[1618]: time="2025-07-10T00:17:07.170794392Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:07.172578 containerd[1618]: time="2025-07-10T00:17:07.172561063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:07.173401 containerd[1618]: time="2025-07-10T00:17:07.173384126Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.410963332s"
Jul 10 00:17:07.173427 containerd[1618]: time="2025-07-10T00:17:07.173403923Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\""
Jul 10 00:17:07.173719 containerd[1618]: time="2025-07-10T00:17:07.173703617Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 10 00:17:08.400161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067538939.mount: Deactivated successfully.
Jul 10 00:17:08.773191 containerd[1618]: time="2025-07-10T00:17:08.772856502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:08.778095 containerd[1618]: time="2025-07-10T00:17:08.778082580Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363"
Jul 10 00:17:08.785211 containerd[1618]: time="2025-07-10T00:17:08.785194528Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:08.792621 containerd[1618]: time="2025-07-10T00:17:08.792599149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:08.792912 containerd[1618]: time="2025-07-10T00:17:08.792832054Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.619111606s"
Jul 10 00:17:08.792912 containerd[1618]: time="2025-07-10T00:17:08.792856153Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\""
Jul 10 00:17:08.793152 containerd[1618]: time="2025-07-10T00:17:08.793140909Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 10 00:17:10.347180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1654464457.mount: Deactivated successfully.
Jul 10 00:17:11.001463 containerd[1618]: time="2025-07-10T00:17:11.001267064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:11.003480 containerd[1618]: time="2025-07-10T00:17:11.003461001Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jul 10 00:17:11.010386 containerd[1618]: time="2025-07-10T00:17:11.010348855Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:11.015401 containerd[1618]: time="2025-07-10T00:17:11.015366611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:11.016083 containerd[1618]: time="2025-07-10T00:17:11.016000402Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.222650052s"
Jul 10 00:17:11.016083 containerd[1618]: time="2025-07-10T00:17:11.016020609Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 10 00:17:11.016461 containerd[1618]: time="2025-07-10T00:17:11.016350424Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 10 00:17:11.430150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2782284999.mount: Deactivated successfully.
Jul 10 00:17:11.432960 containerd[1618]: time="2025-07-10T00:17:11.432926501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:17:11.433315 containerd[1618]: time="2025-07-10T00:17:11.433304179Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 10 00:17:11.433371 containerd[1618]: time="2025-07-10T00:17:11.433333478Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:17:11.434399 containerd[1618]: time="2025-07-10T00:17:11.434388354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:17:11.434792 containerd[1618]: time="2025-07-10T00:17:11.434776171Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 418.41168ms"
Jul 10 00:17:11.434822 containerd[1618]: time="2025-07-10T00:17:11.434793633Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 10 00:17:11.435194 containerd[1618]: time="2025-07-10T00:17:11.435163129Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 10 00:17:12.087375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2568200229.mount: Deactivated successfully.
Jul 10 00:17:14.499803 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 10 00:17:14.501488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:17:15.134724 update_engine[1590]: I20250710 00:17:15.134672 1590 update_attempter.cc:509] Updating boot flags...
Jul 10 00:17:15.351588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:17:15.356618 (kubelet)[2387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:17:15.612322 kubelet[2387]: E0710 00:17:15.612280 2387 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:17:15.614095 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:17:15.614308 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:17:15.614625 systemd[1]: kubelet.service: Consumed 118ms CPU time, 105.2M memory peak.
Jul 10 00:17:15.907464 containerd[1618]: time="2025-07-10T00:17:15.906676238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:15.912953 containerd[1618]: time="2025-07-10T00:17:15.912919157Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
Jul 10 00:17:15.920010 containerd[1618]: time="2025-07-10T00:17:15.919997741Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:15.931976 containerd[1618]: time="2025-07-10T00:17:15.931953856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:17:15.932520 containerd[1618]: time="2025-07-10T00:17:15.932506805Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.497328446s"
Jul 10 00:17:15.932603 containerd[1618]: time="2025-07-10T00:17:15.932593495Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jul 10 00:17:17.791335 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:17:17.791682 systemd[1]: kubelet.service: Consumed 118ms CPU time, 105.2M memory peak.
Jul 10 00:17:17.793356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:17:17.811040 systemd[1]: Reload requested from client PID 2423 ('systemctl') (unit session-9.scope)...
Jul 10 00:17:17.811050 systemd[1]: Reloading...
Jul 10 00:17:17.896752 zram_generator::config[2473]: No configuration found.
Jul 10 00:17:17.952781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:17:17.961863 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jul 10 00:17:18.035598 systemd[1]: Reloading finished in 224 ms.
Jul 10 00:17:18.067515 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 10 00:17:18.067585 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 10 00:17:18.067949 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:17:18.070260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:17:18.443149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:17:18.453738 (kubelet)[2534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 00:17:18.478688 kubelet[2534]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:17:18.478900 kubelet[2534]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 10 00:17:18.478933 kubelet[2534]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:17:18.498688 kubelet[2534]: I0710 00:17:18.498660 2534 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 00:17:19.019109 kubelet[2534]: I0710 00:17:19.019087 2534 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 10 00:17:19.019247 kubelet[2534]: I0710 00:17:19.019241 2534 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 00:17:19.019474 kubelet[2534]: I0710 00:17:19.019461 2534 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 10 00:17:19.205475 kubelet[2534]: E0710 00:17:19.205423 2534 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:17:19.209055 kubelet[2534]: I0710 00:17:19.209029 2534 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:17:19.217117 kubelet[2534]: I0710 00:17:19.217088 2534 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 10 00:17:19.221131 kubelet[2534]: I0710 00:17:19.221109 2534 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 00:17:19.227052 kubelet[2534]: I0710 00:17:19.226997 2534 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 00:17:19.227235 kubelet[2534]: I0710 00:17:19.227052 2534 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 00:17:19.228591 kubelet[2534]: I0710 00:17:19.228566 2534 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 00:17:19.228655 kubelet[2534]: I0710 00:17:19.228593 2534 container_manager_linux.go:304] "Creating device plugin manager"
Jul 10 00:17:19.230459 kubelet[2534]: I0710 00:17:19.230416 2534 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:17:19.237678 kubelet[2534]: I0710 00:17:19.237549 2534 kubelet.go:446] "Attempting to sync node with API server"
Jul 10 00:17:19.237678 kubelet[2534]: I0710 00:17:19.237605 2534 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 00:17:19.239234 kubelet[2534]: I0710 00:17:19.239026 2534 kubelet.go:352] "Adding apiserver pod source"
Jul 10 00:17:19.239234 kubelet[2534]: I0710 00:17:19.239043 2534 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 00:17:19.241456 kubelet[2534]: W0710 00:17:19.240702 2534 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused
Jul 10 00:17:19.241456 kubelet[2534]: E0710 00:17:19.240745 2534 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:17:19.241569 kubelet[2534]: W0710 00:17:19.241486 2534 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused
Jul 10 00:17:19.241569 kubelet[2534]: E0710 00:17:19.241523 2534 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:17:19.243580 kubelet[2534]: I0710 00:17:19.243559 2534 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 10 00:17:19.247155 kubelet[2534]: I0710 00:17:19.246845 2534 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 10 00:17:19.247525 kubelet[2534]: W0710 00:17:19.247510 2534 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 10 00:17:19.251257 kubelet[2534]: I0710 00:17:19.251222 2534 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 10 00:17:19.251257 kubelet[2534]: I0710 00:17:19.251249 2534 server.go:1287] "Started kubelet"
Jul 10 00:17:19.262906 kubelet[2534]: I0710 00:17:19.262867 2534 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 00:17:19.286447 kubelet[2534]: I0710 00:17:19.285997 2534 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 00:17:19.286447 kubelet[2534]: I0710 00:17:19.286334 2534 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 00:17:19.295928 kubelet[2534]: I0710 00:17:19.295904 2534 server.go:479] "Adding debug handlers to kubelet server"
Jul 10 00:17:19.305414 kubelet[2534]: E0710 00:17:19.300884 2534 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.100:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bbb4912c66ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:17:19.251236554 +0000 UTC m=+0.795445532,LastTimestamp:2025-07-10 00:17:19.251236554 +0000 UTC m=+0.795445532,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 10 00:17:19.306051 kubelet[2534]: I0710 00:17:19.305749 2534 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 00:17:19.308452 kubelet[2534]: I0710 00:17:19.308269 2534 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 00:17:19.309931 kubelet[2534]: I0710 00:17:19.309216 2534 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 10 00:17:19.309931 kubelet[2534]: E0710 00:17:19.309351 2534 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:17:19.311455 kubelet[2534]: I0710 00:17:19.310821 2534 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 10 00:17:19.311455 kubelet[2534]: I0710 00:17:19.310851 2534 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 00:17:19.314582 kubelet[2534]: E0710 00:17:19.313904 2534 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="200ms"
Jul 10 00:17:19.314582 kubelet[2534]: W0710 00:17:19.314112 2534 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused
Jul 10 00:17:19.314582 kubelet[2534]: E0710 00:17:19.314140 2534 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:17:19.315916 kubelet[2534]: I0710 00:17:19.315906 2534 factory.go:221] Registration of the systemd container factory successfully
Jul 10 00:17:19.316050 kubelet[2534]: I0710 00:17:19.316029 2534 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 00:17:19.317721 kubelet[2534]: I0710 00:17:19.317696 2534 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 10 00:17:19.318351 kubelet[2534]: I0710 00:17:19.318337 2534 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 10 00:17:19.318351 kubelet[2534]: I0710 00:17:19.318351 2534 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 10 00:17:19.318426 kubelet[2534]: I0710 00:17:19.318362 2534 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 10 00:17:19.318426 kubelet[2534]: I0710 00:17:19.318368 2534 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 10 00:17:19.318426 kubelet[2534]: E0710 00:17:19.318407 2534 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 00:17:19.322154 kubelet[2534]: W0710 00:17:19.322100 2534 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused
Jul 10 00:17:19.322154 kubelet[2534]: E0710 00:17:19.322132 2534 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:17:19.322587 kubelet[2534]: E0710 00:17:19.322577 2534 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 00:17:19.322693 kubelet[2534]: I0710 00:17:19.322686 2534 factory.go:221] Registration of the containerd container factory successfully
Jul 10 00:17:19.341994 kubelet[2534]: I0710 00:17:19.341925 2534 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 10 00:17:19.341994 kubelet[2534]: I0710 00:17:19.341961 2534 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 10 00:17:19.342129 kubelet[2534]: I0710 00:17:19.342004 2534 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:17:19.343058 kubelet[2534]: I0710 00:17:19.343045 2534 policy_none.go:49] "None policy: Start"
Jul 10 00:17:19.343086 kubelet[2534]: I0710 00:17:19.343063 2534 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 10 00:17:19.343086 kubelet[2534]: I0710 00:17:19.343071 2534 state_mem.go:35] "Initializing new in-memory state store"
Jul 10 00:17:19.346627 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 10 00:17:19.360723 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 10 00:17:19.370323 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 10 00:17:19.371344 kubelet[2534]: I0710 00:17:19.371325 2534 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 10 00:17:19.371462 kubelet[2534]: I0710 00:17:19.371450 2534 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 10 00:17:19.371494 kubelet[2534]: I0710 00:17:19.371462 2534 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 10 00:17:19.373847 kubelet[2534]: E0710 00:17:19.373809 2534 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 10 00:17:19.373904 kubelet[2534]: E0710 00:17:19.373850 2534 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 10 00:17:19.376381 kubelet[2534]: I0710 00:17:19.376141 2534 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 10 00:17:19.426318 systemd[1]: Created slice kubepods-burstable-podfff6e4d96789cfd115678e034c5bbf53.slice - libcontainer container kubepods-burstable-podfff6e4d96789cfd115678e034c5bbf53.slice.
Jul 10 00:17:19.435801 kubelet[2534]: E0710 00:17:19.435778 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:17:19.437832 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice.
Jul 10 00:17:19.439560 kubelet[2534]: E0710 00:17:19.439459 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:17:19.441303 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice.
Jul 10 00:17:19.442986 kubelet[2534]: E0710 00:17:19.442974 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:17:19.473110 kubelet[2534]: I0710 00:17:19.473090 2534 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 10 00:17:19.473543 kubelet[2534]: E0710 00:17:19.473522 2534 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost"
Jul 10 00:17:19.512169 kubelet[2534]: I0710 00:17:19.512035 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fff6e4d96789cfd115678e034c5bbf53-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fff6e4d96789cfd115678e034c5bbf53\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:17:19.512169 kubelet[2534]: I0710 00:17:19.512062 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fff6e4d96789cfd115678e034c5bbf53-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fff6e4d96789cfd115678e034c5bbf53\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:17:19.512169 kubelet[2534]: I0710 00:17:19.512075 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:17:19.512169 kubelet[2534]: I0710 00:17:19.512087 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:17:19.512169 kubelet[2534]: I0710 00:17:19.512097 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:17:19.512502 kubelet[2534]: I0710 00:17:19.512108 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:17:19.512502 kubelet[2534]: I0710 00:17:19.512119 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:17:19.512502 kubelet[2534]: I0710 00:17:19.512138 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fff6e4d96789cfd115678e034c5bbf53-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fff6e4d96789cfd115678e034c5bbf53\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:17:19.512502 kubelet[2534]: I0710 00:17:19.512148 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 10 00:17:19.514363 kubelet[2534]: E0710 00:17:19.514340 2534 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="400ms"
Jul 10 00:17:19.675056 kubelet[2534]: I0710 00:17:19.674963 2534 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 10 00:17:19.675252 kubelet[2534]: E0710 00:17:19.675232 2534 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost"
Jul 10 00:17:19.739430 containerd[1618]: time="2025-07-10T00:17:19.739165723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fff6e4d96789cfd115678e034c5bbf53,Namespace:kube-system,Attempt:0,}"
Jul 10 00:17:19.740387 containerd[1618]: time="2025-07-10T00:17:19.740278371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}"
Jul 10 00:17:19.744392 containerd[1618]: time="2025-07-10T00:17:19.744220934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}"
Jul 10 00:17:19.846201 containerd[1618]: time="2025-07-10T00:17:19.846170214Z" level=info msg="connecting to shim 7a509c82c0ea82b8d771923c83e9fd4f2cb65bcbf1c289e6ac079b09935b0d28" address="unix:///run/containerd/s/608bc79bac7b77c281c5cc51d7ef10c408c0be8924f8402e3db7eb19bdb4e8bf" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:17:19.849020 containerd[1618]: time="2025-07-10T00:17:19.849000621Z" level=info msg="connecting to shim 48a7b0d08fd3c19fdbabfdaf1ce5616c09c2c8faf5252eab0f63179558499bd3" address="unix:///run/containerd/s/3ec068ac5c8d9a586c9bd1d573d574a3fdb584684e771b7d162bf7c8adb5ab8d" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:17:19.857688 containerd[1618]: time="2025-07-10T00:17:19.857637359Z" level=info msg="connecting to shim ab6ddc075262b8105e01a4cc58fc952d53c76d2f10ac86672af96d014fa5c58e" address="unix:///run/containerd/s/e370aa56cc2b71429aad417132542dec3382388a61c940fe9e990838c8049189" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:17:19.916455 kubelet[2534]: E0710 00:17:19.915711 2534 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="800ms"
Jul 10 00:17:19.962670 systemd[1]: Started cri-containerd-48a7b0d08fd3c19fdbabfdaf1ce5616c09c2c8faf5252eab0f63179558499bd3.scope - libcontainer container 48a7b0d08fd3c19fdbabfdaf1ce5616c09c2c8faf5252eab0f63179558499bd3.
Jul 10 00:17:19.964140 systemd[1]: Started cri-containerd-7a509c82c0ea82b8d771923c83e9fd4f2cb65bcbf1c289e6ac079b09935b0d28.scope - libcontainer container 7a509c82c0ea82b8d771923c83e9fd4f2cb65bcbf1c289e6ac079b09935b0d28.
Jul 10 00:17:19.965052 systemd[1]: Started cri-containerd-ab6ddc075262b8105e01a4cc58fc952d53c76d2f10ac86672af96d014fa5c58e.scope - libcontainer container ab6ddc075262b8105e01a4cc58fc952d53c76d2f10ac86672af96d014fa5c58e.
Jul 10 00:17:20.048684 containerd[1618]: time="2025-07-10T00:17:20.048571989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fff6e4d96789cfd115678e034c5bbf53,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a509c82c0ea82b8d771923c83e9fd4f2cb65bcbf1c289e6ac079b09935b0d28\"" Jul 10 00:17:20.052568 containerd[1618]: time="2025-07-10T00:17:20.052545312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"48a7b0d08fd3c19fdbabfdaf1ce5616c09c2c8faf5252eab0f63179558499bd3\"" Jul 10 00:17:20.053848 containerd[1618]: time="2025-07-10T00:17:20.053820861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab6ddc075262b8105e01a4cc58fc952d53c76d2f10ac86672af96d014fa5c58e\"" Jul 10 00:17:20.054934 containerd[1618]: time="2025-07-10T00:17:20.054835688Z" level=info msg="CreateContainer within sandbox \"7a509c82c0ea82b8d771923c83e9fd4f2cb65bcbf1c289e6ac079b09935b0d28\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:17:20.055637 containerd[1618]: time="2025-07-10T00:17:20.055435752Z" level=info msg="CreateContainer within sandbox \"48a7b0d08fd3c19fdbabfdaf1ce5616c09c2c8faf5252eab0f63179558499bd3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:17:20.057409 containerd[1618]: time="2025-07-10T00:17:20.057389720Z" level=info msg="CreateContainer within sandbox \"ab6ddc075262b8105e01a4cc58fc952d53c76d2f10ac86672af96d014fa5c58e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:17:20.068065 containerd[1618]: time="2025-07-10T00:17:20.068035299Z" level=info msg="Container 8be9ce0797550cfd9c63996fbef7d18647ba00ca9a04318b7f286ef5f06f11ad: CDI devices from CRI Config.CDIDevices: []" Jul 10 
00:17:20.069455 containerd[1618]: time="2025-07-10T00:17:20.068746637Z" level=info msg="Container 05fd2d3404b6e8006b5270da11f7a01ec311bc7f91d1134bf235c8686d78bb5d: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:17:20.070576 containerd[1618]: time="2025-07-10T00:17:20.070554288Z" level=info msg="Container 72d600d5fc79d5a5a6c1ee28c0fd5537880ef6d0a60bafe54e06e61df2f7f30c: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:17:20.075205 containerd[1618]: time="2025-07-10T00:17:20.075183380Z" level=info msg="CreateContainer within sandbox \"ab6ddc075262b8105e01a4cc58fc952d53c76d2f10ac86672af96d014fa5c58e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8be9ce0797550cfd9c63996fbef7d18647ba00ca9a04318b7f286ef5f06f11ad\"" Jul 10 00:17:20.076362 containerd[1618]: time="2025-07-10T00:17:20.076343937Z" level=info msg="StartContainer for \"8be9ce0797550cfd9c63996fbef7d18647ba00ca9a04318b7f286ef5f06f11ad\"" Jul 10 00:17:20.078584 containerd[1618]: time="2025-07-10T00:17:20.078565041Z" level=info msg="connecting to shim 8be9ce0797550cfd9c63996fbef7d18647ba00ca9a04318b7f286ef5f06f11ad" address="unix:///run/containerd/s/e370aa56cc2b71429aad417132542dec3382388a61c940fe9e990838c8049189" protocol=ttrpc version=3 Jul 10 00:17:20.078713 containerd[1618]: time="2025-07-10T00:17:20.078700930Z" level=info msg="CreateContainer within sandbox \"7a509c82c0ea82b8d771923c83e9fd4f2cb65bcbf1c289e6ac079b09935b0d28\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"05fd2d3404b6e8006b5270da11f7a01ec311bc7f91d1134bf235c8686d78bb5d\"" Jul 10 00:17:20.079093 containerd[1618]: time="2025-07-10T00:17:20.079081142Z" level=info msg="CreateContainer within sandbox \"48a7b0d08fd3c19fdbabfdaf1ce5616c09c2c8faf5252eab0f63179558499bd3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"72d600d5fc79d5a5a6c1ee28c0fd5537880ef6d0a60bafe54e06e61df2f7f30c\"" Jul 10 00:17:20.079788 kubelet[2534]: I0710 
00:17:20.079773 2534 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:17:20.079941 containerd[1618]: time="2025-07-10T00:17:20.079182917Z" level=info msg="StartContainer for \"05fd2d3404b6e8006b5270da11f7a01ec311bc7f91d1134bf235c8686d78bb5d\"" Jul 10 00:17:20.080118 kubelet[2534]: E0710 00:17:20.080107 2534 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Jul 10 00:17:20.080454 containerd[1618]: time="2025-07-10T00:17:20.079992660Z" level=info msg="StartContainer for \"72d600d5fc79d5a5a6c1ee28c0fd5537880ef6d0a60bafe54e06e61df2f7f30c\"" Jul 10 00:17:20.080868 containerd[1618]: time="2025-07-10T00:17:20.080854084Z" level=info msg="connecting to shim 72d600d5fc79d5a5a6c1ee28c0fd5537880ef6d0a60bafe54e06e61df2f7f30c" address="unix:///run/containerd/s/3ec068ac5c8d9a586c9bd1d573d574a3fdb584684e771b7d162bf7c8adb5ab8d" protocol=ttrpc version=3 Jul 10 00:17:20.082256 containerd[1618]: time="2025-07-10T00:17:20.082236911Z" level=info msg="connecting to shim 05fd2d3404b6e8006b5270da11f7a01ec311bc7f91d1134bf235c8686d78bb5d" address="unix:///run/containerd/s/608bc79bac7b77c281c5cc51d7ef10c408c0be8924f8402e3db7eb19bdb4e8bf" protocol=ttrpc version=3 Jul 10 00:17:20.100533 systemd[1]: Started cri-containerd-8be9ce0797550cfd9c63996fbef7d18647ba00ca9a04318b7f286ef5f06f11ad.scope - libcontainer container 8be9ce0797550cfd9c63996fbef7d18647ba00ca9a04318b7f286ef5f06f11ad. Jul 10 00:17:20.104748 systemd[1]: Started cri-containerd-05fd2d3404b6e8006b5270da11f7a01ec311bc7f91d1134bf235c8686d78bb5d.scope - libcontainer container 05fd2d3404b6e8006b5270da11f7a01ec311bc7f91d1134bf235c8686d78bb5d. Jul 10 00:17:20.106356 systemd[1]: Started cri-containerd-72d600d5fc79d5a5a6c1ee28c0fd5537880ef6d0a60bafe54e06e61df2f7f30c.scope - libcontainer container 72d600d5fc79d5a5a6c1ee28c0fd5537880ef6d0a60bafe54e06e61df2f7f30c. 
Jul 10 00:17:20.151904 containerd[1618]: time="2025-07-10T00:17:20.151880751Z" level=info msg="StartContainer for \"05fd2d3404b6e8006b5270da11f7a01ec311bc7f91d1134bf235c8686d78bb5d\" returns successfully" Jul 10 00:17:20.165997 containerd[1618]: time="2025-07-10T00:17:20.165971690Z" level=info msg="StartContainer for \"8be9ce0797550cfd9c63996fbef7d18647ba00ca9a04318b7f286ef5f06f11ad\" returns successfully" Jul 10 00:17:20.171125 containerd[1618]: time="2025-07-10T00:17:20.171069543Z" level=info msg="StartContainer for \"72d600d5fc79d5a5a6c1ee28c0fd5537880ef6d0a60bafe54e06e61df2f7f30c\" returns successfully" Jul 10 00:17:20.325572 kubelet[2534]: E0710 00:17:20.325554 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:17:20.326678 kubelet[2534]: W0710 00:17:20.326650 2534 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Jul 10 00:17:20.326717 kubelet[2534]: E0710 00:17:20.326682 2534 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:17:20.327258 kubelet[2534]: E0710 00:17:20.327248 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:17:20.329598 kubelet[2534]: E0710 00:17:20.329585 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:17:20.517704 
kubelet[2534]: W0710 00:17:20.517668 2534 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Jul 10 00:17:20.517929 kubelet[2534]: E0710 00:17:20.517708 2534 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:17:20.649333 kubelet[2534]: W0710 00:17:20.649035 2534 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Jul 10 00:17:20.649333 kubelet[2534]: E0710 00:17:20.649079 2534 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:17:20.706907 kubelet[2534]: W0710 00:17:20.706812 2534 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Jul 10 00:17:20.706907 kubelet[2534]: E0710 00:17:20.706872 2534 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:17:20.716673 kubelet[2534]: E0710 00:17:20.716649 2534 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="1.6s" Jul 10 00:17:20.881874 kubelet[2534]: I0710 00:17:20.881605 2534 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:17:20.881874 kubelet[2534]: E0710 00:17:20.881835 2534 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Jul 10 00:17:21.244310 kubelet[2534]: E0710 00:17:21.244273 2534 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:17:21.330325 kubelet[2534]: E0710 00:17:21.330242 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:17:21.330420 kubelet[2534]: E0710 00:17:21.330412 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:17:22.332462 kubelet[2534]: E0710 00:17:22.332255 2534 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 
10 00:17:22.483906 kubelet[2534]: I0710 00:17:22.483884 2534 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:17:22.556382 kubelet[2534]: I0710 00:17:22.556353 2534 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 00:17:22.556382 kubelet[2534]: E0710 00:17:22.556383 2534 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 10 00:17:22.568820 kubelet[2534]: E0710 00:17:22.568797 2534 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:17:22.669140 kubelet[2534]: E0710 00:17:22.669063 2534 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:17:22.769170 kubelet[2534]: E0710 00:17:22.769131 2534 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:17:22.869792 kubelet[2534]: E0710 00:17:22.869754 2534 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:17:22.970433 kubelet[2534]: E0710 00:17:22.970346 2534 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:17:23.070930 kubelet[2534]: E0710 00:17:23.070893 2534 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:17:23.145015 kubelet[2534]: E0710 00:17:23.144898 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:17:23.171461 kubelet[2534]: E0710 00:17:23.171424 2534 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:17:23.271792 kubelet[2534]: E0710 00:17:23.271723 2534 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:17:23.410018 kubelet[2534]: I0710 00:17:23.409830 2534 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:17:23.418368 kubelet[2534]: I0710 00:17:23.418340 2534 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:17:23.421673 kubelet[2534]: I0710 00:17:23.421645 2534 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:17:24.181617 systemd[1]: Reload requested from client PID 2803 ('systemctl') (unit session-9.scope)... Jul 10 00:17:24.181628 systemd[1]: Reloading... Jul 10 00:17:24.238481 zram_generator::config[2853]: No configuration found. Jul 10 00:17:24.248683 kubelet[2534]: I0710 00:17:24.247850 2534 apiserver.go:52] "Watching apiserver" Jul 10 00:17:24.310145 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:17:24.311196 kubelet[2534]: I0710 00:17:24.311172 2534 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:17:24.318349 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 10 00:17:24.395623 systemd[1]: Reloading finished in 213 ms. Jul 10 00:17:24.410703 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:17:24.425659 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:17:24.425802 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:17:24.425830 systemd[1]: kubelet.service: Consumed 757ms CPU time, 130.8M memory peak. 
Jul 10 00:17:24.427521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:17:24.809013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:17:24.815819 (kubelet)[2914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:17:24.863065 kubelet[2914]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:17:24.863065 kubelet[2914]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 00:17:24.863065 kubelet[2914]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:17:24.863065 kubelet[2914]: I0710 00:17:24.862991 2914 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:17:24.867873 kubelet[2914]: I0710 00:17:24.867856 2914 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 10 00:17:24.867873 kubelet[2914]: I0710 00:17:24.867870 2914 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:17:24.868004 kubelet[2914]: I0710 00:17:24.867992 2914 server.go:954] "Client rotation is on, will bootstrap in background" Jul 10 00:17:24.869431 kubelet[2914]: I0710 00:17:24.869418 2914 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 10 00:17:24.873687 kubelet[2914]: I0710 00:17:24.873675 2914 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:17:24.878101 kubelet[2914]: I0710 00:17:24.878056 2914 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:17:24.880448 kubelet[2914]: I0710 00:17:24.880207 2914 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 00:17:24.880448 kubelet[2914]: I0710 00:17:24.880315 2914 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:17:24.880636 kubelet[2914]: I0710 00:17:24.880327 2914 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:17:24.880695 kubelet[2914]: I0710 00:17:24.880640 2914 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:17:24.880695 kubelet[2914]: I0710 00:17:24.880647 2914 container_manager_linux.go:304] "Creating device plugin manager" Jul 10 00:17:24.880695 kubelet[2914]: I0710 00:17:24.880677 2914 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:17:24.880793 kubelet[2914]: I0710 00:17:24.880785 2914 kubelet.go:446] "Attempting to sync node with API server" Jul 10 00:17:24.880815 kubelet[2914]: I0710 00:17:24.880799 2914 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:17:24.880833 kubelet[2914]: I0710 00:17:24.880820 2914 kubelet.go:352] "Adding apiserver pod source" Jul 10 00:17:24.880833 kubelet[2914]: I0710 00:17:24.880828 2914 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:17:24.884443 kubelet[2914]: I0710 00:17:24.884426 2914 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:17:24.884717 kubelet[2914]: I0710 00:17:24.884709 2914 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:17:24.889318 kubelet[2914]: I0710 00:17:24.889309 2914 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:17:24.889386 kubelet[2914]: I0710 00:17:24.889381 2914 server.go:1287] "Started kubelet" Jul 10 00:17:24.895487 kubelet[2914]: I0710 00:17:24.895334 2914 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:17:24.897376 kubelet[2914]: I0710 00:17:24.897358 2914 
server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:17:24.898060 kubelet[2914]: I0710 00:17:24.897893 2914 server.go:479] "Adding debug handlers to kubelet server" Jul 10 00:17:24.898408 kubelet[2914]: I0710 00:17:24.898379 2914 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:17:24.898517 kubelet[2914]: I0710 00:17:24.898505 2914 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:17:24.898613 kubelet[2914]: I0710 00:17:24.898602 2914 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:17:24.900302 kubelet[2914]: I0710 00:17:24.900289 2914 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:17:24.900353 kubelet[2914]: I0710 00:17:24.900341 2914 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:17:24.900406 kubelet[2914]: I0710 00:17:24.900396 2914 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:17:24.901617 kubelet[2914]: I0710 00:17:24.901607 2914 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:17:24.901669 kubelet[2914]: I0710 00:17:24.901657 2914 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:17:24.902194 kubelet[2914]: I0710 00:17:24.902182 2914 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:17:24.904817 kubelet[2914]: I0710 00:17:24.904785 2914 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:17:24.905970 kubelet[2914]: I0710 00:17:24.905960 2914 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:17:24.906031 kubelet[2914]: I0710 00:17:24.906026 2914 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 00:17:24.906071 kubelet[2914]: I0710 00:17:24.906066 2914 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 00:17:24.906100 kubelet[2914]: I0710 00:17:24.906097 2914 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 00:17:24.907242 kubelet[2914]: E0710 00:17:24.907225 2914 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:17:24.936039 kubelet[2914]: I0710 00:17:24.936025 2914 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:17:24.936136 kubelet[2914]: I0710 00:17:24.936130 2914 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:17:24.936180 kubelet[2914]: I0710 00:17:24.936175 2914 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:17:24.936290 kubelet[2914]: I0710 00:17:24.936283 2914 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:17:24.936334 kubelet[2914]: I0710 00:17:24.936321 2914 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:17:24.936368 kubelet[2914]: I0710 00:17:24.936364 2914 policy_none.go:49] "None policy: Start" Jul 10 00:17:24.936401 kubelet[2914]: I0710 00:17:24.936397 2914 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:17:24.936433 kubelet[2914]: I0710 00:17:24.936429 2914 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:17:24.936532 kubelet[2914]: I0710 00:17:24.936526 2914 state_mem.go:75] "Updated machine memory state" Jul 10 00:17:24.939225 kubelet[2914]: I0710 00:17:24.939209 2914 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:17:24.939423 kubelet[2914]: I0710 
00:17:24.939300 2914 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:17:24.939423 kubelet[2914]: I0710 00:17:24.939308 2914 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:17:24.939423 kubelet[2914]: I0710 00:17:24.939414 2914 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:17:24.941827 kubelet[2914]: E0710 00:17:24.941814 2914 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:17:25.007884 kubelet[2914]: I0710 00:17:25.007856 2914 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:17:25.009639 kubelet[2914]: I0710 00:17:25.009319 2914 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:17:25.009639 kubelet[2914]: I0710 00:17:25.009335 2914 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:17:25.012425 kubelet[2914]: E0710 00:17:25.012393 2914 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 10 00:17:25.012500 kubelet[2914]: E0710 00:17:25.012464 2914 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 00:17:25.012784 kubelet[2914]: E0710 00:17:25.012772 2914 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:17:25.043758 kubelet[2914]: I0710 00:17:25.043735 2914 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:17:25.048412 kubelet[2914]: I0710 00:17:25.048390 2914 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Jul 10 00:17:25.048490 kubelet[2914]: I0710 00:17:25.048463 2914 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 00:17:25.102199 kubelet[2914]: I0710 00:17:25.102124 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fff6e4d96789cfd115678e034c5bbf53-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fff6e4d96789cfd115678e034c5bbf53\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:17:25.102199 kubelet[2914]: I0710 00:17:25.102161 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:17:25.102299 kubelet[2914]: I0710 00:17:25.102208 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:17:25.102299 kubelet[2914]: I0710 00:17:25.102228 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:17:25.102299 kubelet[2914]: I0710 00:17:25.102239 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fff6e4d96789cfd115678e034c5bbf53-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fff6e4d96789cfd115678e034c5bbf53\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:17:25.102299 kubelet[2914]: I0710 00:17:25.102247 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fff6e4d96789cfd115678e034c5bbf53-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fff6e4d96789cfd115678e034c5bbf53\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:17:25.102299 kubelet[2914]: I0710 00:17:25.102256 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:17:25.102379 kubelet[2914]: I0710 00:17:25.102288 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:17:25.102379 kubelet[2914]: I0710 00:17:25.102298 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:17:25.882009 kubelet[2914]: I0710 00:17:25.881980 2914 apiserver.go:52] "Watching apiserver" Jul 10 00:17:25.900929 kubelet[2914]: I0710 00:17:25.900876 2914 desired_state_of_world_populator.go:158] 
"Finished populating initial desired state of world" Jul 10 00:17:25.931335 kubelet[2914]: I0710 00:17:25.931235 2914 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:17:25.931512 kubelet[2914]: I0710 00:17:25.931505 2914 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:17:25.934261 kubelet[2914]: E0710 00:17:25.934243 2914 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 10 00:17:25.934410 kubelet[2914]: E0710 00:17:25.934400 2914 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 00:17:25.943879 kubelet[2914]: I0710 00:17:25.943758 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.943737844 podStartE2EDuration="2.943737844s" podCreationTimestamp="2025-07-10 00:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:17:25.943405161 +0000 UTC m=+1.106422159" watchObservedRunningTime="2025-07-10 00:17:25.943737844 +0000 UTC m=+1.106754839" Jul 10 00:17:25.948718 kubelet[2914]: I0710 00:17:25.948687 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.948676242 podStartE2EDuration="2.948676242s" podCreationTimestamp="2025-07-10 00:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:17:25.948455796 +0000 UTC m=+1.111472795" watchObservedRunningTime="2025-07-10 00:17:25.948676242 +0000 UTC m=+1.111693247" Jul 10 00:17:25.964463 kubelet[2914]: I0710 00:17:25.964333 2914 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.964319692 podStartE2EDuration="2.964319692s" podCreationTimestamp="2025-07-10 00:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:17:25.954727235 +0000 UTC m=+1.117744232" watchObservedRunningTime="2025-07-10 00:17:25.964319692 +0000 UTC m=+1.127336687" Jul 10 00:17:28.517677 kubelet[2914]: I0710 00:17:28.517591 2914 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:17:28.517905 containerd[1618]: time="2025-07-10T00:17:28.517780023Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:17:28.518460 kubelet[2914]: I0710 00:17:28.518073 2914 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:17:29.526896 kubelet[2914]: I0710 00:17:29.526873 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b12793d4-ebcf-449c-a3e6-7a44efce8e35-var-lib-calico\") pod \"tigera-operator-747864d56d-s8ckt\" (UID: \"b12793d4-ebcf-449c-a3e6-7a44efce8e35\") " pod="tigera-operator/tigera-operator-747864d56d-s8ckt" Jul 10 00:17:29.527091 kubelet[2914]: I0710 00:17:29.526904 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k56s8\" (UniqueName: \"kubernetes.io/projected/b12793d4-ebcf-449c-a3e6-7a44efce8e35-kube-api-access-k56s8\") pod \"tigera-operator-747864d56d-s8ckt\" (UID: \"b12793d4-ebcf-449c-a3e6-7a44efce8e35\") " pod="tigera-operator/tigera-operator-747864d56d-s8ckt" Jul 10 00:17:29.527145 systemd[1]: Created slice kubepods-besteffort-podb12793d4_ebcf_449c_a3e6_7a44efce8e35.slice - libcontainer 
container kubepods-besteffort-podb12793d4_ebcf_449c_a3e6_7a44efce8e35.slice. Jul 10 00:17:29.569482 systemd[1]: Created slice kubepods-besteffort-podc5b07c37_80df_4697_9b66_d7cd81c54477.slice - libcontainer container kubepods-besteffort-podc5b07c37_80df_4697_9b66_d7cd81c54477.slice. Jul 10 00:17:29.627272 kubelet[2914]: I0710 00:17:29.627244 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5b07c37-80df-4697-9b66-d7cd81c54477-xtables-lock\") pod \"kube-proxy-zh4nr\" (UID: \"c5b07c37-80df-4697-9b66-d7cd81c54477\") " pod="kube-system/kube-proxy-zh4nr" Jul 10 00:17:29.627272 kubelet[2914]: I0710 00:17:29.627269 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5b07c37-80df-4697-9b66-d7cd81c54477-lib-modules\") pod \"kube-proxy-zh4nr\" (UID: \"c5b07c37-80df-4697-9b66-d7cd81c54477\") " pod="kube-system/kube-proxy-zh4nr" Jul 10 00:17:29.627532 kubelet[2914]: I0710 00:17:29.627518 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c5b07c37-80df-4697-9b66-d7cd81c54477-kube-proxy\") pod \"kube-proxy-zh4nr\" (UID: \"c5b07c37-80df-4697-9b66-d7cd81c54477\") " pod="kube-system/kube-proxy-zh4nr" Jul 10 00:17:29.627573 kubelet[2914]: I0710 00:17:29.627535 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lj6d\" (UniqueName: \"kubernetes.io/projected/c5b07c37-80df-4697-9b66-d7cd81c54477-kube-api-access-8lj6d\") pod \"kube-proxy-zh4nr\" (UID: \"c5b07c37-80df-4697-9b66-d7cd81c54477\") " pod="kube-system/kube-proxy-zh4nr" Jul 10 00:17:29.836534 containerd[1618]: time="2025-07-10T00:17:29.836463642Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-747864d56d-s8ckt,Uid:b12793d4-ebcf-449c-a3e6-7a44efce8e35,Namespace:tigera-operator,Attempt:0,}" Jul 10 00:17:29.848354 containerd[1618]: time="2025-07-10T00:17:29.848018482Z" level=info msg="connecting to shim b26c179477db25acd043d3c0d4e0f6d5c4ba1c2e2880bef8131b1362f15a706b" address="unix:///run/containerd/s/847ae7210d1481afcfb91e8dd408a7349ca1616494c1e727fe2fd256d2eb160c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:17:29.867573 systemd[1]: Started cri-containerd-b26c179477db25acd043d3c0d4e0f6d5c4ba1c2e2880bef8131b1362f15a706b.scope - libcontainer container b26c179477db25acd043d3c0d4e0f6d5c4ba1c2e2880bef8131b1362f15a706b. Jul 10 00:17:29.873662 containerd[1618]: time="2025-07-10T00:17:29.873634399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zh4nr,Uid:c5b07c37-80df-4697-9b66-d7cd81c54477,Namespace:kube-system,Attempt:0,}" Jul 10 00:17:29.938984 containerd[1618]: time="2025-07-10T00:17:29.938958520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-s8ckt,Uid:b12793d4-ebcf-449c-a3e6-7a44efce8e35,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b26c179477db25acd043d3c0d4e0f6d5c4ba1c2e2880bef8131b1362f15a706b\"" Jul 10 00:17:29.940025 containerd[1618]: time="2025-07-10T00:17:29.940003972Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 10 00:17:29.991352 containerd[1618]: time="2025-07-10T00:17:29.991303176Z" level=info msg="connecting to shim 8ce7f9548695ccbb2059b9433bfe19b7190c38a8b3ef6089f8b0b9d503fdf504" address="unix:///run/containerd/s/a3b309bfb46436ab27a8d0942458414a531ec894365854cc8349f44aa7ad4b37" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:17:30.012622 systemd[1]: Started cri-containerd-8ce7f9548695ccbb2059b9433bfe19b7190c38a8b3ef6089f8b0b9d503fdf504.scope - libcontainer container 8ce7f9548695ccbb2059b9433bfe19b7190c38a8b3ef6089f8b0b9d503fdf504. 
Jul 10 00:17:30.028912 containerd[1618]: time="2025-07-10T00:17:30.028869887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zh4nr,Uid:c5b07c37-80df-4697-9b66-d7cd81c54477,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ce7f9548695ccbb2059b9433bfe19b7190c38a8b3ef6089f8b0b9d503fdf504\"" Jul 10 00:17:30.031285 containerd[1618]: time="2025-07-10T00:17:30.031256315Z" level=info msg="CreateContainer within sandbox \"8ce7f9548695ccbb2059b9433bfe19b7190c38a8b3ef6089f8b0b9d503fdf504\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:17:30.036233 containerd[1618]: time="2025-07-10T00:17:30.036197206Z" level=info msg="Container 36ad4a360f54c1c1a4c23c406bd0181e9f516e8e647ce18324235b7b4329f153: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:17:30.039238 containerd[1618]: time="2025-07-10T00:17:30.039210046Z" level=info msg="CreateContainer within sandbox \"8ce7f9548695ccbb2059b9433bfe19b7190c38a8b3ef6089f8b0b9d503fdf504\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"36ad4a360f54c1c1a4c23c406bd0181e9f516e8e647ce18324235b7b4329f153\"" Jul 10 00:17:30.039681 containerd[1618]: time="2025-07-10T00:17:30.039634850Z" level=info msg="StartContainer for \"36ad4a360f54c1c1a4c23c406bd0181e9f516e8e647ce18324235b7b4329f153\"" Jul 10 00:17:30.041499 containerd[1618]: time="2025-07-10T00:17:30.041472940Z" level=info msg="connecting to shim 36ad4a360f54c1c1a4c23c406bd0181e9f516e8e647ce18324235b7b4329f153" address="unix:///run/containerd/s/a3b309bfb46436ab27a8d0942458414a531ec894365854cc8349f44aa7ad4b37" protocol=ttrpc version=3 Jul 10 00:17:30.056562 systemd[1]: Started cri-containerd-36ad4a360f54c1c1a4c23c406bd0181e9f516e8e647ce18324235b7b4329f153.scope - libcontainer container 36ad4a360f54c1c1a4c23c406bd0181e9f516e8e647ce18324235b7b4329f153. 
Jul 10 00:17:30.080335 containerd[1618]: time="2025-07-10T00:17:30.080310099Z" level=info msg="StartContainer for \"36ad4a360f54c1c1a4c23c406bd0181e9f516e8e647ce18324235b7b4329f153\" returns successfully" Jul 10 00:17:30.948623 kubelet[2914]: I0710 00:17:30.948577 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zh4nr" podStartSLOduration=1.948565432 podStartE2EDuration="1.948565432s" podCreationTimestamp="2025-07-10 00:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:17:30.943666814 +0000 UTC m=+6.106683818" watchObservedRunningTime="2025-07-10 00:17:30.948565432 +0000 UTC m=+6.111582431" Jul 10 00:17:31.346315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234115165.mount: Deactivated successfully. Jul 10 00:17:32.677866 containerd[1618]: time="2025-07-10T00:17:32.677839184Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:17:32.678433 containerd[1618]: time="2025-07-10T00:17:32.678408126Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 10 00:17:32.678635 containerd[1618]: time="2025-07-10T00:17:32.678620437Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:17:32.679794 containerd[1618]: time="2025-07-10T00:17:32.679778550Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:17:32.680536 containerd[1618]: time="2025-07-10T00:17:32.680520705Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id 
\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.740442323s" Jul 10 00:17:32.680562 containerd[1618]: time="2025-07-10T00:17:32.680547353Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 10 00:17:32.682002 containerd[1618]: time="2025-07-10T00:17:32.681974648Z" level=info msg="CreateContainer within sandbox \"b26c179477db25acd043d3c0d4e0f6d5c4ba1c2e2880bef8131b1362f15a706b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 10 00:17:32.686683 containerd[1618]: time="2025-07-10T00:17:32.686513163Z" level=info msg="Container 85ae5648ec741eefccf36e1066e7181e44b7f479c7a35f06aa6ade1d270ff81a: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:17:32.689374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4105899766.mount: Deactivated successfully. 
Jul 10 00:17:32.692157 containerd[1618]: time="2025-07-10T00:17:32.692103184Z" level=info msg="CreateContainer within sandbox \"b26c179477db25acd043d3c0d4e0f6d5c4ba1c2e2880bef8131b1362f15a706b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"85ae5648ec741eefccf36e1066e7181e44b7f479c7a35f06aa6ade1d270ff81a\"" Jul 10 00:17:32.693944 containerd[1618]: time="2025-07-10T00:17:32.693924996Z" level=info msg="StartContainer for \"85ae5648ec741eefccf36e1066e7181e44b7f479c7a35f06aa6ade1d270ff81a\"" Jul 10 00:17:32.694607 containerd[1618]: time="2025-07-10T00:17:32.694586937Z" level=info msg="connecting to shim 85ae5648ec741eefccf36e1066e7181e44b7f479c7a35f06aa6ade1d270ff81a" address="unix:///run/containerd/s/847ae7210d1481afcfb91e8dd408a7349ca1616494c1e727fe2fd256d2eb160c" protocol=ttrpc version=3 Jul 10 00:17:32.714663 systemd[1]: Started cri-containerd-85ae5648ec741eefccf36e1066e7181e44b7f479c7a35f06aa6ade1d270ff81a.scope - libcontainer container 85ae5648ec741eefccf36e1066e7181e44b7f479c7a35f06aa6ade1d270ff81a. 
Jul 10 00:17:32.730168 containerd[1618]: time="2025-07-10T00:17:32.730143956Z" level=info msg="StartContainer for \"85ae5648ec741eefccf36e1066e7181e44b7f479c7a35f06aa6ade1d270ff81a\" returns successfully" Jul 10 00:17:32.951163 kubelet[2914]: I0710 00:17:32.950936 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-s8ckt" podStartSLOduration=1.20975988 podStartE2EDuration="3.950922924s" podCreationTimestamp="2025-07-10 00:17:29 +0000 UTC" firstStartedPulling="2025-07-10 00:17:29.939776337 +0000 UTC m=+5.102793332" lastFinishedPulling="2025-07-10 00:17:32.680939379 +0000 UTC m=+7.843956376" observedRunningTime="2025-07-10 00:17:32.950878832 +0000 UTC m=+8.113895838" watchObservedRunningTime="2025-07-10 00:17:32.950922924 +0000 UTC m=+8.113939931" Jul 10 00:17:37.871894 sudo[1949]: pam_unix(sudo:session): session closed for user root Jul 10 00:17:37.872764 sshd[1948]: Connection closed by 139.178.68.195 port 38916 Jul 10 00:17:37.873372 sshd-session[1946]: pam_unix(sshd:session): session closed for user core Jul 10 00:17:37.876410 systemd-logind[1589]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:17:37.877683 systemd[1]: sshd@6-139.178.70.100:22-139.178.68.195:38916.service: Deactivated successfully. Jul 10 00:17:37.879662 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:17:37.880007 systemd[1]: session-9.scope: Consumed 2.784s CPU time, 151.6M memory peak. Jul 10 00:17:37.883485 systemd-logind[1589]: Removed session 9. 
Jul 10 00:17:40.700723 kubelet[2914]: I0710 00:17:40.700692 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd2b798e-dccf-4e82-8ddd-4c6cafdec83b-tigera-ca-bundle\") pod \"calico-typha-575c59d687-hhdx2\" (UID: \"cd2b798e-dccf-4e82-8ddd-4c6cafdec83b\") " pod="calico-system/calico-typha-575c59d687-hhdx2" Jul 10 00:17:40.700966 kubelet[2914]: I0710 00:17:40.700805 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx994\" (UniqueName: \"kubernetes.io/projected/cd2b798e-dccf-4e82-8ddd-4c6cafdec83b-kube-api-access-tx994\") pod \"calico-typha-575c59d687-hhdx2\" (UID: \"cd2b798e-dccf-4e82-8ddd-4c6cafdec83b\") " pod="calico-system/calico-typha-575c59d687-hhdx2" Jul 10 00:17:40.700966 kubelet[2914]: I0710 00:17:40.700819 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cd2b798e-dccf-4e82-8ddd-4c6cafdec83b-typha-certs\") pod \"calico-typha-575c59d687-hhdx2\" (UID: \"cd2b798e-dccf-4e82-8ddd-4c6cafdec83b\") " pod="calico-system/calico-typha-575c59d687-hhdx2" Jul 10 00:17:40.714535 systemd[1]: Created slice kubepods-besteffort-podcd2b798e_dccf_4e82_8ddd_4c6cafdec83b.slice - libcontainer container kubepods-besteffort-podcd2b798e_dccf_4e82_8ddd_4c6cafdec83b.slice. Jul 10 00:17:40.959823 systemd[1]: Created slice kubepods-besteffort-podea196422_e456_4aa7_8855_978b471290d5.slice - libcontainer container kubepods-besteffort-podea196422_e456_4aa7_8855_978b471290d5.slice. 
Jul 10 00:17:41.003387 kubelet[2914]: I0710 00:17:41.003331 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ea196422-e456-4aa7-8855-978b471290d5-cni-net-dir\") pod \"calico-node-g6gmj\" (UID: \"ea196422-e456-4aa7-8855-978b471290d5\") " pod="calico-system/calico-node-g6gmj" Jul 10 00:17:41.003387 kubelet[2914]: I0710 00:17:41.003366 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ea196422-e456-4aa7-8855-978b471290d5-flexvol-driver-host\") pod \"calico-node-g6gmj\" (UID: \"ea196422-e456-4aa7-8855-978b471290d5\") " pod="calico-system/calico-node-g6gmj" Jul 10 00:17:41.003883 kubelet[2914]: I0710 00:17:41.003639 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ea196422-e456-4aa7-8855-978b471290d5-node-certs\") pod \"calico-node-g6gmj\" (UID: \"ea196422-e456-4aa7-8855-978b471290d5\") " pod="calico-system/calico-node-g6gmj" Jul 10 00:17:41.003883 kubelet[2914]: I0710 00:17:41.003662 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ea196422-e456-4aa7-8855-978b471290d5-policysync\") pod \"calico-node-g6gmj\" (UID: \"ea196422-e456-4aa7-8855-978b471290d5\") " pod="calico-system/calico-node-g6gmj" Jul 10 00:17:41.003883 kubelet[2914]: I0710 00:17:41.003688 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ea196422-e456-4aa7-8855-978b471290d5-cni-bin-dir\") pod \"calico-node-g6gmj\" (UID: \"ea196422-e456-4aa7-8855-978b471290d5\") " pod="calico-system/calico-node-g6gmj" Jul 10 00:17:41.003883 kubelet[2914]: I0710 00:17:41.003717 2914 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea196422-e456-4aa7-8855-978b471290d5-lib-modules\") pod \"calico-node-g6gmj\" (UID: \"ea196422-e456-4aa7-8855-978b471290d5\") " pod="calico-system/calico-node-g6gmj" Jul 10 00:17:41.003883 kubelet[2914]: I0710 00:17:41.003736 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ea196422-e456-4aa7-8855-978b471290d5-var-run-calico\") pod \"calico-node-g6gmj\" (UID: \"ea196422-e456-4aa7-8855-978b471290d5\") " pod="calico-system/calico-node-g6gmj" Jul 10 00:17:41.004001 kubelet[2914]: I0710 00:17:41.003748 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ea196422-e456-4aa7-8855-978b471290d5-cni-log-dir\") pod \"calico-node-g6gmj\" (UID: \"ea196422-e456-4aa7-8855-978b471290d5\") " pod="calico-system/calico-node-g6gmj" Jul 10 00:17:41.004290 kubelet[2914]: I0710 00:17:41.004126 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea196422-e456-4aa7-8855-978b471290d5-xtables-lock\") pod \"calico-node-g6gmj\" (UID: \"ea196422-e456-4aa7-8855-978b471290d5\") " pod="calico-system/calico-node-g6gmj" Jul 10 00:17:41.004290 kubelet[2914]: I0710 00:17:41.004149 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea196422-e456-4aa7-8855-978b471290d5-tigera-ca-bundle\") pod \"calico-node-g6gmj\" (UID: \"ea196422-e456-4aa7-8855-978b471290d5\") " pod="calico-system/calico-node-g6gmj" Jul 10 00:17:41.004290 kubelet[2914]: I0710 00:17:41.004160 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ea196422-e456-4aa7-8855-978b471290d5-var-lib-calico\") pod \"calico-node-g6gmj\" (UID: \"ea196422-e456-4aa7-8855-978b471290d5\") " pod="calico-system/calico-node-g6gmj" Jul 10 00:17:41.004290 kubelet[2914]: I0710 00:17:41.004170 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v26vc\" (UniqueName: \"kubernetes.io/projected/ea196422-e456-4aa7-8855-978b471290d5-kube-api-access-v26vc\") pod \"calico-node-g6gmj\" (UID: \"ea196422-e456-4aa7-8855-978b471290d5\") " pod="calico-system/calico-node-g6gmj" Jul 10 00:17:41.035246 containerd[1618]: time="2025-07-10T00:17:41.035191289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-575c59d687-hhdx2,Uid:cd2b798e-dccf-4e82-8ddd-4c6cafdec83b,Namespace:calico-system,Attempt:0,}" Jul 10 00:17:41.186322 containerd[1618]: time="2025-07-10T00:17:41.186270934Z" level=info msg="connecting to shim 8b34190aac7297d943a69c71bc72183947aed2396b5c21d447e0f2b3fb2f6787" address="unix:///run/containerd/s/941c93bb650d6a6f05d9c8f461c2a13104cf954342b5f5266a9deff5335756bb" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:17:41.204684 systemd[1]: Started cri-containerd-8b34190aac7297d943a69c71bc72183947aed2396b5c21d447e0f2b3fb2f6787.scope - libcontainer container 8b34190aac7297d943a69c71bc72183947aed2396b5c21d447e0f2b3fb2f6787. 
Jul 10 00:17:41.247585 kubelet[2914]: E0710 00:17:41.247191 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pblgr" podUID="85f38ee8-8868-4fd3-804f-d03ca4c958a9" Jul 10 00:17:41.259106 containerd[1618]: time="2025-07-10T00:17:41.259079000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-575c59d687-hhdx2,Uid:cd2b798e-dccf-4e82-8ddd-4c6cafdec83b,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b34190aac7297d943a69c71bc72183947aed2396b5c21d447e0f2b3fb2f6787\"" Jul 10 00:17:41.260114 containerd[1618]: time="2025-07-10T00:17:41.260093900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 10 00:17:41.264964 containerd[1618]: time="2025-07-10T00:17:41.264867118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g6gmj,Uid:ea196422-e456-4aa7-8855-978b471290d5,Namespace:calico-system,Attempt:0,}" Jul 10 00:17:41.290750 kubelet[2914]: E0710 00:17:41.289283 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.290750 kubelet[2914]: W0710 00:17:41.289304 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.297311 kubelet[2914]: E0710 00:17:41.297284 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.297481 kubelet[2914]: E0710 00:17:41.297467 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.297516 kubelet[2914]: W0710 00:17:41.297479 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.297516 kubelet[2914]: E0710 00:17:41.297493 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.297622 kubelet[2914]: E0710 00:17:41.297609 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.297622 kubelet[2914]: W0710 00:17:41.297618 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.297664 kubelet[2914]: E0710 00:17:41.297623 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.297775 kubelet[2914]: E0710 00:17:41.297764 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.297775 kubelet[2914]: W0710 00:17:41.297772 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.297820 kubelet[2914]: E0710 00:17:41.297778 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.297896 kubelet[2914]: E0710 00:17:41.297885 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.297896 kubelet[2914]: W0710 00:17:41.297895 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.297933 kubelet[2914]: E0710 00:17:41.297903 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.297991 kubelet[2914]: E0710 00:17:41.297979 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.297991 kubelet[2914]: W0710 00:17:41.297989 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.298042 kubelet[2914]: E0710 00:17:41.297996 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.298079 kubelet[2914]: E0710 00:17:41.298067 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.298098 kubelet[2914]: W0710 00:17:41.298076 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.298098 kubelet[2914]: E0710 00:17:41.298083 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.298173 kubelet[2914]: E0710 00:17:41.298161 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.298173 kubelet[2914]: W0710 00:17:41.298170 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.298213 kubelet[2914]: E0710 00:17:41.298175 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.298266 kubelet[2914]: E0710 00:17:41.298256 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.298266 kubelet[2914]: W0710 00:17:41.298263 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.298306 kubelet[2914]: E0710 00:17:41.298268 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.298353 kubelet[2914]: E0710 00:17:41.298342 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.298353 kubelet[2914]: W0710 00:17:41.298350 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.298392 kubelet[2914]: E0710 00:17:41.298354 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.298457 kubelet[2914]: E0710 00:17:41.298446 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.298457 kubelet[2914]: W0710 00:17:41.298453 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.298500 kubelet[2914]: E0710 00:17:41.298458 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.298561 kubelet[2914]: E0710 00:17:41.298549 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.298583 kubelet[2914]: W0710 00:17:41.298560 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.298583 kubelet[2914]: E0710 00:17:41.298568 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.298664 kubelet[2914]: E0710 00:17:41.298652 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.298664 kubelet[2914]: W0710 00:17:41.298662 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.298699 kubelet[2914]: E0710 00:17:41.298669 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.298759 kubelet[2914]: E0710 00:17:41.298748 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.298759 kubelet[2914]: W0710 00:17:41.298757 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.298830 kubelet[2914]: E0710 00:17:41.298762 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.298853 kubelet[2914]: E0710 00:17:41.298850 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.298870 kubelet[2914]: W0710 00:17:41.298854 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.298870 kubelet[2914]: E0710 00:17:41.298859 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.298951 kubelet[2914]: E0710 00:17:41.298940 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.298951 kubelet[2914]: W0710 00:17:41.298948 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.299006 kubelet[2914]: E0710 00:17:41.298953 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.299044 kubelet[2914]: E0710 00:17:41.299034 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.299044 kubelet[2914]: W0710 00:17:41.299040 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.299044 kubelet[2914]: E0710 00:17:41.299045 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.299128 kubelet[2914]: E0710 00:17:41.299118 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.299128 kubelet[2914]: W0710 00:17:41.299126 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.299168 kubelet[2914]: E0710 00:17:41.299131 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.299256 kubelet[2914]: E0710 00:17:41.299198 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.299256 kubelet[2914]: W0710 00:17:41.299207 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.299256 kubelet[2914]: E0710 00:17:41.299211 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.299312 kubelet[2914]: E0710 00:17:41.299292 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.299312 kubelet[2914]: W0710 00:17:41.299298 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.299312 kubelet[2914]: E0710 00:17:41.299302 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.306640 kubelet[2914]: E0710 00:17:41.306608 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.306640 kubelet[2914]: W0710 00:17:41.306637 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.306843 kubelet[2914]: E0710 00:17:41.306652 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.306843 kubelet[2914]: I0710 00:17:41.306690 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/85f38ee8-8868-4fd3-804f-d03ca4c958a9-registration-dir\") pod \"csi-node-driver-pblgr\" (UID: \"85f38ee8-8868-4fd3-804f-d03ca4c958a9\") " pod="calico-system/csi-node-driver-pblgr" Jul 10 00:17:41.306843 kubelet[2914]: E0710 00:17:41.306826 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.306843 kubelet[2914]: W0710 00:17:41.306838 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.306919 kubelet[2914]: E0710 00:17:41.306850 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.306919 kubelet[2914]: I0710 00:17:41.306861 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85f38ee8-8868-4fd3-804f-d03ca4c958a9-kubelet-dir\") pod \"csi-node-driver-pblgr\" (UID: \"85f38ee8-8868-4fd3-804f-d03ca4c958a9\") " pod="calico-system/csi-node-driver-pblgr" Jul 10 00:17:41.306956 kubelet[2914]: E0710 00:17:41.306943 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.307425 kubelet[2914]: W0710 00:17:41.306948 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.307425 kubelet[2914]: E0710 00:17:41.307008 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.307425 kubelet[2914]: I0710 00:17:41.307018 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/85f38ee8-8868-4fd3-804f-d03ca4c958a9-socket-dir\") pod \"csi-node-driver-pblgr\" (UID: \"85f38ee8-8868-4fd3-804f-d03ca4c958a9\") " pod="calico-system/csi-node-driver-pblgr" Jul 10 00:17:41.307544 kubelet[2914]: E0710 00:17:41.307495 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.307544 kubelet[2914]: W0710 00:17:41.307502 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.307544 kubelet[2914]: E0710 00:17:41.307515 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.321279 kubelet[2914]: I0710 00:17:41.307616 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/85f38ee8-8868-4fd3-804f-d03ca4c958a9-varrun\") pod \"csi-node-driver-pblgr\" (UID: \"85f38ee8-8868-4fd3-804f-d03ca4c958a9\") " pod="calico-system/csi-node-driver-pblgr" Jul 10 00:17:41.321279 kubelet[2914]: E0710 00:17:41.307645 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.321279 kubelet[2914]: W0710 00:17:41.307650 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.321279 kubelet[2914]: E0710 00:17:41.307656 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.321279 kubelet[2914]: E0710 00:17:41.307761 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.321279 kubelet[2914]: W0710 00:17:41.307766 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.321279 kubelet[2914]: E0710 00:17:41.307774 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.321279 kubelet[2914]: E0710 00:17:41.308063 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.321279 kubelet[2914]: W0710 00:17:41.308069 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.321500 kubelet[2914]: E0710 00:17:41.308083 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.321500 kubelet[2914]: E0710 00:17:41.308297 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.321500 kubelet[2914]: W0710 00:17:41.308305 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.321500 kubelet[2914]: E0710 00:17:41.308313 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.321500 kubelet[2914]: E0710 00:17:41.308670 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.321500 kubelet[2914]: W0710 00:17:41.308676 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.321500 kubelet[2914]: E0710 00:17:41.308686 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.321500 kubelet[2914]: E0710 00:17:41.308796 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.321500 kubelet[2914]: W0710 00:17:41.308803 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.321500 kubelet[2914]: E0710 00:17:41.308811 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.321675 kubelet[2914]: E0710 00:17:41.309062 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.321675 kubelet[2914]: W0710 00:17:41.309068 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.321675 kubelet[2914]: E0710 00:17:41.309074 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.321675 kubelet[2914]: E0710 00:17:41.309204 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.321675 kubelet[2914]: W0710 00:17:41.309210 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.321675 kubelet[2914]: E0710 00:17:41.309217 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.321675 kubelet[2914]: E0710 00:17:41.309341 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.321675 kubelet[2914]: W0710 00:17:41.309346 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.321675 kubelet[2914]: E0710 00:17:41.309360 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.321833 kubelet[2914]: I0710 00:17:41.309536 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkxnd\" (UniqueName: \"kubernetes.io/projected/85f38ee8-8868-4fd3-804f-d03ca4c958a9-kube-api-access-kkxnd\") pod \"csi-node-driver-pblgr\" (UID: \"85f38ee8-8868-4fd3-804f-d03ca4c958a9\") " pod="calico-system/csi-node-driver-pblgr" Jul 10 00:17:41.321833 kubelet[2914]: E0710 00:17:41.309603 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.321833 kubelet[2914]: W0710 00:17:41.309608 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.321833 kubelet[2914]: E0710 00:17:41.309613 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.321833 kubelet[2914]: E0710 00:17:41.309712 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.321833 kubelet[2914]: W0710 00:17:41.309717 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.321833 kubelet[2914]: E0710 00:17:41.309722 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.410497 kubelet[2914]: E0710 00:17:41.410473 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.410497 kubelet[2914]: W0710 00:17:41.410490 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.410497 kubelet[2914]: E0710 00:17:41.410504 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.410707 kubelet[2914]: E0710 00:17:41.410618 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.410707 kubelet[2914]: W0710 00:17:41.410623 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.410707 kubelet[2914]: E0710 00:17:41.410633 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.410832 kubelet[2914]: E0710 00:17:41.410721 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.410832 kubelet[2914]: W0710 00:17:41.410727 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.410832 kubelet[2914]: E0710 00:17:41.410747 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.410832 kubelet[2914]: E0710 00:17:41.410839 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.410999 kubelet[2914]: W0710 00:17:41.410844 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.410999 kubelet[2914]: E0710 00:17:41.410852 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.429414 kubelet[2914]: E0710 00:17:41.411126 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.429414 kubelet[2914]: W0710 00:17:41.411135 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.429414 kubelet[2914]: E0710 00:17:41.411149 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.429414 kubelet[2914]: E0710 00:17:41.411244 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.429414 kubelet[2914]: W0710 00:17:41.411250 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.429414 kubelet[2914]: E0710 00:17:41.411259 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.429414 kubelet[2914]: E0710 00:17:41.411355 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.429414 kubelet[2914]: W0710 00:17:41.411360 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.429414 kubelet[2914]: E0710 00:17:41.411369 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.429414 kubelet[2914]: E0710 00:17:41.411477 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.429676 kubelet[2914]: W0710 00:17:41.411483 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.429676 kubelet[2914]: E0710 00:17:41.411491 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.429676 kubelet[2914]: E0710 00:17:41.411607 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.429676 kubelet[2914]: W0710 00:17:41.411612 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.429676 kubelet[2914]: E0710 00:17:41.411621 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.429676 kubelet[2914]: E0710 00:17:41.411720 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.429676 kubelet[2914]: W0710 00:17:41.411726 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.429676 kubelet[2914]: E0710 00:17:41.411736 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.429676 kubelet[2914]: E0710 00:17:41.411834 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.429676 kubelet[2914]: W0710 00:17:41.411841 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.445291 kubelet[2914]: E0710 00:17:41.411849 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.445291 kubelet[2914]: E0710 00:17:41.411939 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.445291 kubelet[2914]: W0710 00:17:41.411944 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.445291 kubelet[2914]: E0710 00:17:41.411959 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.445291 kubelet[2914]: E0710 00:17:41.412030 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.445291 kubelet[2914]: W0710 00:17:41.412035 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.445291 kubelet[2914]: E0710 00:17:41.412056 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.445291 kubelet[2914]: E0710 00:17:41.412117 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.445291 kubelet[2914]: W0710 00:17:41.412123 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.445291 kubelet[2914]: E0710 00:17:41.412139 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.445505 kubelet[2914]: E0710 00:17:41.412213 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.445505 kubelet[2914]: W0710 00:17:41.412220 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.445505 kubelet[2914]: E0710 00:17:41.412229 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.445505 kubelet[2914]: E0710 00:17:41.412348 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.445505 kubelet[2914]: W0710 00:17:41.412355 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.445505 kubelet[2914]: E0710 00:17:41.412367 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.445505 kubelet[2914]: E0710 00:17:41.412481 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.445505 kubelet[2914]: W0710 00:17:41.412487 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.445505 kubelet[2914]: E0710 00:17:41.412497 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.445505 kubelet[2914]: E0710 00:17:41.412596 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.445686 kubelet[2914]: W0710 00:17:41.412602 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.445686 kubelet[2914]: E0710 00:17:41.412611 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.445686 kubelet[2914]: E0710 00:17:41.412713 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.445686 kubelet[2914]: W0710 00:17:41.412719 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.445686 kubelet[2914]: E0710 00:17:41.412727 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.445686 kubelet[2914]: E0710 00:17:41.412843 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.445686 kubelet[2914]: W0710 00:17:41.412851 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.445686 kubelet[2914]: E0710 00:17:41.412863 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.445686 kubelet[2914]: E0710 00:17:41.412945 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.445686 kubelet[2914]: W0710 00:17:41.412951 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.445866 kubelet[2914]: E0710 00:17:41.412958 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.445866 kubelet[2914]: E0710 00:17:41.413113 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.445866 kubelet[2914]: W0710 00:17:41.413119 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.445866 kubelet[2914]: E0710 00:17:41.413134 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.445866 kubelet[2914]: E0710 00:17:41.413344 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.445866 kubelet[2914]: W0710 00:17:41.413368 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.445866 kubelet[2914]: E0710 00:17:41.413392 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.445866 kubelet[2914]: E0710 00:17:41.413733 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.445866 kubelet[2914]: W0710 00:17:41.413740 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.445866 kubelet[2914]: E0710 00:17:41.413746 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.446048 kubelet[2914]: E0710 00:17:41.413941 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.446048 kubelet[2914]: W0710 00:17:41.413948 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.446048 kubelet[2914]: E0710 00:17:41.413954 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:41.452282 kubelet[2914]: E0710 00:17:41.452217 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:41.452282 kubelet[2914]: W0710 00:17:41.452231 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:41.452282 kubelet[2914]: E0710 00:17:41.452243 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:41.979119 containerd[1618]: time="2025-07-10T00:17:41.979066358Z" level=info msg="connecting to shim ce7bece0134951dab1aaa15d293f84762954483ea1bfab690ddfdb05dce4e7fe" address="unix:///run/containerd/s/9966c4c2668afc44ff1de6f0b8fb3c57720fb16d01541cc233bd95e82a951583" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:17:41.997583 systemd[1]: Started cri-containerd-ce7bece0134951dab1aaa15d293f84762954483ea1bfab690ddfdb05dce4e7fe.scope - libcontainer container ce7bece0134951dab1aaa15d293f84762954483ea1bfab690ddfdb05dce4e7fe. 
Jul 10 00:17:42.032827 containerd[1618]: time="2025-07-10T00:17:42.032806775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g6gmj,Uid:ea196422-e456-4aa7-8855-978b471290d5,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce7bece0134951dab1aaa15d293f84762954483ea1bfab690ddfdb05dce4e7fe\"" Jul 10 00:17:42.907421 kubelet[2914]: E0710 00:17:42.907360 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pblgr" podUID="85f38ee8-8868-4fd3-804f-d03ca4c958a9" Jul 10 00:17:43.082791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3824949200.mount: Deactivated successfully. Jul 10 00:17:44.790893 containerd[1618]: time="2025-07-10T00:17:44.790528257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:17:44.802903 containerd[1618]: time="2025-07-10T00:17:44.802874220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 10 00:17:44.818529 containerd[1618]: time="2025-07-10T00:17:44.818494731Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:17:44.829454 containerd[1618]: time="2025-07-10T00:17:44.829403746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:17:44.829698 containerd[1618]: time="2025-07-10T00:17:44.829673693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.569564595s" Jul 10 00:17:44.829698 containerd[1618]: time="2025-07-10T00:17:44.829696686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 10 00:17:44.830802 containerd[1618]: time="2025-07-10T00:17:44.830780854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 10 00:17:44.848555 containerd[1618]: time="2025-07-10T00:17:44.848257974Z" level=info msg="CreateContainer within sandbox \"8b34190aac7297d943a69c71bc72183947aed2396b5c21d447e0f2b3fb2f6787\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 10 00:17:44.880302 containerd[1618]: time="2025-07-10T00:17:44.879650723Z" level=info msg="Container 024bc0d8604d337fcfe44dcb2a348ab3e03ceb2acec82f56b29ef60dee430cf8: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:17:44.892518 containerd[1618]: time="2025-07-10T00:17:44.892435935Z" level=info msg="CreateContainer within sandbox \"8b34190aac7297d943a69c71bc72183947aed2396b5c21d447e0f2b3fb2f6787\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"024bc0d8604d337fcfe44dcb2a348ab3e03ceb2acec82f56b29ef60dee430cf8\"" Jul 10 00:17:44.893219 containerd[1618]: time="2025-07-10T00:17:44.893196951Z" level=info msg="StartContainer for \"024bc0d8604d337fcfe44dcb2a348ab3e03ceb2acec82f56b29ef60dee430cf8\"" Jul 10 00:17:44.894048 containerd[1618]: time="2025-07-10T00:17:44.894025263Z" level=info msg="connecting to shim 024bc0d8604d337fcfe44dcb2a348ab3e03ceb2acec82f56b29ef60dee430cf8" address="unix:///run/containerd/s/941c93bb650d6a6f05d9c8f461c2a13104cf954342b5f5266a9deff5335756bb" protocol=ttrpc version=3 Jul 10 
00:17:44.907782 kubelet[2914]: E0710 00:17:44.907738 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pblgr" podUID="85f38ee8-8868-4fd3-804f-d03ca4c958a9" Jul 10 00:17:44.930620 systemd[1]: Started cri-containerd-024bc0d8604d337fcfe44dcb2a348ab3e03ceb2acec82f56b29ef60dee430cf8.scope - libcontainer container 024bc0d8604d337fcfe44dcb2a348ab3e03ceb2acec82f56b29ef60dee430cf8. Jul 10 00:17:44.989448 containerd[1618]: time="2025-07-10T00:17:44.989367135Z" level=info msg="StartContainer for \"024bc0d8604d337fcfe44dcb2a348ab3e03ceb2acec82f56b29ef60dee430cf8\" returns successfully" Jul 10 00:17:45.995849 kubelet[2914]: I0710 00:17:45.995811 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-575c59d687-hhdx2" podStartSLOduration=2.425488528 podStartE2EDuration="5.995799974s" podCreationTimestamp="2025-07-10 00:17:40 +0000 UTC" firstStartedPulling="2025-07-10 00:17:41.259979365 +0000 UTC m=+16.422996361" lastFinishedPulling="2025-07-10 00:17:44.830290811 +0000 UTC m=+19.993307807" observedRunningTime="2025-07-10 00:17:45.995418047 +0000 UTC m=+21.158435052" watchObservedRunningTime="2025-07-10 00:17:45.995799974 +0000 UTC m=+21.158816976" Jul 10 00:17:46.030930 kubelet[2914]: E0710 00:17:46.030896 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.030930 kubelet[2914]: W0710 00:17:46.030920 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.031086 kubelet[2914]: E0710 00:17:46.030961 2914 plugins.go:695] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.031144 kubelet[2914]: E0710 00:17:46.031118 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.031144 kubelet[2914]: W0710 00:17:46.031134 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.031203 kubelet[2914]: E0710 00:17:46.031152 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.031294 kubelet[2914]: E0710 00:17:46.031271 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.031294 kubelet[2914]: W0710 00:17:46.031283 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.031294 kubelet[2914]: E0710 00:17:46.031291 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:46.031494 kubelet[2914]: E0710 00:17:46.031459 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.031494 kubelet[2914]: W0710 00:17:46.031467 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.031494 kubelet[2914]: E0710 00:17:46.031479 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.031609 kubelet[2914]: E0710 00:17:46.031592 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.031609 kubelet[2914]: W0710 00:17:46.031601 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.031674 kubelet[2914]: E0710 00:17:46.031622 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:46.031762 kubelet[2914]: E0710 00:17:46.031739 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.031784 kubelet[2914]: W0710 00:17:46.031756 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.031784 kubelet[2914]: E0710 00:17:46.031770 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.031919 kubelet[2914]: E0710 00:17:46.031911 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.031945 kubelet[2914]: W0710 00:17:46.031919 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.031945 kubelet[2914]: E0710 00:17:46.031933 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:46.032069 kubelet[2914]: E0710 00:17:46.032057 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.032069 kubelet[2914]: W0710 00:17:46.032065 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.032069 kubelet[2914]: E0710 00:17:46.032074 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.032211 kubelet[2914]: E0710 00:17:46.032198 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.032245 kubelet[2914]: W0710 00:17:46.032212 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.032245 kubelet[2914]: E0710 00:17:46.032217 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:46.032363 kubelet[2914]: E0710 00:17:46.032342 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.032401 kubelet[2914]: W0710 00:17:46.032366 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.032401 kubelet[2914]: E0710 00:17:46.032372 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.032487 kubelet[2914]: E0710 00:17:46.032477 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.032487 kubelet[2914]: W0710 00:17:46.032482 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.032487 kubelet[2914]: E0710 00:17:46.032486 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:46.032619 kubelet[2914]: E0710 00:17:46.032604 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.032619 kubelet[2914]: W0710 00:17:46.032612 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.032619 kubelet[2914]: E0710 00:17:46.032616 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.032779 kubelet[2914]: E0710 00:17:46.032720 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.032779 kubelet[2914]: W0710 00:17:46.032731 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.032779 kubelet[2914]: E0710 00:17:46.032739 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:46.032869 kubelet[2914]: E0710 00:17:46.032863 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.032869 kubelet[2914]: W0710 00:17:46.032868 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.032922 kubelet[2914]: E0710 00:17:46.032873 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.032952 kubelet[2914]: E0710 00:17:46.032946 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.032972 kubelet[2914]: W0710 00:17:46.032952 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.032972 kubelet[2914]: E0710 00:17:46.032956 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:46.043494 kubelet[2914]: E0710 00:17:46.043386 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.043494 kubelet[2914]: W0710 00:17:46.043402 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.043494 kubelet[2914]: E0710 00:17:46.043415 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.043703 kubelet[2914]: E0710 00:17:46.043582 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.043703 kubelet[2914]: W0710 00:17:46.043588 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.043703 kubelet[2914]: E0710 00:17:46.043597 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:46.043703 kubelet[2914]: E0710 00:17:46.043699 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.043703 kubelet[2914]: W0710 00:17:46.043704 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.044032 kubelet[2914]: E0710 00:17:46.043709 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.044167 kubelet[2914]: E0710 00:17:46.044144 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.044167 kubelet[2914]: W0710 00:17:46.044155 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.051834 kubelet[2914]: E0710 00:17:46.044285 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:46.051834 kubelet[2914]: E0710 00:17:46.044401 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.051834 kubelet[2914]: W0710 00:17:46.044407 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.051834 kubelet[2914]: E0710 00:17:46.044417 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.051834 kubelet[2914]: E0710 00:17:46.044532 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.051834 kubelet[2914]: W0710 00:17:46.044537 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.051834 kubelet[2914]: E0710 00:17:46.044546 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:46.051834 kubelet[2914]: E0710 00:17:46.044670 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.051834 kubelet[2914]: W0710 00:17:46.044675 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.051834 kubelet[2914]: E0710 00:17:46.044694 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.052061 kubelet[2914]: E0710 00:17:46.044774 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.052061 kubelet[2914]: W0710 00:17:46.044779 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.052061 kubelet[2914]: E0710 00:17:46.044804 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:46.052061 kubelet[2914]: E0710 00:17:46.044857 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.052061 kubelet[2914]: W0710 00:17:46.044863 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.052061 kubelet[2914]: E0710 00:17:46.044872 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.052061 kubelet[2914]: E0710 00:17:46.044995 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.052061 kubelet[2914]: W0710 00:17:46.045001 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.052061 kubelet[2914]: E0710 00:17:46.045012 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:46.052061 kubelet[2914]: E0710 00:17:46.045115 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.052289 kubelet[2914]: W0710 00:17:46.045121 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.052289 kubelet[2914]: E0710 00:17:46.045132 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.052289 kubelet[2914]: E0710 00:17:46.045256 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.052289 kubelet[2914]: W0710 00:17:46.045264 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.052289 kubelet[2914]: E0710 00:17:46.045276 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:46.052289 kubelet[2914]: E0710 00:17:46.045383 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.052289 kubelet[2914]: W0710 00:17:46.045388 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.052289 kubelet[2914]: E0710 00:17:46.045398 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.052289 kubelet[2914]: E0710 00:17:46.045537 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.052289 kubelet[2914]: W0710 00:17:46.045544 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.052571 kubelet[2914]: E0710 00:17:46.045555 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:46.052571 kubelet[2914]: E0710 00:17:46.045727 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.052571 kubelet[2914]: W0710 00:17:46.045732 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.052571 kubelet[2914]: E0710 00:17:46.045749 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.052571 kubelet[2914]: E0710 00:17:46.045859 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.052571 kubelet[2914]: W0710 00:17:46.045864 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.052571 kubelet[2914]: E0710 00:17:46.045887 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:46.052571 kubelet[2914]: E0710 00:17:46.046098 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.052571 kubelet[2914]: W0710 00:17:46.046109 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.052571 kubelet[2914]: E0710 00:17:46.046118 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:17:46.052785 kubelet[2914]: E0710 00:17:46.046229 2914 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:17:46.052785 kubelet[2914]: W0710 00:17:46.046233 2914 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:17:46.052785 kubelet[2914]: E0710 00:17:46.046238 2914 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:17:46.241710 containerd[1618]: time="2025-07-10T00:17:46.241494525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:17:46.242379 containerd[1618]: time="2025-07-10T00:17:46.242359959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 10 00:17:46.242634 containerd[1618]: time="2025-07-10T00:17:46.242595727Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:17:46.244031 containerd[1618]: time="2025-07-10T00:17:46.244009779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:17:46.244815 containerd[1618]: time="2025-07-10T00:17:46.244746185Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.413937141s" Jul 10 00:17:46.244815 containerd[1618]: time="2025-07-10T00:17:46.244763768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 10 00:17:46.247485 containerd[1618]: time="2025-07-10T00:17:46.247172932Z" level=info msg="CreateContainer within sandbox \"ce7bece0134951dab1aaa15d293f84762954483ea1bfab690ddfdb05dce4e7fe\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 10 00:17:46.282033 containerd[1618]: time="2025-07-10T00:17:46.281937055Z" level=info msg="Container d6d351fe6c8be77385517371d7616a3dbb548e28ff823135d92af75244bc604d: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:17:46.285548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3266124777.mount: Deactivated successfully. Jul 10 00:17:46.323228 containerd[1618]: time="2025-07-10T00:17:46.323102898Z" level=info msg="CreateContainer within sandbox \"ce7bece0134951dab1aaa15d293f84762954483ea1bfab690ddfdb05dce4e7fe\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d6d351fe6c8be77385517371d7616a3dbb548e28ff823135d92af75244bc604d\"" Jul 10 00:17:46.323833 containerd[1618]: time="2025-07-10T00:17:46.323813414Z" level=info msg="StartContainer for \"d6d351fe6c8be77385517371d7616a3dbb548e28ff823135d92af75244bc604d\"" Jul 10 00:17:46.325136 containerd[1618]: time="2025-07-10T00:17:46.325117980Z" level=info msg="connecting to shim d6d351fe6c8be77385517371d7616a3dbb548e28ff823135d92af75244bc604d" address="unix:///run/containerd/s/9966c4c2668afc44ff1de6f0b8fb3c57720fb16d01541cc233bd95e82a951583" protocol=ttrpc version=3 Jul 10 00:17:46.344587 systemd[1]: Started cri-containerd-d6d351fe6c8be77385517371d7616a3dbb548e28ff823135d92af75244bc604d.scope - libcontainer container d6d351fe6c8be77385517371d7616a3dbb548e28ff823135d92af75244bc604d. Jul 10 00:17:46.395170 containerd[1618]: time="2025-07-10T00:17:46.395143059Z" level=info msg="StartContainer for \"d6d351fe6c8be77385517371d7616a3dbb548e28ff823135d92af75244bc604d\" returns successfully" Jul 10 00:17:46.396700 systemd[1]: cri-containerd-d6d351fe6c8be77385517371d7616a3dbb548e28ff823135d92af75244bc604d.scope: Deactivated successfully. 
Jul 10 00:17:46.441858 containerd[1618]: time="2025-07-10T00:17:46.441630546Z" level=info msg="received exit event container_id:\"d6d351fe6c8be77385517371d7616a3dbb548e28ff823135d92af75244bc604d\" id:\"d6d351fe6c8be77385517371d7616a3dbb548e28ff823135d92af75244bc604d\" pid:3575 exited_at:{seconds:1752106666 nanos:398621763}" Jul 10 00:17:46.443310 containerd[1618]: time="2025-07-10T00:17:46.443192569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6d351fe6c8be77385517371d7616a3dbb548e28ff823135d92af75244bc604d\" id:\"d6d351fe6c8be77385517371d7616a3dbb548e28ff823135d92af75244bc604d\" pid:3575 exited_at:{seconds:1752106666 nanos:398621763}" Jul 10 00:17:46.456637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6d351fe6c8be77385517371d7616a3dbb548e28ff823135d92af75244bc604d-rootfs.mount: Deactivated successfully. Jul 10 00:17:46.907954 kubelet[2914]: E0710 00:17:46.907393 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pblgr" podUID="85f38ee8-8868-4fd3-804f-d03ca4c958a9" Jul 10 00:17:46.973739 kubelet[2914]: I0710 00:17:46.973720 2914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:17:46.974997 containerd[1618]: time="2025-07-10T00:17:46.974793864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 10 00:17:48.907459 kubelet[2914]: E0710 00:17:48.907398 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pblgr" podUID="85f38ee8-8868-4fd3-804f-d03ca4c958a9" Jul 10 00:17:50.106571 containerd[1618]: time="2025-07-10T00:17:50.106521032Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:17:50.107612 containerd[1618]: time="2025-07-10T00:17:50.107127271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 10 00:17:50.108292 containerd[1618]: time="2025-07-10T00:17:50.108122272Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:17:50.109431 containerd[1618]: time="2025-07-10T00:17:50.109409778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:17:50.110107 containerd[1618]: time="2025-07-10T00:17:50.110085950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.135018138s" Jul 10 00:17:50.110154 containerd[1618]: time="2025-07-10T00:17:50.110107373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 10 00:17:50.112202 containerd[1618]: time="2025-07-10T00:17:50.112164931Z" level=info msg="CreateContainer within sandbox \"ce7bece0134951dab1aaa15d293f84762954483ea1bfab690ddfdb05dce4e7fe\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 10 00:17:50.121040 containerd[1618]: time="2025-07-10T00:17:50.120589574Z" level=info msg="Container a42e353b629df7b05a806a3f2a240562280dafa501e1f45f48240b9cffc19868: CDI devices from CRI 
Config.CDIDevices: []" Jul 10 00:17:50.137543 containerd[1618]: time="2025-07-10T00:17:50.137483086Z" level=info msg="CreateContainer within sandbox \"ce7bece0134951dab1aaa15d293f84762954483ea1bfab690ddfdb05dce4e7fe\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a42e353b629df7b05a806a3f2a240562280dafa501e1f45f48240b9cffc19868\"" Jul 10 00:17:50.138362 containerd[1618]: time="2025-07-10T00:17:50.138188200Z" level=info msg="StartContainer for \"a42e353b629df7b05a806a3f2a240562280dafa501e1f45f48240b9cffc19868\"" Jul 10 00:17:50.139458 containerd[1618]: time="2025-07-10T00:17:50.139417631Z" level=info msg="connecting to shim a42e353b629df7b05a806a3f2a240562280dafa501e1f45f48240b9cffc19868" address="unix:///run/containerd/s/9966c4c2668afc44ff1de6f0b8fb3c57720fb16d01541cc233bd95e82a951583" protocol=ttrpc version=3 Jul 10 00:17:50.160611 systemd[1]: Started cri-containerd-a42e353b629df7b05a806a3f2a240562280dafa501e1f45f48240b9cffc19868.scope - libcontainer container a42e353b629df7b05a806a3f2a240562280dafa501e1f45f48240b9cffc19868. Jul 10 00:17:50.249327 containerd[1618]: time="2025-07-10T00:17:50.249303550Z" level=info msg="StartContainer for \"a42e353b629df7b05a806a3f2a240562280dafa501e1f45f48240b9cffc19868\" returns successfully" Jul 10 00:17:50.907938 kubelet[2914]: E0710 00:17:50.907673 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pblgr" podUID="85f38ee8-8868-4fd3-804f-d03ca4c958a9" Jul 10 00:17:52.137686 systemd[1]: cri-containerd-a42e353b629df7b05a806a3f2a240562280dafa501e1f45f48240b9cffc19868.scope: Deactivated successfully. 
Jul 10 00:17:52.137882 systemd[1]: cri-containerd-a42e353b629df7b05a806a3f2a240562280dafa501e1f45f48240b9cffc19868.scope: Consumed 325ms CPU time, 159.5M memory peak, 883K read from disk, 171.2M written to disk. Jul 10 00:17:52.239308 containerd[1618]: time="2025-07-10T00:17:52.239261976Z" level=info msg="received exit event container_id:\"a42e353b629df7b05a806a3f2a240562280dafa501e1f45f48240b9cffc19868\" id:\"a42e353b629df7b05a806a3f2a240562280dafa501e1f45f48240b9cffc19868\" pid:3635 exited_at:{seconds:1752106672 nanos:239101689}" Jul 10 00:17:52.245603 containerd[1618]: time="2025-07-10T00:17:52.245574604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a42e353b629df7b05a806a3f2a240562280dafa501e1f45f48240b9cffc19868\" id:\"a42e353b629df7b05a806a3f2a240562280dafa501e1f45f48240b9cffc19868\" pid:3635 exited_at:{seconds:1752106672 nanos:239101689}" Jul 10 00:17:52.251637 kubelet[2914]: I0710 00:17:52.251341 2914 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 00:17:52.363681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a42e353b629df7b05a806a3f2a240562280dafa501e1f45f48240b9cffc19868-rootfs.mount: Deactivated successfully. Jul 10 00:17:52.451867 systemd[1]: Created slice kubepods-burstable-pod06016ed6_c265_4984_9437_f30edfbeb627.slice - libcontainer container kubepods-burstable-pod06016ed6_c265_4984_9437_f30edfbeb627.slice. Jul 10 00:17:52.459298 systemd[1]: Created slice kubepods-burstable-pod355aa274_4760_4539_8c0e_5af0b659ef62.slice - libcontainer container kubepods-burstable-pod355aa274_4760_4539_8c0e_5af0b659ef62.slice. Jul 10 00:17:52.476851 systemd[1]: Created slice kubepods-besteffort-pode6130889_62e9_44d8_9fa0_8fe4c4301a67.slice - libcontainer container kubepods-besteffort-pode6130889_62e9_44d8_9fa0_8fe4c4301a67.slice. 
Jul 10 00:17:52.482330 systemd[1]: Created slice kubepods-besteffort-podfb6624e5_1abf_4638_86f1_56b2a2c177a9.slice - libcontainer container kubepods-besteffort-podfb6624e5_1abf_4638_86f1_56b2a2c177a9.slice. Jul 10 00:17:52.489593 systemd[1]: Created slice kubepods-besteffort-podc6dc13b4_2d80_4cf6_a904_6a6c8f0d6444.slice - libcontainer container kubepods-besteffort-podc6dc13b4_2d80_4cf6_a904_6a6c8f0d6444.slice. Jul 10 00:17:52.496683 systemd[1]: Created slice kubepods-besteffort-pod89b075c8_08cd_4903_a32e_138339255054.slice - libcontainer container kubepods-besteffort-pod89b075c8_08cd_4903_a32e_138339255054.slice. Jul 10 00:17:52.499386 kubelet[2914]: I0710 00:17:52.498318 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5k48\" (UniqueName: \"kubernetes.io/projected/4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9-kube-api-access-d5k48\") pod \"calico-apiserver-5bcf4cfc7f-7945z\" (UID: \"4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9\") " pod="calico-apiserver/calico-apiserver-5bcf4cfc7f-7945z" Jul 10 00:17:52.499386 kubelet[2914]: I0710 00:17:52.498343 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06016ed6-c265-4984-9437-f30edfbeb627-config-volume\") pod \"coredns-668d6bf9bc-ktmfr\" (UID: \"06016ed6-c265-4984-9437-f30edfbeb627\") " pod="kube-system/coredns-668d6bf9bc-ktmfr" Jul 10 00:17:52.499386 kubelet[2914]: I0710 00:17:52.498356 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqhtk\" (UniqueName: \"kubernetes.io/projected/e075a451-649b-4d0c-8fd9-f058337570ca-kube-api-access-cqhtk\") pod \"goldmane-768f4c5c69-lwqb8\" (UID: \"e075a451-649b-4d0c-8fd9-f058337570ca\") " pod="calico-system/goldmane-768f4c5c69-lwqb8" Jul 10 00:17:52.499386 kubelet[2914]: I0710 00:17:52.498369 2914 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dm7d\" (UniqueName: \"kubernetes.io/projected/c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444-kube-api-access-5dm7d\") pod \"whisker-7c96fdd7fc-944pp\" (UID: \"c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444\") " pod="calico-system/whisker-7c96fdd7fc-944pp" Jul 10 00:17:52.499386 kubelet[2914]: I0710 00:17:52.498380 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e075a451-649b-4d0c-8fd9-f058337570ca-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-lwqb8\" (UID: \"e075a451-649b-4d0c-8fd9-f058337570ca\") " pod="calico-system/goldmane-768f4c5c69-lwqb8" Jul 10 00:17:52.499548 kubelet[2914]: I0710 00:17:52.498392 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6130889-62e9-44d8-9fa0-8fe4c4301a67-tigera-ca-bundle\") pod \"calico-kube-controllers-c6bccbf5d-bxtv5\" (UID: \"e6130889-62e9-44d8-9fa0-8fe4c4301a67\") " pod="calico-system/calico-kube-controllers-c6bccbf5d-bxtv5" Jul 10 00:17:52.499548 kubelet[2914]: I0710 00:17:52.498401 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e075a451-649b-4d0c-8fd9-f058337570ca-config\") pod \"goldmane-768f4c5c69-lwqb8\" (UID: \"e075a451-649b-4d0c-8fd9-f058337570ca\") " pod="calico-system/goldmane-768f4c5c69-lwqb8" Jul 10 00:17:52.499548 kubelet[2914]: I0710 00:17:52.498414 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fb6624e5-1abf-4638-86f1-56b2a2c177a9-calico-apiserver-certs\") pod \"calico-apiserver-c5dc87db9-p7g8k\" (UID: \"fb6624e5-1abf-4638-86f1-56b2a2c177a9\") " pod="calico-apiserver/calico-apiserver-c5dc87db9-p7g8k" Jul 10 
00:17:52.499548 kubelet[2914]: I0710 00:17:52.498424 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/355aa274-4760-4539-8c0e-5af0b659ef62-config-volume\") pod \"coredns-668d6bf9bc-wl5zx\" (UID: \"355aa274-4760-4539-8c0e-5af0b659ef62\") " pod="kube-system/coredns-668d6bf9bc-wl5zx" Jul 10 00:17:52.499548 kubelet[2914]: I0710 00:17:52.498434 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvxw8\" (UniqueName: \"kubernetes.io/projected/89b075c8-08cd-4903-a32e-138339255054-kube-api-access-nvxw8\") pod \"calico-apiserver-5bcf4cfc7f-j2vlz\" (UID: \"89b075c8-08cd-4903-a32e-138339255054\") " pod="calico-apiserver/calico-apiserver-5bcf4cfc7f-j2vlz" Jul 10 00:17:52.505724 kubelet[2914]: I0710 00:17:52.498454 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rtk8\" (UniqueName: \"kubernetes.io/projected/06016ed6-c265-4984-9437-f30edfbeb627-kube-api-access-6rtk8\") pod \"coredns-668d6bf9bc-ktmfr\" (UID: \"06016ed6-c265-4984-9437-f30edfbeb627\") " pod="kube-system/coredns-668d6bf9bc-ktmfr" Jul 10 00:17:52.505724 kubelet[2914]: I0710 00:17:52.498463 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9-calico-apiserver-certs\") pod \"calico-apiserver-5bcf4cfc7f-7945z\" (UID: \"4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9\") " pod="calico-apiserver/calico-apiserver-5bcf4cfc7f-7945z" Jul 10 00:17:52.505724 kubelet[2914]: I0710 00:17:52.498472 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444-whisker-ca-bundle\") pod \"whisker-7c96fdd7fc-944pp\" (UID: 
\"c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444\") " pod="calico-system/whisker-7c96fdd7fc-944pp" Jul 10 00:17:52.505724 kubelet[2914]: I0710 00:17:52.498482 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444-whisker-backend-key-pair\") pod \"whisker-7c96fdd7fc-944pp\" (UID: \"c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444\") " pod="calico-system/whisker-7c96fdd7fc-944pp" Jul 10 00:17:52.505724 kubelet[2914]: I0710 00:17:52.498495 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e075a451-649b-4d0c-8fd9-f058337570ca-goldmane-key-pair\") pod \"goldmane-768f4c5c69-lwqb8\" (UID: \"e075a451-649b-4d0c-8fd9-f058337570ca\") " pod="calico-system/goldmane-768f4c5c69-lwqb8" Jul 10 00:17:52.502477 systemd[1]: Created slice kubepods-besteffort-pod4136c2e0_77ff_4fe9_bf8f_cbf21d4df8c9.slice - libcontainer container kubepods-besteffort-pod4136c2e0_77ff_4fe9_bf8f_cbf21d4df8c9.slice. 
Jul 10 00:17:52.516237 kubelet[2914]: I0710 00:17:52.498530 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqzp7\" (UniqueName: \"kubernetes.io/projected/355aa274-4760-4539-8c0e-5af0b659ef62-kube-api-access-fqzp7\") pod \"coredns-668d6bf9bc-wl5zx\" (UID: \"355aa274-4760-4539-8c0e-5af0b659ef62\") " pod="kube-system/coredns-668d6bf9bc-wl5zx" Jul 10 00:17:52.516237 kubelet[2914]: I0710 00:17:52.498543 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6htrh\" (UniqueName: \"kubernetes.io/projected/fb6624e5-1abf-4638-86f1-56b2a2c177a9-kube-api-access-6htrh\") pod \"calico-apiserver-c5dc87db9-p7g8k\" (UID: \"fb6624e5-1abf-4638-86f1-56b2a2c177a9\") " pod="calico-apiserver/calico-apiserver-c5dc87db9-p7g8k" Jul 10 00:17:52.516237 kubelet[2914]: I0710 00:17:52.498553 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/89b075c8-08cd-4903-a32e-138339255054-calico-apiserver-certs\") pod \"calico-apiserver-5bcf4cfc7f-j2vlz\" (UID: \"89b075c8-08cd-4903-a32e-138339255054\") " pod="calico-apiserver/calico-apiserver-5bcf4cfc7f-j2vlz" Jul 10 00:17:52.516237 kubelet[2914]: I0710 00:17:52.498566 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr954\" (UniqueName: \"kubernetes.io/projected/e6130889-62e9-44d8-9fa0-8fe4c4301a67-kube-api-access-gr954\") pod \"calico-kube-controllers-c6bccbf5d-bxtv5\" (UID: \"e6130889-62e9-44d8-9fa0-8fe4c4301a67\") " pod="calico-system/calico-kube-controllers-c6bccbf5d-bxtv5" Jul 10 00:17:52.506392 systemd[1]: Created slice kubepods-besteffort-pode075a451_649b_4d0c_8fd9_f058337570ca.slice - libcontainer container kubepods-besteffort-pode075a451_649b_4d0c_8fd9_f058337570ca.slice. 
Jul 10 00:17:52.766106 containerd[1618]: time="2025-07-10T00:17:52.765888417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wl5zx,Uid:355aa274-4760-4539-8c0e-5af0b659ef62,Namespace:kube-system,Attempt:0,}" Jul 10 00:17:52.766801 containerd[1618]: time="2025-07-10T00:17:52.766615251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ktmfr,Uid:06016ed6-c265-4984-9437-f30edfbeb627,Namespace:kube-system,Attempt:0,}" Jul 10 00:17:52.793739 containerd[1618]: time="2025-07-10T00:17:52.793594279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c96fdd7fc-944pp,Uid:c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444,Namespace:calico-system,Attempt:0,}" Jul 10 00:17:52.793850 containerd[1618]: time="2025-07-10T00:17:52.793744299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6bccbf5d-bxtv5,Uid:e6130889-62e9-44d8-9fa0-8fe4c4301a67,Namespace:calico-system,Attempt:0,}" Jul 10 00:17:52.793933 containerd[1618]: time="2025-07-10T00:17:52.793921731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5dc87db9-p7g8k,Uid:fb6624e5-1abf-4638-86f1-56b2a2c177a9,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:17:52.803428 containerd[1618]: time="2025-07-10T00:17:52.803354773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcf4cfc7f-j2vlz,Uid:89b075c8-08cd-4903-a32e-138339255054,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:17:52.804670 containerd[1618]: time="2025-07-10T00:17:52.804582217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcf4cfc7f-7945z,Uid:4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:17:52.809364 containerd[1618]: time="2025-07-10T00:17:52.809315411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lwqb8,Uid:e075a451-649b-4d0c-8fd9-f058337570ca,Namespace:calico-system,Attempt:0,}" Jul 10 00:17:52.914689 
systemd[1]: Created slice kubepods-besteffort-pod85f38ee8_8868_4fd3_804f_d03ca4c958a9.slice - libcontainer container kubepods-besteffort-pod85f38ee8_8868_4fd3_804f_d03ca4c958a9.slice. Jul 10 00:17:52.916973 containerd[1618]: time="2025-07-10T00:17:52.916778405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pblgr,Uid:85f38ee8-8868-4fd3-804f-d03ca4c958a9,Namespace:calico-system,Attempt:0,}" Jul 10 00:17:52.998991 containerd[1618]: time="2025-07-10T00:17:52.998355274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 00:17:53.138727 containerd[1618]: time="2025-07-10T00:17:53.138576543Z" level=error msg="Failed to destroy network for sandbox \"5754ea6f096c4caac358232b6e4c86fd317281d622f0b44067586c55489b0ab2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.140951 containerd[1618]: time="2025-07-10T00:17:53.140422702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5dc87db9-p7g8k,Uid:fb6624e5-1abf-4638-86f1-56b2a2c177a9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5754ea6f096c4caac358232b6e4c86fd317281d622f0b44067586c55489b0ab2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.141742 kubelet[2914]: E0710 00:17:53.141542 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5754ea6f096c4caac358232b6e4c86fd317281d622f0b44067586c55489b0ab2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 
00:17:53.141742 kubelet[2914]: E0710 00:17:53.141605 2914 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5754ea6f096c4caac358232b6e4c86fd317281d622f0b44067586c55489b0ab2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5dc87db9-p7g8k" Jul 10 00:17:53.141742 kubelet[2914]: E0710 00:17:53.141620 2914 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5754ea6f096c4caac358232b6e4c86fd317281d622f0b44067586c55489b0ab2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5dc87db9-p7g8k" Jul 10 00:17:53.142693 kubelet[2914]: E0710 00:17:53.141653 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5dc87db9-p7g8k_calico-apiserver(fb6624e5-1abf-4638-86f1-56b2a2c177a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5dc87db9-p7g8k_calico-apiserver(fb6624e5-1abf-4638-86f1-56b2a2c177a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5754ea6f096c4caac358232b6e4c86fd317281d622f0b44067586c55489b0ab2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5dc87db9-p7g8k" podUID="fb6624e5-1abf-4638-86f1-56b2a2c177a9" Jul 10 00:17:53.146671 containerd[1618]: time="2025-07-10T00:17:53.146636258Z" level=error msg="Failed to destroy network for sandbox 
\"d1dd93db3201d5c7b7faa2c0f5d912c279947b2619d45dcedff14477fa2f7291\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.147606 containerd[1618]: time="2025-07-10T00:17:53.147581569Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6bccbf5d-bxtv5,Uid:e6130889-62e9-44d8-9fa0-8fe4c4301a67,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1dd93db3201d5c7b7faa2c0f5d912c279947b2619d45dcedff14477fa2f7291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.147943 kubelet[2914]: E0710 00:17:53.147911 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1dd93db3201d5c7b7faa2c0f5d912c279947b2619d45dcedff14477fa2f7291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.147994 kubelet[2914]: E0710 00:17:53.147958 2914 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1dd93db3201d5c7b7faa2c0f5d912c279947b2619d45dcedff14477fa2f7291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c6bccbf5d-bxtv5" Jul 10 00:17:53.147994 kubelet[2914]: E0710 00:17:53.147972 2914 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"d1dd93db3201d5c7b7faa2c0f5d912c279947b2619d45dcedff14477fa2f7291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c6bccbf5d-bxtv5" Jul 10 00:17:53.148101 kubelet[2914]: E0710 00:17:53.148082 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c6bccbf5d-bxtv5_calico-system(e6130889-62e9-44d8-9fa0-8fe4c4301a67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c6bccbf5d-bxtv5_calico-system(e6130889-62e9-44d8-9fa0-8fe4c4301a67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1dd93db3201d5c7b7faa2c0f5d912c279947b2619d45dcedff14477fa2f7291\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c6bccbf5d-bxtv5" podUID="e6130889-62e9-44d8-9fa0-8fe4c4301a67" Jul 10 00:17:53.150745 containerd[1618]: time="2025-07-10T00:17:53.150631169Z" level=error msg="Failed to destroy network for sandbox \"54b715b1dd9a39c72afdeb926c79758ce900456af87a4c83fe4d5e826605b227\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.151781 containerd[1618]: time="2025-07-10T00:17:53.151258058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wl5zx,Uid:355aa274-4760-4539-8c0e-5af0b659ef62,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54b715b1dd9a39c72afdeb926c79758ce900456af87a4c83fe4d5e826605b227\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.152720 kubelet[2914]: E0710 00:17:53.151952 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54b715b1dd9a39c72afdeb926c79758ce900456af87a4c83fe4d5e826605b227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.152720 kubelet[2914]: E0710 00:17:53.151988 2914 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54b715b1dd9a39c72afdeb926c79758ce900456af87a4c83fe4d5e826605b227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wl5zx" Jul 10 00:17:53.152720 kubelet[2914]: E0710 00:17:53.152001 2914 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54b715b1dd9a39c72afdeb926c79758ce900456af87a4c83fe4d5e826605b227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wl5zx" Jul 10 00:17:53.152803 kubelet[2914]: E0710 00:17:53.152033 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wl5zx_kube-system(355aa274-4760-4539-8c0e-5af0b659ef62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wl5zx_kube-system(355aa274-4760-4539-8c0e-5af0b659ef62)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"54b715b1dd9a39c72afdeb926c79758ce900456af87a4c83fe4d5e826605b227\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wl5zx" podUID="355aa274-4760-4539-8c0e-5af0b659ef62" Jul 10 00:17:53.157934 containerd[1618]: time="2025-07-10T00:17:53.157898356Z" level=error msg="Failed to destroy network for sandbox \"f746371d57cb3f0788096ab11b6820f148aeafdc97ae841d397aa36bd0c68d36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.158577 containerd[1618]: time="2025-07-10T00:17:53.158548465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c96fdd7fc-944pp,Uid:c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f746371d57cb3f0788096ab11b6820f148aeafdc97ae841d397aa36bd0c68d36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.158777 kubelet[2914]: E0710 00:17:53.158756 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f746371d57cb3f0788096ab11b6820f148aeafdc97ae841d397aa36bd0c68d36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.158819 kubelet[2914]: E0710 00:17:53.158788 2914 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f746371d57cb3f0788096ab11b6820f148aeafdc97ae841d397aa36bd0c68d36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c96fdd7fc-944pp" Jul 10 00:17:53.158819 kubelet[2914]: E0710 00:17:53.158801 2914 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f746371d57cb3f0788096ab11b6820f148aeafdc97ae841d397aa36bd0c68d36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c96fdd7fc-944pp" Jul 10 00:17:53.158880 kubelet[2914]: E0710 00:17:53.158827 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7c96fdd7fc-944pp_calico-system(c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7c96fdd7fc-944pp_calico-system(c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f746371d57cb3f0788096ab11b6820f148aeafdc97ae841d397aa36bd0c68d36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c96fdd7fc-944pp" podUID="c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444" Jul 10 00:17:53.159427 containerd[1618]: time="2025-07-10T00:17:53.159398638Z" level=error msg="Failed to destroy network for sandbox \"5315e3765db75514a6f04d18b83a2af39d95a910b3f8edd2c00ffd02e119f695\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 
00:17:53.160238 containerd[1618]: time="2025-07-10T00:17:53.159949045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pblgr,Uid:85f38ee8-8868-4fd3-804f-d03ca4c958a9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5315e3765db75514a6f04d18b83a2af39d95a910b3f8edd2c00ffd02e119f695\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.160372 kubelet[2914]: E0710 00:17:53.160345 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5315e3765db75514a6f04d18b83a2af39d95a910b3f8edd2c00ffd02e119f695\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.160413 kubelet[2914]: E0710 00:17:53.160379 2914 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5315e3765db75514a6f04d18b83a2af39d95a910b3f8edd2c00ffd02e119f695\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pblgr" Jul 10 00:17:53.160413 kubelet[2914]: E0710 00:17:53.160391 2914 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5315e3765db75514a6f04d18b83a2af39d95a910b3f8edd2c00ffd02e119f695\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pblgr" Jul 10 
00:17:53.160744 kubelet[2914]: E0710 00:17:53.160538 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pblgr_calico-system(85f38ee8-8868-4fd3-804f-d03ca4c958a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pblgr_calico-system(85f38ee8-8868-4fd3-804f-d03ca4c958a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5315e3765db75514a6f04d18b83a2af39d95a910b3f8edd2c00ffd02e119f695\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pblgr" podUID="85f38ee8-8868-4fd3-804f-d03ca4c958a9" Jul 10 00:17:53.161398 containerd[1618]: time="2025-07-10T00:17:53.161349120Z" level=error msg="Failed to destroy network for sandbox \"7e3d90b61f6e89dcf411742b9988f8ea3a3e9e9e69b616fc3df68e14efc13550\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.161535 containerd[1618]: time="2025-07-10T00:17:53.161522016Z" level=error msg="Failed to destroy network for sandbox \"d7885d5a73ac0d505db590cfea42f80a75eb71c2ebb2f0db15ba2f33490bec71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.162210 containerd[1618]: time="2025-07-10T00:17:53.162157105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lwqb8,Uid:e075a451-649b-4d0c-8fd9-f058337570ca,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7885d5a73ac0d505db590cfea42f80a75eb71c2ebb2f0db15ba2f33490bec71\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.162644 kubelet[2914]: E0710 00:17:53.162491 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7885d5a73ac0d505db590cfea42f80a75eb71c2ebb2f0db15ba2f33490bec71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.162644 kubelet[2914]: E0710 00:17:53.162513 2914 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7885d5a73ac0d505db590cfea42f80a75eb71c2ebb2f0db15ba2f33490bec71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-lwqb8" Jul 10 00:17:53.162644 kubelet[2914]: E0710 00:17:53.162524 2914 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7885d5a73ac0d505db590cfea42f80a75eb71c2ebb2f0db15ba2f33490bec71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-lwqb8" Jul 10 00:17:53.162720 kubelet[2914]: E0710 00:17:53.162546 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-lwqb8_calico-system(e075a451-649b-4d0c-8fd9-f058337570ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-lwqb8_calico-system(e075a451-649b-4d0c-8fd9-f058337570ca)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"d7885d5a73ac0d505db590cfea42f80a75eb71c2ebb2f0db15ba2f33490bec71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-lwqb8" podUID="e075a451-649b-4d0c-8fd9-f058337570ca" Jul 10 00:17:53.163213 containerd[1618]: time="2025-07-10T00:17:53.163158661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcf4cfc7f-7945z,Uid:4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3d90b61f6e89dcf411742b9988f8ea3a3e9e9e69b616fc3df68e14efc13550\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.163568 kubelet[2914]: E0710 00:17:53.163296 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3d90b61f6e89dcf411742b9988f8ea3a3e9e9e69b616fc3df68e14efc13550\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.163568 kubelet[2914]: E0710 00:17:53.163336 2914 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3d90b61f6e89dcf411742b9988f8ea3a3e9e9e69b616fc3df68e14efc13550\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bcf4cfc7f-7945z" Jul 10 00:17:53.163568 kubelet[2914]: E0710 00:17:53.163349 2914 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3d90b61f6e89dcf411742b9988f8ea3a3e9e9e69b616fc3df68e14efc13550\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bcf4cfc7f-7945z" Jul 10 00:17:53.163705 kubelet[2914]: E0710 00:17:53.163403 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bcf4cfc7f-7945z_calico-apiserver(4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bcf4cfc7f-7945z_calico-apiserver(4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e3d90b61f6e89dcf411742b9988f8ea3a3e9e9e69b616fc3df68e14efc13550\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bcf4cfc7f-7945z" podUID="4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9" Jul 10 00:17:53.164232 containerd[1618]: time="2025-07-10T00:17:53.164098511Z" level=error msg="Failed to destroy network for sandbox \"154c0d314cd020536d8c8d3ae24cc34997a38e9ed065bf207f213b9cd1468ff7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.164984 containerd[1618]: time="2025-07-10T00:17:53.164904361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcf4cfc7f-j2vlz,Uid:89b075c8-08cd-4903-a32e-138339255054,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"154c0d314cd020536d8c8d3ae24cc34997a38e9ed065bf207f213b9cd1468ff7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.165201 kubelet[2914]: E0710 00:17:53.165181 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"154c0d314cd020536d8c8d3ae24cc34997a38e9ed065bf207f213b9cd1468ff7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.165328 kubelet[2914]: E0710 00:17:53.165288 2914 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"154c0d314cd020536d8c8d3ae24cc34997a38e9ed065bf207f213b9cd1468ff7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bcf4cfc7f-j2vlz" Jul 10 00:17:53.165328 kubelet[2914]: E0710 00:17:53.165304 2914 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"154c0d314cd020536d8c8d3ae24cc34997a38e9ed065bf207f213b9cd1468ff7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bcf4cfc7f-j2vlz" Jul 10 00:17:53.165879 kubelet[2914]: E0710 00:17:53.165327 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bcf4cfc7f-j2vlz_calico-apiserver(89b075c8-08cd-4903-a32e-138339255054)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-apiserver-5bcf4cfc7f-j2vlz_calico-apiserver(89b075c8-08cd-4903-a32e-138339255054)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"154c0d314cd020536d8c8d3ae24cc34997a38e9ed065bf207f213b9cd1468ff7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bcf4cfc7f-j2vlz" podUID="89b075c8-08cd-4903-a32e-138339255054" Jul 10 00:17:53.168797 containerd[1618]: time="2025-07-10T00:17:53.168755209Z" level=error msg="Failed to destroy network for sandbox \"2743f6b6350627513b44660544262b516db91b2b062dca2a8484084f751233e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.169422 containerd[1618]: time="2025-07-10T00:17:53.169320297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ktmfr,Uid:06016ed6-c265-4984-9437-f30edfbeb627,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2743f6b6350627513b44660544262b516db91b2b062dca2a8484084f751233e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.169826 kubelet[2914]: E0710 00:17:53.169787 2914 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2743f6b6350627513b44660544262b516db91b2b062dca2a8484084f751233e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:17:53.169860 
kubelet[2914]: E0710 00:17:53.169827 2914 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2743f6b6350627513b44660544262b516db91b2b062dca2a8484084f751233e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ktmfr" Jul 10 00:17:53.169860 kubelet[2914]: E0710 00:17:53.169842 2914 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2743f6b6350627513b44660544262b516db91b2b062dca2a8484084f751233e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ktmfr" Jul 10 00:17:53.169900 kubelet[2914]: E0710 00:17:53.169872 2914 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ktmfr_kube-system(06016ed6-c265-4984-9437-f30edfbeb627)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ktmfr_kube-system(06016ed6-c265-4984-9437-f30edfbeb627)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2743f6b6350627513b44660544262b516db91b2b062dca2a8484084f751233e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ktmfr" podUID="06016ed6-c265-4984-9437-f30edfbeb627" Jul 10 00:17:57.638105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount518218126.mount: Deactivated successfully. 
Jul 10 00:17:57.703827 containerd[1618]: time="2025-07-10T00:17:57.697325477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 10 00:17:57.707804 containerd[1618]: time="2025-07-10T00:17:57.707745726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:17:57.716525 containerd[1618]: time="2025-07-10T00:17:57.716476795Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:17:57.717246 containerd[1618]: time="2025-07-10T00:17:57.717011984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:17:57.717353 containerd[1618]: time="2025-07-10T00:17:57.717331588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 4.718455887s" Jul 10 00:17:57.717389 containerd[1618]: time="2025-07-10T00:17:57.717354294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 10 00:17:57.734516 containerd[1618]: time="2025-07-10T00:17:57.734430441Z" level=info msg="CreateContainer within sandbox \"ce7bece0134951dab1aaa15d293f84762954483ea1bfab690ddfdb05dce4e7fe\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 00:17:57.771264 containerd[1618]: time="2025-07-10T00:17:57.769184131Z" level=info msg="Container 
f90873fd3cb87b8aa9a96512b14a2ac653258afd1cbd00a321ca809106cb6ae1: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:17:57.771157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3763229113.mount: Deactivated successfully. Jul 10 00:17:57.865969 containerd[1618]: time="2025-07-10T00:17:57.865898538Z" level=info msg="CreateContainer within sandbox \"ce7bece0134951dab1aaa15d293f84762954483ea1bfab690ddfdb05dce4e7fe\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f90873fd3cb87b8aa9a96512b14a2ac653258afd1cbd00a321ca809106cb6ae1\"" Jul 10 00:17:57.866833 containerd[1618]: time="2025-07-10T00:17:57.866763917Z" level=info msg="StartContainer for \"f90873fd3cb87b8aa9a96512b14a2ac653258afd1cbd00a321ca809106cb6ae1\"" Jul 10 00:17:57.885965 containerd[1618]: time="2025-07-10T00:17:57.885914805Z" level=info msg="connecting to shim f90873fd3cb87b8aa9a96512b14a2ac653258afd1cbd00a321ca809106cb6ae1" address="unix:///run/containerd/s/9966c4c2668afc44ff1de6f0b8fb3c57720fb16d01541cc233bd95e82a951583" protocol=ttrpc version=3 Jul 10 00:17:58.095537 systemd[1]: Started cri-containerd-f90873fd3cb87b8aa9a96512b14a2ac653258afd1cbd00a321ca809106cb6ae1.scope - libcontainer container f90873fd3cb87b8aa9a96512b14a2ac653258afd1cbd00a321ca809106cb6ae1. Jul 10 00:17:58.146249 containerd[1618]: time="2025-07-10T00:17:58.146230048Z" level=info msg="StartContainer for \"f90873fd3cb87b8aa9a96512b14a2ac653258afd1cbd00a321ca809106cb6ae1\" returns successfully" Jul 10 00:17:58.275536 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 00:17:58.277177 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 10 00:17:58.784663 kubelet[2914]: I0710 00:17:58.784614 2914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444-whisker-ca-bundle\") pod \"c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444\" (UID: \"c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444\") " Jul 10 00:17:58.785350 kubelet[2914]: I0710 00:17:58.784650 2914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dm7d\" (UniqueName: \"kubernetes.io/projected/c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444-kube-api-access-5dm7d\") pod \"c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444\" (UID: \"c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444\") " Jul 10 00:17:58.785350 kubelet[2914]: I0710 00:17:58.785124 2914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444-whisker-backend-key-pair\") pod \"c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444\" (UID: \"c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444\") " Jul 10 00:17:58.793893 kubelet[2914]: I0710 00:17:58.793352 2914 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444" (UID: "c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:17:58.798827 kubelet[2914]: I0710 00:17:58.798803 2914 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444" (UID: "c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:17:58.799195 kubelet[2914]: I0710 00:17:58.799171 2914 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444-kube-api-access-5dm7d" (OuterVolumeSpecName: "kube-api-access-5dm7d") pod "c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444" (UID: "c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444"). InnerVolumeSpecName "kube-api-access-5dm7d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:17:58.799562 systemd[1]: var-lib-kubelet-pods-c6dc13b4\x2d2d80\x2d4cf6\x2da904\x2d6a6c8f0d6444-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 10 00:17:58.801304 systemd[1]: var-lib-kubelet-pods-c6dc13b4\x2d2d80\x2d4cf6\x2da904\x2d6a6c8f0d6444-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5dm7d.mount: Deactivated successfully. Jul 10 00:17:58.885435 kubelet[2914]: I0710 00:17:58.885404 2914 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 10 00:17:58.885435 kubelet[2914]: I0710 00:17:58.885433 2914 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 10 00:17:58.885589 kubelet[2914]: I0710 00:17:58.885471 2914 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5dm7d\" (UniqueName: \"kubernetes.io/projected/c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444-kube-api-access-5dm7d\") on node \"localhost\" DevicePath \"\"" Jul 10 00:17:58.917643 systemd[1]: Removed slice kubepods-besteffort-podc6dc13b4_2d80_4cf6_a904_6a6c8f0d6444.slice - libcontainer container kubepods-besteffort-podc6dc13b4_2d80_4cf6_a904_6a6c8f0d6444.slice. 
Jul 10 00:17:59.063616 kubelet[2914]: I0710 00:17:59.063429 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g6gmj" podStartSLOduration=3.379042817 podStartE2EDuration="19.063256374s" podCreationTimestamp="2025-07-10 00:17:40 +0000 UTC" firstStartedPulling="2025-07-10 00:17:42.033546769 +0000 UTC m=+17.196563765" lastFinishedPulling="2025-07-10 00:17:57.717760325 +0000 UTC m=+32.880777322" observedRunningTime="2025-07-10 00:17:59.063034732 +0000 UTC m=+34.226051738" watchObservedRunningTime="2025-07-10 00:17:59.063256374 +0000 UTC m=+34.226273374" Jul 10 00:17:59.123334 systemd[1]: Created slice kubepods-besteffort-pod03cf87f0_2918_4fae_b6e7_5e2e8bed9888.slice - libcontainer container kubepods-besteffort-pod03cf87f0_2918_4fae_b6e7_5e2e8bed9888.slice. Jul 10 00:17:59.188749 kubelet[2914]: I0710 00:17:59.188415 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03cf87f0-2918-4fae-b6e7-5e2e8bed9888-whisker-ca-bundle\") pod \"whisker-6d568d6c4c-t6mbx\" (UID: \"03cf87f0-2918-4fae-b6e7-5e2e8bed9888\") " pod="calico-system/whisker-6d568d6c4c-t6mbx" Jul 10 00:17:59.188749 kubelet[2914]: I0710 00:17:59.188490 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrpk7\" (UniqueName: \"kubernetes.io/projected/03cf87f0-2918-4fae-b6e7-5e2e8bed9888-kube-api-access-nrpk7\") pod \"whisker-6d568d6c4c-t6mbx\" (UID: \"03cf87f0-2918-4fae-b6e7-5e2e8bed9888\") " pod="calico-system/whisker-6d568d6c4c-t6mbx" Jul 10 00:17:59.188749 kubelet[2914]: I0710 00:17:59.188514 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/03cf87f0-2918-4fae-b6e7-5e2e8bed9888-whisker-backend-key-pair\") pod \"whisker-6d568d6c4c-t6mbx\" (UID: 
\"03cf87f0-2918-4fae-b6e7-5e2e8bed9888\") " pod="calico-system/whisker-6d568d6c4c-t6mbx" Jul 10 00:17:59.426504 containerd[1618]: time="2025-07-10T00:17:59.426413620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d568d6c4c-t6mbx,Uid:03cf87f0-2918-4fae-b6e7-5e2e8bed9888,Namespace:calico-system,Attempt:0,}" Jul 10 00:17:59.983395 systemd-networkd[1540]: calife5fb499771: Link UP Jul 10 00:17:59.983553 systemd-networkd[1540]: calife5fb499771: Gained carrier Jul 10 00:17:59.997771 containerd[1618]: 2025-07-10 00:17:59.469 [INFO][3992] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:17:59.997771 containerd[1618]: 2025-07-10 00:17:59.509 [INFO][3992] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6d568d6c4c--t6mbx-eth0 whisker-6d568d6c4c- calico-system 03cf87f0-2918-4fae-b6e7-5e2e8bed9888 905 0 2025-07-10 00:17:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d568d6c4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6d568d6c4c-t6mbx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calife5fb499771 [] [] }} ContainerID="4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" Namespace="calico-system" Pod="whisker-6d568d6c4c-t6mbx" WorkloadEndpoint="localhost-k8s-whisker--6d568d6c4c--t6mbx-" Jul 10 00:17:59.997771 containerd[1618]: 2025-07-10 00:17:59.509 [INFO][3992] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" Namespace="calico-system" Pod="whisker-6d568d6c4c-t6mbx" WorkloadEndpoint="localhost-k8s-whisker--6d568d6c4c--t6mbx-eth0" Jul 10 00:17:59.997771 containerd[1618]: 2025-07-10 00:17:59.926 [INFO][4000] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" HandleID="k8s-pod-network.4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" Workload="localhost-k8s-whisker--6d568d6c4c--t6mbx-eth0" Jul 10 00:17:59.997992 containerd[1618]: 2025-07-10 00:17:59.931 [INFO][4000] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" HandleID="k8s-pod-network.4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" Workload="localhost-k8s-whisker--6d568d6c4c--t6mbx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6d568d6c4c-t6mbx", "timestamp":"2025-07-10 00:17:59.925991295 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:17:59.997992 containerd[1618]: 2025-07-10 00:17:59.931 [INFO][4000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:17:59.997992 containerd[1618]: 2025-07-10 00:17:59.931 [INFO][4000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:17:59.997992 containerd[1618]: 2025-07-10 00:17:59.932 [INFO][4000] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:17:59.997992 containerd[1618]: 2025-07-10 00:17:59.945 [INFO][4000] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" host="localhost" Jul 10 00:17:59.997992 containerd[1618]: 2025-07-10 00:17:59.959 [INFO][4000] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:17:59.997992 containerd[1618]: 2025-07-10 00:17:59.962 [INFO][4000] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:17:59.997992 containerd[1618]: 2025-07-10 00:17:59.963 [INFO][4000] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:17:59.997992 containerd[1618]: 2025-07-10 00:17:59.964 [INFO][4000] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:17:59.997992 containerd[1618]: 2025-07-10 00:17:59.964 [INFO][4000] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" host="localhost" Jul 10 00:17:59.998653 containerd[1618]: 2025-07-10 00:17:59.965 [INFO][4000] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792 Jul 10 00:17:59.998653 containerd[1618]: 2025-07-10 00:17:59.967 [INFO][4000] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" host="localhost" Jul 10 00:17:59.998653 containerd[1618]: 2025-07-10 00:17:59.969 [INFO][4000] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" host="localhost" Jul 10 00:17:59.998653 containerd[1618]: 2025-07-10 00:17:59.969 [INFO][4000] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" host="localhost" Jul 10 00:17:59.998653 containerd[1618]: 2025-07-10 00:17:59.969 [INFO][4000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:17:59.998653 containerd[1618]: 2025-07-10 00:17:59.969 [INFO][4000] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" HandleID="k8s-pod-network.4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" Workload="localhost-k8s-whisker--6d568d6c4c--t6mbx-eth0" Jul 10 00:17:59.999735 containerd[1618]: 2025-07-10 00:17:59.971 [INFO][3992] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" Namespace="calico-system" Pod="whisker-6d568d6c4c-t6mbx" WorkloadEndpoint="localhost-k8s-whisker--6d568d6c4c--t6mbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d568d6c4c--t6mbx-eth0", GenerateName:"whisker-6d568d6c4c-", Namespace:"calico-system", SelfLink:"", UID:"03cf87f0-2918-4fae-b6e7-5e2e8bed9888", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d568d6c4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6d568d6c4c-t6mbx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calife5fb499771", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:17:59.999735 containerd[1618]: 2025-07-10 00:17:59.971 [INFO][3992] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" Namespace="calico-system" Pod="whisker-6d568d6c4c-t6mbx" WorkloadEndpoint="localhost-k8s-whisker--6d568d6c4c--t6mbx-eth0" Jul 10 00:17:59.999797 containerd[1618]: 2025-07-10 00:17:59.971 [INFO][3992] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife5fb499771 ContainerID="4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" Namespace="calico-system" Pod="whisker-6d568d6c4c-t6mbx" WorkloadEndpoint="localhost-k8s-whisker--6d568d6c4c--t6mbx-eth0" Jul 10 00:17:59.999797 containerd[1618]: 2025-07-10 00:17:59.985 [INFO][3992] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" Namespace="calico-system" Pod="whisker-6d568d6c4c-t6mbx" WorkloadEndpoint="localhost-k8s-whisker--6d568d6c4c--t6mbx-eth0" Jul 10 00:17:59.999833 containerd[1618]: 2025-07-10 00:17:59.985 [INFO][3992] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" Namespace="calico-system" Pod="whisker-6d568d6c4c-t6mbx" 
WorkloadEndpoint="localhost-k8s-whisker--6d568d6c4c--t6mbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d568d6c4c--t6mbx-eth0", GenerateName:"whisker-6d568d6c4c-", Namespace:"calico-system", SelfLink:"", UID:"03cf87f0-2918-4fae-b6e7-5e2e8bed9888", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d568d6c4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792", Pod:"whisker-6d568d6c4c-t6mbx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calife5fb499771", MAC:"9a:14:0f:31:b5:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:17:59.999873 containerd[1618]: 2025-07-10 00:17:59.992 [INFO][3992] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" Namespace="calico-system" Pod="whisker-6d568d6c4c-t6mbx" WorkloadEndpoint="localhost-k8s-whisker--6d568d6c4c--t6mbx-eth0" Jul 10 00:18:00.167995 containerd[1618]: time="2025-07-10T00:18:00.167657686Z" level=info msg="connecting to shim 
4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792" address="unix:///run/containerd/s/dd12ea76731ba0b10a2058c1c84ad43ccbda3af6c92d9e8ec0ce2434bb157db0" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:18:00.190633 systemd[1]: Started cri-containerd-4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792.scope - libcontainer container 4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792. Jul 10 00:18:00.201139 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:18:00.243476 containerd[1618]: time="2025-07-10T00:18:00.243168977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d568d6c4c-t6mbx,Uid:03cf87f0-2918-4fae-b6e7-5e2e8bed9888,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792\"" Jul 10 00:18:00.247679 containerd[1618]: time="2025-07-10T00:18:00.247657127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 00:18:00.909646 kubelet[2914]: I0710 00:18:00.909619 2914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444" path="/var/lib/kubelet/pods/c6dc13b4-2d80-4cf6-a904-6a6c8f0d6444/volumes" Jul 10 00:18:01.426621 systemd-networkd[1540]: calife5fb499771: Gained IPv6LL Jul 10 00:18:01.566280 containerd[1618]: time="2025-07-10T00:18:01.566248849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:01.566742 containerd[1618]: time="2025-07-10T00:18:01.566708456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 10 00:18:01.567266 containerd[1618]: time="2025-07-10T00:18:01.567086681Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:01.568121 containerd[1618]: time="2025-07-10T00:18:01.568105670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:01.568831 containerd[1618]: time="2025-07-10T00:18:01.568817134Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.321137111s" Jul 10 00:18:01.568904 containerd[1618]: time="2025-07-10T00:18:01.568894253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 10 00:18:01.571430 containerd[1618]: time="2025-07-10T00:18:01.571398900Z" level=info msg="CreateContainer within sandbox \"4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 00:18:01.576544 containerd[1618]: time="2025-07-10T00:18:01.576523106Z" level=info msg="Container 8d15b2298a15163bb20b79fec4405c60e51611c1a42a976dc461fa22455a532c: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:18:01.580805 containerd[1618]: time="2025-07-10T00:18:01.580743587Z" level=info msg="CreateContainer within sandbox \"4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8d15b2298a15163bb20b79fec4405c60e51611c1a42a976dc461fa22455a532c\"" Jul 10 00:18:01.581244 containerd[1618]: time="2025-07-10T00:18:01.581229284Z" level=info msg="StartContainer for 
\"8d15b2298a15163bb20b79fec4405c60e51611c1a42a976dc461fa22455a532c\"" Jul 10 00:18:01.582176 containerd[1618]: time="2025-07-10T00:18:01.582155628Z" level=info msg="connecting to shim 8d15b2298a15163bb20b79fec4405c60e51611c1a42a976dc461fa22455a532c" address="unix:///run/containerd/s/dd12ea76731ba0b10a2058c1c84ad43ccbda3af6c92d9e8ec0ce2434bb157db0" protocol=ttrpc version=3 Jul 10 00:18:01.612604 systemd[1]: Started cri-containerd-8d15b2298a15163bb20b79fec4405c60e51611c1a42a976dc461fa22455a532c.scope - libcontainer container 8d15b2298a15163bb20b79fec4405c60e51611c1a42a976dc461fa22455a532c. Jul 10 00:18:01.648045 containerd[1618]: time="2025-07-10T00:18:01.648021602Z" level=info msg="StartContainer for \"8d15b2298a15163bb20b79fec4405c60e51611c1a42a976dc461fa22455a532c\" returns successfully" Jul 10 00:18:01.648926 containerd[1618]: time="2025-07-10T00:18:01.648806074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 00:18:03.970546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2397545889.mount: Deactivated successfully. 
Jul 10 00:18:03.972224 containerd[1618]: time="2025-07-10T00:18:03.972156899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wl5zx,Uid:355aa274-4760-4539-8c0e-5af0b659ef62,Namespace:kube-system,Attempt:0,}" Jul 10 00:18:03.983589 containerd[1618]: time="2025-07-10T00:18:03.982888491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:03.983989 containerd[1618]: time="2025-07-10T00:18:03.983676530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 10 00:18:03.984189 containerd[1618]: time="2025-07-10T00:18:03.984172458Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:03.985838 containerd[1618]: time="2025-07-10T00:18:03.985822093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:03.986560 containerd[1618]: time="2025-07-10T00:18:03.986519791Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.337529533s" Jul 10 00:18:03.986560 containerd[1618]: time="2025-07-10T00:18:03.986537655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 10 00:18:04.003049 containerd[1618]: 
time="2025-07-10T00:18:04.003022654Z" level=info msg="CreateContainer within sandbox \"4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 00:18:04.012074 containerd[1618]: time="2025-07-10T00:18:04.011642039Z" level=info msg="Container c0de4e80ad081357e93970dfebe7e419469e99e37098697a4c77df4dd3ef3fc1: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:18:04.014822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2855704957.mount: Deactivated successfully. Jul 10 00:18:04.018645 containerd[1618]: time="2025-07-10T00:18:04.018336086Z" level=info msg="CreateContainer within sandbox \"4a95e85ca0473205265a95f0791a3b8b29dc14b2d46686d789454a161c3e5792\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c0de4e80ad081357e93970dfebe7e419469e99e37098697a4c77df4dd3ef3fc1\"" Jul 10 00:18:04.022024 containerd[1618]: time="2025-07-10T00:18:04.022004544Z" level=info msg="StartContainer for \"c0de4e80ad081357e93970dfebe7e419469e99e37098697a4c77df4dd3ef3fc1\"" Jul 10 00:18:04.022870 containerd[1618]: time="2025-07-10T00:18:04.022851426Z" level=info msg="connecting to shim c0de4e80ad081357e93970dfebe7e419469e99e37098697a4c77df4dd3ef3fc1" address="unix:///run/containerd/s/dd12ea76731ba0b10a2058c1c84ad43ccbda3af6c92d9e8ec0ce2434bb157db0" protocol=ttrpc version=3 Jul 10 00:18:04.043539 systemd[1]: Started cri-containerd-c0de4e80ad081357e93970dfebe7e419469e99e37098697a4c77df4dd3ef3fc1.scope - libcontainer container c0de4e80ad081357e93970dfebe7e419469e99e37098697a4c77df4dd3ef3fc1. 
Jul 10 00:18:04.097830 systemd-networkd[1540]: cali6f2f3ec6518: Link UP Jul 10 00:18:04.098813 systemd-networkd[1540]: cali6f2f3ec6518: Gained carrier Jul 10 00:18:04.101162 containerd[1618]: time="2025-07-10T00:18:04.100671203Z" level=info msg="StartContainer for \"c0de4e80ad081357e93970dfebe7e419469e99e37098697a4c77df4dd3ef3fc1\" returns successfully" Jul 10 00:18:04.114970 containerd[1618]: 2025-07-10 00:18:03.998 [INFO][4259] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:18:04.114970 containerd[1618]: 2025-07-10 00:18:04.009 [INFO][4259] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wl5zx-eth0 coredns-668d6bf9bc- kube-system 355aa274-4760-4539-8c0e-5af0b659ef62 839 0 2025-07-10 00:17:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-wl5zx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6f2f3ec6518 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" Namespace="kube-system" Pod="coredns-668d6bf9bc-wl5zx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wl5zx-" Jul 10 00:18:04.114970 containerd[1618]: 2025-07-10 00:18:04.009 [INFO][4259] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" Namespace="kube-system" Pod="coredns-668d6bf9bc-wl5zx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wl5zx-eth0" Jul 10 00:18:04.114970 containerd[1618]: 2025-07-10 00:18:04.051 [INFO][4273] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" 
HandleID="k8s-pod-network.5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" Workload="localhost-k8s-coredns--668d6bf9bc--wl5zx-eth0" Jul 10 00:18:04.117582 containerd[1618]: 2025-07-10 00:18:04.052 [INFO][4273] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" HandleID="k8s-pod-network.5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" Workload="localhost-k8s-coredns--668d6bf9bc--wl5zx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f890), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wl5zx", "timestamp":"2025-07-10 00:18:04.05133824 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:18:04.117582 containerd[1618]: 2025-07-10 00:18:04.052 [INFO][4273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:18:04.117582 containerd[1618]: 2025-07-10 00:18:04.052 [INFO][4273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:18:04.117582 containerd[1618]: 2025-07-10 00:18:04.052 [INFO][4273] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:18:04.117582 containerd[1618]: 2025-07-10 00:18:04.058 [INFO][4273] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" host="localhost" Jul 10 00:18:04.117582 containerd[1618]: 2025-07-10 00:18:04.068 [INFO][4273] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:18:04.117582 containerd[1618]: 2025-07-10 00:18:04.071 [INFO][4273] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:18:04.117582 containerd[1618]: 2025-07-10 00:18:04.078 [INFO][4273] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:04.117582 containerd[1618]: 2025-07-10 00:18:04.081 [INFO][4273] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:04.117582 containerd[1618]: 2025-07-10 00:18:04.082 [INFO][4273] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" host="localhost" Jul 10 00:18:04.118129 containerd[1618]: 2025-07-10 00:18:04.083 [INFO][4273] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59 Jul 10 00:18:04.118129 containerd[1618]: 2025-07-10 00:18:04.087 [INFO][4273] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" host="localhost" Jul 10 00:18:04.118129 containerd[1618]: 2025-07-10 00:18:04.091 [INFO][4273] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" host="localhost" Jul 10 00:18:04.118129 containerd[1618]: 2025-07-10 00:18:04.092 [INFO][4273] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" host="localhost" Jul 10 00:18:04.118129 containerd[1618]: 2025-07-10 00:18:04.092 [INFO][4273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:18:04.118129 containerd[1618]: 2025-07-10 00:18:04.092 [INFO][4273] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" HandleID="k8s-pod-network.5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" Workload="localhost-k8s-coredns--668d6bf9bc--wl5zx-eth0" Jul 10 00:18:04.118874 containerd[1618]: 2025-07-10 00:18:04.095 [INFO][4259] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" Namespace="kube-system" Pod="coredns-668d6bf9bc-wl5zx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wl5zx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wl5zx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"355aa274-4760-4539-8c0e-5af0b659ef62", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-wl5zx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f2f3ec6518", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:04.119039 containerd[1618]: 2025-07-10 00:18:04.096 [INFO][4259] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" Namespace="kube-system" Pod="coredns-668d6bf9bc-wl5zx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wl5zx-eth0" Jul 10 00:18:04.119039 containerd[1618]: 2025-07-10 00:18:04.096 [INFO][4259] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f2f3ec6518 ContainerID="5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" Namespace="kube-system" Pod="coredns-668d6bf9bc-wl5zx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wl5zx-eth0" Jul 10 00:18:04.119039 containerd[1618]: 2025-07-10 00:18:04.099 [INFO][4259] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" Namespace="kube-system" Pod="coredns-668d6bf9bc-wl5zx" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wl5zx-eth0" Jul 10 00:18:04.119309 containerd[1618]: 2025-07-10 00:18:04.099 [INFO][4259] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" Namespace="kube-system" Pod="coredns-668d6bf9bc-wl5zx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wl5zx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wl5zx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"355aa274-4760-4539-8c0e-5af0b659ef62", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59", Pod:"coredns-668d6bf9bc-wl5zx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f2f3ec6518", MAC:"12:d4:ee:5b:e4:ca", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:04.119309 containerd[1618]: 2025-07-10 00:18:04.109 [INFO][4259] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" Namespace="kube-system" Pod="coredns-668d6bf9bc-wl5zx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wl5zx-eth0" Jul 10 00:18:04.205759 containerd[1618]: time="2025-07-10T00:18:04.205588150Z" level=info msg="connecting to shim 5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59" address="unix:///run/containerd/s/e7c9533d4ede6cdb9f4de5a765098b6d402f7ff542d83245b244e0e3c2c43ee3" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:18:04.222540 systemd[1]: Started cri-containerd-5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59.scope - libcontainer container 5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59. 
Jul 10 00:18:04.231724 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:18:04.258033 containerd[1618]: time="2025-07-10T00:18:04.257999307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wl5zx,Uid:355aa274-4760-4539-8c0e-5af0b659ef62,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59\"" Jul 10 00:18:04.259994 containerd[1618]: time="2025-07-10T00:18:04.259954397Z" level=info msg="CreateContainer within sandbox \"5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:18:04.318242 containerd[1618]: time="2025-07-10T00:18:04.318169563Z" level=info msg="Container c469e2303532ec4e78271b0b54c32cbe868fff8c313df9836b97f9ef8f9dc58e: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:18:04.322874 containerd[1618]: time="2025-07-10T00:18:04.322832854Z" level=info msg="CreateContainer within sandbox \"5fc512ecc25e9ec9702cbaa4b0ab7c0bc79b4438cf7f42edb63f61784e743d59\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c469e2303532ec4e78271b0b54c32cbe868fff8c313df9836b97f9ef8f9dc58e\"" Jul 10 00:18:04.323691 containerd[1618]: time="2025-07-10T00:18:04.323535567Z" level=info msg="StartContainer for \"c469e2303532ec4e78271b0b54c32cbe868fff8c313df9836b97f9ef8f9dc58e\"" Jul 10 00:18:04.324525 containerd[1618]: time="2025-07-10T00:18:04.324485434Z" level=info msg="connecting to shim c469e2303532ec4e78271b0b54c32cbe868fff8c313df9836b97f9ef8f9dc58e" address="unix:///run/containerd/s/e7c9533d4ede6cdb9f4de5a765098b6d402f7ff542d83245b244e0e3c2c43ee3" protocol=ttrpc version=3 Jul 10 00:18:04.348328 systemd[1]: Started cri-containerd-c469e2303532ec4e78271b0b54c32cbe868fff8c313df9836b97f9ef8f9dc58e.scope - libcontainer container c469e2303532ec4e78271b0b54c32cbe868fff8c313df9836b97f9ef8f9dc58e. 
Jul 10 00:18:04.378473 containerd[1618]: time="2025-07-10T00:18:04.378330794Z" level=info msg="StartContainer for \"c469e2303532ec4e78271b0b54c32cbe868fff8c313df9836b97f9ef8f9dc58e\" returns successfully" Jul 10 00:18:04.917919 containerd[1618]: time="2025-07-10T00:18:04.917814308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcf4cfc7f-j2vlz,Uid:89b075c8-08cd-4903-a32e-138339255054,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:18:04.918360 containerd[1618]: time="2025-07-10T00:18:04.918209418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lwqb8,Uid:e075a451-649b-4d0c-8fd9-f058337570ca,Namespace:calico-system,Attempt:0,}" Jul 10 00:18:05.054102 systemd-networkd[1540]: cali8afde4479f9: Link UP Jul 10 00:18:05.054518 systemd-networkd[1540]: cali8afde4479f9: Gained carrier Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:04.955 [INFO][4419] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:04.968 [INFO][4419] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--lwqb8-eth0 goldmane-768f4c5c69- calico-system e075a451-649b-4d0c-8fd9-f058337570ca 840 0 2025-07-10 00:17:40 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-lwqb8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8afde4479f9 [] [] }} ContainerID="0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" Namespace="calico-system" Pod="goldmane-768f4c5c69-lwqb8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lwqb8-" Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:04.968 [INFO][4419] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" Namespace="calico-system" Pod="goldmane-768f4c5c69-lwqb8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lwqb8-eth0" Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.019 [INFO][4442] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" HandleID="k8s-pod-network.0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" Workload="localhost-k8s-goldmane--768f4c5c69--lwqb8-eth0" Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.019 [INFO][4442] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" HandleID="k8s-pod-network.0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" Workload="localhost-k8s-goldmane--768f4c5c69--lwqb8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-lwqb8", "timestamp":"2025-07-10 00:18:05.019554087 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.019 [INFO][4442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.019 [INFO][4442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.019 [INFO][4442] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.024 [INFO][4442] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" host="localhost" Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.031 [INFO][4442] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.036 [INFO][4442] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.038 [INFO][4442] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.039 [INFO][4442] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.039 [INFO][4442] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" host="localhost" Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.040 [INFO][4442] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0 Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.044 [INFO][4442] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" host="localhost" Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.048 [INFO][4442] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" host="localhost" Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.048 [INFO][4442] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" host="localhost" Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.048 [INFO][4442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:18:05.070363 containerd[1618]: 2025-07-10 00:18:05.048 [INFO][4442] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" HandleID="k8s-pod-network.0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" Workload="localhost-k8s-goldmane--768f4c5c69--lwqb8-eth0" Jul 10 00:18:05.071014 containerd[1618]: 2025-07-10 00:18:05.051 [INFO][4419] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" Namespace="calico-system" Pod="goldmane-768f4c5c69-lwqb8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lwqb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--lwqb8-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"e075a451-649b-4d0c-8fd9-f058337570ca", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-lwqb8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8afde4479f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:05.071014 containerd[1618]: 2025-07-10 00:18:05.051 [INFO][4419] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" Namespace="calico-system" Pod="goldmane-768f4c5c69-lwqb8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lwqb8-eth0" Jul 10 00:18:05.071014 containerd[1618]: 2025-07-10 00:18:05.051 [INFO][4419] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8afde4479f9 ContainerID="0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" Namespace="calico-system" Pod="goldmane-768f4c5c69-lwqb8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lwqb8-eth0" Jul 10 00:18:05.071014 containerd[1618]: 2025-07-10 00:18:05.054 [INFO][4419] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" Namespace="calico-system" Pod="goldmane-768f4c5c69-lwqb8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lwqb8-eth0" Jul 10 00:18:05.071014 containerd[1618]: 2025-07-10 00:18:05.059 [INFO][4419] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" Namespace="calico-system" Pod="goldmane-768f4c5c69-lwqb8" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lwqb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--lwqb8-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"e075a451-649b-4d0c-8fd9-f058337570ca", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0", Pod:"goldmane-768f4c5c69-lwqb8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8afde4479f9", MAC:"82:b9:97:83:1d:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:05.071014 containerd[1618]: 2025-07-10 00:18:05.066 [INFO][4419] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" Namespace="calico-system" Pod="goldmane-768f4c5c69-lwqb8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lwqb8-eth0" Jul 10 00:18:05.112214 containerd[1618]: time="2025-07-10T00:18:05.112160091Z" level=info msg="connecting to shim 
0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0" address="unix:///run/containerd/s/e607c6f66dcb9e3917fe54083ede68a67a53f7075210781ff80f927f86f50f80" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:18:05.138680 kubelet[2914]: I0710 00:18:05.135375 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6d568d6c4c-t6mbx" podStartSLOduration=2.376317744 podStartE2EDuration="6.123781862s" podCreationTimestamp="2025-07-10 00:17:59 +0000 UTC" firstStartedPulling="2025-07-10 00:18:00.247340454 +0000 UTC m=+35.410357451" lastFinishedPulling="2025-07-10 00:18:03.994804571 +0000 UTC m=+39.157821569" observedRunningTime="2025-07-10 00:18:05.123402368 +0000 UTC m=+40.286419374" watchObservedRunningTime="2025-07-10 00:18:05.123781862 +0000 UTC m=+40.286798856" Jul 10 00:18:05.164842 systemd[1]: Started cri-containerd-0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0.scope - libcontainer container 0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0. 
Jul 10 00:18:05.182889 systemd-networkd[1540]: cali7eaf1a41d5e: Link UP Jul 10 00:18:05.183809 systemd-networkd[1540]: cali7eaf1a41d5e: Gained carrier Jul 10 00:18:05.192756 kubelet[2914]: I0710 00:18:05.192691 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wl5zx" podStartSLOduration=36.192678969 podStartE2EDuration="36.192678969s" podCreationTimestamp="2025-07-10 00:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:18:05.151523303 +0000 UTC m=+40.314540308" watchObservedRunningTime="2025-07-10 00:18:05.192678969 +0000 UTC m=+40.355695967" Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:04.968 [INFO][4425] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:04.978 [INFO][4425] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0 calico-apiserver-5bcf4cfc7f- calico-apiserver 89b075c8-08cd-4903-a32e-138339255054 837 0 2025-07-10 00:17:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bcf4cfc7f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bcf4cfc7f-j2vlz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7eaf1a41d5e [] [] }} ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Namespace="calico-apiserver" Pod="calico-apiserver-5bcf4cfc7f-j2vlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-" Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:04.978 [INFO][4425] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Namespace="calico-apiserver" Pod="calico-apiserver-5bcf4cfc7f-j2vlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0" Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.020 [INFO][4446] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" HandleID="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0" Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.020 [INFO][4446] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" HandleID="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5bcf4cfc7f-j2vlz", "timestamp":"2025-07-10 00:18:05.020586168 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.020 [INFO][4446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.049 [INFO][4446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.049 [INFO][4446] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.129 [INFO][4446] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" host="localhost" Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.148 [INFO][4446] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.161 [INFO][4446] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.163 [INFO][4446] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.165 [INFO][4446] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.165 [INFO][4446] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" host="localhost" Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.167 [INFO][4446] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745 Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.170 [INFO][4446] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" host="localhost" Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.176 [INFO][4446] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" host="localhost" Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.176 [INFO][4446] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" host="localhost" Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.176 [INFO][4446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:18:05.195597 containerd[1618]: 2025-07-10 00:18:05.176 [INFO][4446] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" HandleID="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0" Jul 10 00:18:05.197265 containerd[1618]: 2025-07-10 00:18:05.179 [INFO][4425] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Namespace="calico-apiserver" Pod="calico-apiserver-5bcf4cfc7f-j2vlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0", GenerateName:"calico-apiserver-5bcf4cfc7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"89b075c8-08cd-4903-a32e-138339255054", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcf4cfc7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bcf4cfc7f-j2vlz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7eaf1a41d5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:05.197265 containerd[1618]: 2025-07-10 00:18:05.179 [INFO][4425] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Namespace="calico-apiserver" Pod="calico-apiserver-5bcf4cfc7f-j2vlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0" Jul 10 00:18:05.197265 containerd[1618]: 2025-07-10 00:18:05.179 [INFO][4425] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7eaf1a41d5e ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Namespace="calico-apiserver" Pod="calico-apiserver-5bcf4cfc7f-j2vlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0" Jul 10 00:18:05.197265 containerd[1618]: 2025-07-10 00:18:05.184 [INFO][4425] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Namespace="calico-apiserver" Pod="calico-apiserver-5bcf4cfc7f-j2vlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0" Jul 10 00:18:05.197265 containerd[1618]: 2025-07-10 00:18:05.184 [INFO][4425] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Namespace="calico-apiserver" Pod="calico-apiserver-5bcf4cfc7f-j2vlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0", GenerateName:"calico-apiserver-5bcf4cfc7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"89b075c8-08cd-4903-a32e-138339255054", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcf4cfc7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745", Pod:"calico-apiserver-5bcf4cfc7f-j2vlz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7eaf1a41d5e", MAC:"ea:f2:1f:44:15:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:05.197265 containerd[1618]: 2025-07-10 00:18:05.191 [INFO][4425] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Namespace="calico-apiserver" Pod="calico-apiserver-5bcf4cfc7f-j2vlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0" Jul 10 00:18:05.214330 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:18:05.214949 containerd[1618]: time="2025-07-10T00:18:05.214824794Z" level=info msg="connecting to shim 23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" address="unix:///run/containerd/s/ad3c7cb17ce6032394c084283d44fcffa681cbe7ff00bf7378c1d6c6e203a014" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:18:05.234506 systemd[1]: Started cri-containerd-23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745.scope - libcontainer container 23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745. Jul 10 00:18:05.251794 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:18:05.277761 containerd[1618]: time="2025-07-10T00:18:05.277646582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lwqb8,Uid:e075a451-649b-4d0c-8fd9-f058337570ca,Namespace:calico-system,Attempt:0,} returns sandbox id \"0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0\"" Jul 10 00:18:05.290586 containerd[1618]: time="2025-07-10T00:18:05.290558901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcf4cfc7f-j2vlz,Uid:89b075c8-08cd-4903-a32e-138339255054,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745\"" Jul 10 00:18:05.296099 containerd[1618]: time="2025-07-10T00:18:05.296072258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 00:18:05.714652 systemd-networkd[1540]: cali6f2f3ec6518: Gained IPv6LL Jul 10 00:18:05.907489 containerd[1618]: 
time="2025-07-10T00:18:05.907396685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5dc87db9-p7g8k,Uid:fb6624e5-1abf-4638-86f1-56b2a2c177a9,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:18:05.908210 containerd[1618]: time="2025-07-10T00:18:05.908116723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ktmfr,Uid:06016ed6-c265-4984-9437-f30edfbeb627,Namespace:kube-system,Attempt:0,}" Jul 10 00:18:05.984275 kubelet[2914]: I0710 00:18:05.984183 2914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:18:06.007089 systemd-networkd[1540]: cali0fdec8b0fe3: Link UP Jul 10 00:18:06.009178 systemd-networkd[1540]: cali0fdec8b0fe3: Gained carrier Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.943 [INFO][4592] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.950 [INFO][4592] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--ktmfr-eth0 coredns-668d6bf9bc- kube-system 06016ed6-c265-4984-9437-f30edfbeb627 827 0 2025-07-10 00:17:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-ktmfr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0fdec8b0fe3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-ktmfr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ktmfr-" Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.950 [INFO][4592] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-ktmfr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ktmfr-eth0" Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.975 [INFO][4614] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" HandleID="k8s-pod-network.a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" Workload="localhost-k8s-coredns--668d6bf9bc--ktmfr-eth0" Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.975 [INFO][4614] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" HandleID="k8s-pod-network.a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" Workload="localhost-k8s-coredns--668d6bf9bc--ktmfr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-ktmfr", "timestamp":"2025-07-10 00:18:05.975335221 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.975 [INFO][4614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.976 [INFO][4614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.976 [INFO][4614] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.982 [INFO][4614] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" host="localhost" Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.985 [INFO][4614] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.987 [INFO][4614] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.988 [INFO][4614] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.990 [INFO][4614] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.990 [INFO][4614] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" host="localhost" Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.991 [INFO][4614] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1 Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.993 [INFO][4614] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" host="localhost" Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.998 [INFO][4614] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" host="localhost" Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.999 [INFO][4614] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" host="localhost" Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.999 [INFO][4614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:18:06.029758 containerd[1618]: 2025-07-10 00:18:05.999 [INFO][4614] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" HandleID="k8s-pod-network.a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" Workload="localhost-k8s-coredns--668d6bf9bc--ktmfr-eth0" Jul 10 00:18:06.031639 containerd[1618]: 2025-07-10 00:18:06.003 [INFO][4592] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-ktmfr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ktmfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ktmfr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"06016ed6-c265-4984-9437-f30edfbeb627", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-ktmfr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0fdec8b0fe3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:06.031639 containerd[1618]: 2025-07-10 00:18:06.003 [INFO][4592] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-ktmfr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ktmfr-eth0" Jul 10 00:18:06.031639 containerd[1618]: 2025-07-10 00:18:06.003 [INFO][4592] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fdec8b0fe3 ContainerID="a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-ktmfr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ktmfr-eth0" Jul 10 00:18:06.031639 containerd[1618]: 2025-07-10 00:18:06.008 [INFO][4592] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-ktmfr" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ktmfr-eth0" Jul 10 00:18:06.031639 containerd[1618]: 2025-07-10 00:18:06.009 [INFO][4592] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-ktmfr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ktmfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ktmfr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"06016ed6-c265-4984-9437-f30edfbeb627", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1", Pod:"coredns-668d6bf9bc-ktmfr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0fdec8b0fe3", MAC:"ba:67:66:56:84:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:06.031639 containerd[1618]: 2025-07-10 00:18:06.023 [INFO][4592] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-ktmfr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ktmfr-eth0" Jul 10 00:18:06.059368 containerd[1618]: time="2025-07-10T00:18:06.059202331Z" level=info msg="connecting to shim a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1" address="unix:///run/containerd/s/f2f0458af4c0c44d4a1b78ca779b1a87e14d1cdab5bfeba6765f04307937961b" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:18:06.074553 systemd[1]: Started cri-containerd-a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1.scope - libcontainer container a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1. 
Jul 10 00:18:06.082154 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:18:06.108793 systemd-networkd[1540]: cali012f719ac12: Link UP Jul 10 00:18:06.111285 systemd-networkd[1540]: cali012f719ac12: Gained carrier Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:05.942 [INFO][4593] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:05.951 [INFO][4593] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c5dc87db9--p7g8k-eth0 calico-apiserver-c5dc87db9- calico-apiserver fb6624e5-1abf-4638-86f1-56b2a2c177a9 838 0 2025-07-10 00:17:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c5dc87db9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c5dc87db9-p7g8k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali012f719ac12 [] [] }} ContainerID="8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" Namespace="calico-apiserver" Pod="calico-apiserver-c5dc87db9-p7g8k" WorkloadEndpoint="localhost-k8s-calico--apiserver--c5dc87db9--p7g8k-" Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:05.951 [INFO][4593] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" Namespace="calico-apiserver" Pod="calico-apiserver-c5dc87db9-p7g8k" WorkloadEndpoint="localhost-k8s-calico--apiserver--c5dc87db9--p7g8k-eth0" Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:05.983 [INFO][4616] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" HandleID="k8s-pod-network.8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" Workload="localhost-k8s-calico--apiserver--c5dc87db9--p7g8k-eth0" Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:05.983 [INFO][4616] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" HandleID="k8s-pod-network.8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" Workload="localhost-k8s-calico--apiserver--c5dc87db9--p7g8k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd920), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c5dc87db9-p7g8k", "timestamp":"2025-07-10 00:18:05.98324883 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:05.983 [INFO][4616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:05.999 [INFO][4616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:05.999 [INFO][4616] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:06.083 [INFO][4616] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" host="localhost" Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:06.085 [INFO][4616] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:06.088 [INFO][4616] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:06.089 [INFO][4616] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:06.091 [INFO][4616] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:06.091 [INFO][4616] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" host="localhost" Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:06.093 [INFO][4616] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:06.096 [INFO][4616] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" host="localhost" Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:06.103 [INFO][4616] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" host="localhost" Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:06.103 [INFO][4616] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" host="localhost" Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:06.103 [INFO][4616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:18:06.121890 containerd[1618]: 2025-07-10 00:18:06.103 [INFO][4616] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" HandleID="k8s-pod-network.8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" Workload="localhost-k8s-calico--apiserver--c5dc87db9--p7g8k-eth0" Jul 10 00:18:06.124775 containerd[1618]: 2025-07-10 00:18:06.106 [INFO][4593] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" Namespace="calico-apiserver" Pod="calico-apiserver-c5dc87db9-p7g8k" WorkloadEndpoint="localhost-k8s-calico--apiserver--c5dc87db9--p7g8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c5dc87db9--p7g8k-eth0", GenerateName:"calico-apiserver-c5dc87db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb6624e5-1abf-4638-86f1-56b2a2c177a9", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5dc87db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c5dc87db9-p7g8k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali012f719ac12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:06.124775 containerd[1618]: 2025-07-10 00:18:06.106 [INFO][4593] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" Namespace="calico-apiserver" Pod="calico-apiserver-c5dc87db9-p7g8k" WorkloadEndpoint="localhost-k8s-calico--apiserver--c5dc87db9--p7g8k-eth0" Jul 10 00:18:06.124775 containerd[1618]: 2025-07-10 00:18:06.106 [INFO][4593] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali012f719ac12 ContainerID="8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" Namespace="calico-apiserver" Pod="calico-apiserver-c5dc87db9-p7g8k" WorkloadEndpoint="localhost-k8s-calico--apiserver--c5dc87db9--p7g8k-eth0" Jul 10 00:18:06.124775 containerd[1618]: 2025-07-10 00:18:06.111 [INFO][4593] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" Namespace="calico-apiserver" Pod="calico-apiserver-c5dc87db9-p7g8k" WorkloadEndpoint="localhost-k8s-calico--apiserver--c5dc87db9--p7g8k-eth0" Jul 10 00:18:06.124775 containerd[1618]: 2025-07-10 00:18:06.111 [INFO][4593] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" Namespace="calico-apiserver" Pod="calico-apiserver-c5dc87db9-p7g8k" WorkloadEndpoint="localhost-k8s-calico--apiserver--c5dc87db9--p7g8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c5dc87db9--p7g8k-eth0", GenerateName:"calico-apiserver-c5dc87db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb6624e5-1abf-4638-86f1-56b2a2c177a9", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5dc87db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f", Pod:"calico-apiserver-c5dc87db9-p7g8k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali012f719ac12", MAC:"96:25:9b:ad:e9:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:06.124775 containerd[1618]: 2025-07-10 00:18:06.119 [INFO][4593] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" Namespace="calico-apiserver" Pod="calico-apiserver-c5dc87db9-p7g8k" WorkloadEndpoint="localhost-k8s-calico--apiserver--c5dc87db9--p7g8k-eth0" Jul 10 00:18:06.139478 containerd[1618]: time="2025-07-10T00:18:06.139283995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ktmfr,Uid:06016ed6-c265-4984-9437-f30edfbeb627,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1\"" Jul 10 00:18:06.141798 containerd[1618]: time="2025-07-10T00:18:06.141739947Z" level=info msg="CreateContainer within sandbox \"a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:18:06.147876 containerd[1618]: time="2025-07-10T00:18:06.147839742Z" level=info msg="connecting to shim 8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f" address="unix:///run/containerd/s/b0dc2f3bcb1918ff8045a68f03f5edbbcda94b4d2a07cd56d5145618cddfbe9b" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:18:06.158460 containerd[1618]: time="2025-07-10T00:18:06.158380786Z" level=info msg="Container a4c8966df2e3df2a48084fe0b262bc924bef15ed5beed3d095f9f4b167fa3ad3: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:18:06.166000 containerd[1618]: time="2025-07-10T00:18:06.165976762Z" level=info msg="CreateContainer within sandbox \"a5f2c428575563d0d7424a97500d8e2369cf479584db9fdddae3c0157afd82a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a4c8966df2e3df2a48084fe0b262bc924bef15ed5beed3d095f9f4b167fa3ad3\"" Jul 10 00:18:06.168261 containerd[1618]: time="2025-07-10T00:18:06.168244437Z" level=info msg="StartContainer for \"a4c8966df2e3df2a48084fe0b262bc924bef15ed5beed3d095f9f4b167fa3ad3\"" Jul 10 00:18:06.169190 containerd[1618]: time="2025-07-10T00:18:06.169010117Z" level=info msg="connecting to shim 
a4c8966df2e3df2a48084fe0b262bc924bef15ed5beed3d095f9f4b167fa3ad3" address="unix:///run/containerd/s/f2f0458af4c0c44d4a1b78ca779b1a87e14d1cdab5bfeba6765f04307937961b" protocol=ttrpc version=3 Jul 10 00:18:06.170874 systemd[1]: Started cri-containerd-8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f.scope - libcontainer container 8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f. Jul 10 00:18:06.200121 systemd[1]: Started cri-containerd-a4c8966df2e3df2a48084fe0b262bc924bef15ed5beed3d095f9f4b167fa3ad3.scope - libcontainer container a4c8966df2e3df2a48084fe0b262bc924bef15ed5beed3d095f9f4b167fa3ad3. Jul 10 00:18:06.212538 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:18:06.232689 containerd[1618]: time="2025-07-10T00:18:06.232658734Z" level=info msg="StartContainer for \"a4c8966df2e3df2a48084fe0b262bc924bef15ed5beed3d095f9f4b167fa3ad3\" returns successfully" Jul 10 00:18:06.276307 containerd[1618]: time="2025-07-10T00:18:06.276230019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5dc87db9-p7g8k,Uid:fb6624e5-1abf-4638-86f1-56b2a2c177a9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f\"" Jul 10 00:18:06.482553 systemd-networkd[1540]: cali8afde4479f9: Gained IPv6LL Jul 10 00:18:06.521076 systemd-networkd[1540]: vxlan.calico: Link UP Jul 10 00:18:06.521082 systemd-networkd[1540]: vxlan.calico: Gained carrier Jul 10 00:18:06.546509 systemd-networkd[1540]: cali7eaf1a41d5e: Gained IPv6LL Jul 10 00:18:07.161082 kubelet[2914]: I0710 00:18:07.158341 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ktmfr" podStartSLOduration=38.158326924 podStartE2EDuration="38.158326924s" podCreationTimestamp="2025-07-10 00:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:18:07.147457017 +0000 UTC m=+42.310474022" watchObservedRunningTime="2025-07-10 00:18:07.158326924 +0000 UTC m=+42.321343924" Jul 10 00:18:07.634849 systemd-networkd[1540]: cali012f719ac12: Gained IPv6LL Jul 10 00:18:07.890570 systemd-networkd[1540]: cali0fdec8b0fe3: Gained IPv6LL Jul 10 00:18:07.907223 containerd[1618]: time="2025-07-10T00:18:07.907168217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6bccbf5d-bxtv5,Uid:e6130889-62e9-44d8-9fa0-8fe4c4301a67,Namespace:calico-system,Attempt:0,}" Jul 10 00:18:07.907223 containerd[1618]: time="2025-07-10T00:18:07.907194464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcf4cfc7f-7945z,Uid:4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:18:08.091824 systemd-networkd[1540]: cali8b0d9249008: Link UP Jul 10 00:18:08.092561 systemd-networkd[1540]: cali8b0d9249008: Gained carrier Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:07.949 [INFO][4918] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0 calico-apiserver-5bcf4cfc7f- calico-apiserver 4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9 841 0 2025-07-10 00:17:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bcf4cfc7f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bcf4cfc7f-7945z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8b0d9249008 [] [] }} ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Namespace="calico-apiserver" Pod="calico-apiserver-5bcf4cfc7f-7945z" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-" Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:07.950 [INFO][4918] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Namespace="calico-apiserver" Pod="calico-apiserver-5bcf4cfc7f-7945z" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0" Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.026 [INFO][4941] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" HandleID="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0" Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.026 [INFO][4941] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" HandleID="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5bcf4cfc7f-7945z", "timestamp":"2025-07-10 00:18:08.026028426 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.026 [INFO][4941] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.026 [INFO][4941] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.026 [INFO][4941] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.059 [INFO][4941] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" host="localhost" Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.062 [INFO][4941] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.067 [INFO][4941] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.068 [INFO][4941] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.069 [INFO][4941] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.069 [INFO][4941] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" host="localhost" Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.070 [INFO][4941] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.075 [INFO][4941] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" host="localhost" Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.085 [INFO][4941] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" host="localhost" Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.085 [INFO][4941] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" host="localhost" Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.085 [INFO][4941] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:18:08.126432 containerd[1618]: 2025-07-10 00:18:08.085 [INFO][4941] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" HandleID="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0" Jul 10 00:18:08.157066 containerd[1618]: 2025-07-10 00:18:08.089 [INFO][4918] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Namespace="calico-apiserver" Pod="calico-apiserver-5bcf4cfc7f-7945z" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0", GenerateName:"calico-apiserver-5bcf4cfc7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcf4cfc7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bcf4cfc7f-7945z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b0d9249008", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:08.157066 containerd[1618]: 2025-07-10 00:18:08.090 [INFO][4918] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Namespace="calico-apiserver" Pod="calico-apiserver-5bcf4cfc7f-7945z" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0" Jul 10 00:18:08.157066 containerd[1618]: 2025-07-10 00:18:08.090 [INFO][4918] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b0d9249008 ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Namespace="calico-apiserver" Pod="calico-apiserver-5bcf4cfc7f-7945z" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0" Jul 10 00:18:08.157066 containerd[1618]: 2025-07-10 00:18:08.091 [INFO][4918] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Namespace="calico-apiserver" Pod="calico-apiserver-5bcf4cfc7f-7945z" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0" Jul 10 00:18:08.157066 containerd[1618]: 2025-07-10 00:18:08.091 [INFO][4918] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Namespace="calico-apiserver" Pod="calico-apiserver-5bcf4cfc7f-7945z" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0", GenerateName:"calico-apiserver-5bcf4cfc7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcf4cfc7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b", Pod:"calico-apiserver-5bcf4cfc7f-7945z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b0d9249008", MAC:"16:f9:f4:b0:a1:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:08.157066 containerd[1618]: 2025-07-10 00:18:08.125 [INFO][4918] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Namespace="calico-apiserver" Pod="calico-apiserver-5bcf4cfc7f-7945z" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0" Jul 10 00:18:08.146642 systemd-networkd[1540]: vxlan.calico: Gained IPv6LL Jul 10 00:18:08.213965 containerd[1618]: time="2025-07-10T00:18:08.213916453Z" level=info msg="connecting to shim 91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" address="unix:///run/containerd/s/3d7ee7fcdff4e1806612565dfc1cbb02188ccaf94f8f10a202379db7e57fc891" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:18:08.239900 systemd-networkd[1540]: calic4476c82408: Link UP Jul 10 00:18:08.241231 systemd-networkd[1540]: calic4476c82408: Gained carrier Jul 10 00:18:08.245548 systemd[1]: Started cri-containerd-91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b.scope - libcontainer container 91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b. Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:07.949 [INFO][4916] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c6bccbf5d--bxtv5-eth0 calico-kube-controllers-c6bccbf5d- calico-system e6130889-62e9-44d8-9fa0-8fe4c4301a67 833 0 2025-07-10 00:17:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c6bccbf5d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c6bccbf5d-bxtv5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic4476c82408 [] [] }} ContainerID="aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" Namespace="calico-system" Pod="calico-kube-controllers-c6bccbf5d-bxtv5" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6bccbf5d--bxtv5-" Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:07.949 [INFO][4916] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" Namespace="calico-system" Pod="calico-kube-controllers-c6bccbf5d-bxtv5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6bccbf5d--bxtv5-eth0" Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.026 [INFO][4943] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" HandleID="k8s-pod-network.aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" Workload="localhost-k8s-calico--kube--controllers--c6bccbf5d--bxtv5-eth0" Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.026 [INFO][4943] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" HandleID="k8s-pod-network.aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" Workload="localhost-k8s-calico--kube--controllers--c6bccbf5d--bxtv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c6bccbf5d-bxtv5", "timestamp":"2025-07-10 00:18:08.026028926 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.026 [INFO][4943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.085 [INFO][4943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.086 [INFO][4943] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.160 [INFO][4943] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" host="localhost" Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.162 [INFO][4943] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.182 [INFO][4943] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.184 [INFO][4943] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.185 [INFO][4943] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.186 [INFO][4943] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" host="localhost" Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.186 [INFO][4943] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637 Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.193 [INFO][4943] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" host="localhost" Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.225 [INFO][4943] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" host="localhost" Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.226 [INFO][4943] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" host="localhost" Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.226 [INFO][4943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:18:08.259373 containerd[1618]: 2025-07-10 00:18:08.226 [INFO][4943] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" HandleID="k8s-pod-network.aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" Workload="localhost-k8s-calico--kube--controllers--c6bccbf5d--bxtv5-eth0" Jul 10 00:18:08.260052 containerd[1618]: 2025-07-10 00:18:08.229 [INFO][4916] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" Namespace="calico-system" Pod="calico-kube-controllers-c6bccbf5d-bxtv5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6bccbf5d--bxtv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c6bccbf5d--bxtv5-eth0", GenerateName:"calico-kube-controllers-c6bccbf5d-", Namespace:"calico-system", SelfLink:"", UID:"e6130889-62e9-44d8-9fa0-8fe4c4301a67", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c6bccbf5d", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c6bccbf5d-bxtv5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic4476c82408", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:08.260052 containerd[1618]: 2025-07-10 00:18:08.229 [INFO][4916] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" Namespace="calico-system" Pod="calico-kube-controllers-c6bccbf5d-bxtv5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6bccbf5d--bxtv5-eth0" Jul 10 00:18:08.260052 containerd[1618]: 2025-07-10 00:18:08.229 [INFO][4916] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4476c82408 ContainerID="aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" Namespace="calico-system" Pod="calico-kube-controllers-c6bccbf5d-bxtv5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6bccbf5d--bxtv5-eth0" Jul 10 00:18:08.260052 containerd[1618]: 2025-07-10 00:18:08.240 [INFO][4916] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" Namespace="calico-system" Pod="calico-kube-controllers-c6bccbf5d-bxtv5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6bccbf5d--bxtv5-eth0" Jul 10 00:18:08.260052 containerd[1618]: 2025-07-10 
00:18:08.244 [INFO][4916] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" Namespace="calico-system" Pod="calico-kube-controllers-c6bccbf5d-bxtv5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6bccbf5d--bxtv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c6bccbf5d--bxtv5-eth0", GenerateName:"calico-kube-controllers-c6bccbf5d-", Namespace:"calico-system", SelfLink:"", UID:"e6130889-62e9-44d8-9fa0-8fe4c4301a67", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c6bccbf5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637", Pod:"calico-kube-controllers-c6bccbf5d-bxtv5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic4476c82408", MAC:"be:5a:39:ca:db:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:08.260052 containerd[1618]: 2025-07-10 
00:18:08.256 [INFO][4916] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" Namespace="calico-system" Pod="calico-kube-controllers-c6bccbf5d-bxtv5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6bccbf5d--bxtv5-eth0" Jul 10 00:18:08.291123 containerd[1618]: time="2025-07-10T00:18:08.291100718Z" level=info msg="connecting to shim aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637" address="unix:///run/containerd/s/370432f42787bf4c4a6a749666073d8bf799ba50b6621d845ed816296bd147b2" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:18:08.309665 systemd[1]: Started cri-containerd-aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637.scope - libcontainer container aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637. Jul 10 00:18:08.328538 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:18:08.329638 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:18:08.373031 containerd[1618]: time="2025-07-10T00:18:08.372991213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6bccbf5d-bxtv5,Uid:e6130889-62e9-44d8-9fa0-8fe4c4301a67,Namespace:calico-system,Attempt:0,} returns sandbox id \"aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637\"" Jul 10 00:18:08.375589 containerd[1618]: time="2025-07-10T00:18:08.375567521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcf4cfc7f-7945z,Uid:4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b\"" Jul 10 00:18:08.910697 containerd[1618]: time="2025-07-10T00:18:08.910508135Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-pblgr,Uid:85f38ee8-8868-4fd3-804f-d03ca4c958a9,Namespace:calico-system,Attempt:0,}" Jul 10 00:18:09.021300 systemd-networkd[1540]: cali8f6cfcb90d4: Link UP Jul 10 00:18:09.024156 systemd-networkd[1540]: cali8f6cfcb90d4: Gained carrier Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:08.936 [INFO][5068] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pblgr-eth0 csi-node-driver- calico-system 85f38ee8-8868-4fd3-804f-d03ca4c958a9 726 0 2025-07-10 00:17:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pblgr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8f6cfcb90d4 [] [] }} ContainerID="dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" Namespace="calico-system" Pod="csi-node-driver-pblgr" WorkloadEndpoint="localhost-k8s-csi--node--driver--pblgr-" Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:08.937 [INFO][5068] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" Namespace="calico-system" Pod="csi-node-driver-pblgr" WorkloadEndpoint="localhost-k8s-csi--node--driver--pblgr-eth0" Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:08.964 [INFO][5079] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" HandleID="k8s-pod-network.dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" Workload="localhost-k8s-csi--node--driver--pblgr-eth0" Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 
00:18:08.965 [INFO][5079] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" HandleID="k8s-pod-network.dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" Workload="localhost-k8s-csi--node--driver--pblgr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pblgr", "timestamp":"2025-07-10 00:18:08.964925324 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:08.965 [INFO][5079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:08.965 [INFO][5079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:08.965 [INFO][5079] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:08.970 [INFO][5079] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" host="localhost" Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:08.974 [INFO][5079] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:08.979 [INFO][5079] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:08.980 [INFO][5079] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:08.982 [INFO][5079] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:08.982 [INFO][5079] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" host="localhost" Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:08.984 [INFO][5079] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:08.995 [INFO][5079] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" host="localhost" Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:09.010 [INFO][5079] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" host="localhost" Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:09.010 [INFO][5079] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" host="localhost" Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:09.010 [INFO][5079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:18:09.043161 containerd[1618]: 2025-07-10 00:18:09.010 [INFO][5079] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" HandleID="k8s-pod-network.dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" Workload="localhost-k8s-csi--node--driver--pblgr-eth0" Jul 10 00:18:09.045254 containerd[1618]: 2025-07-10 00:18:09.016 [INFO][5068] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" Namespace="calico-system" Pod="csi-node-driver-pblgr" WorkloadEndpoint="localhost-k8s-csi--node--driver--pblgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pblgr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85f38ee8-8868-4fd3-804f-d03ca4c958a9", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pblgr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f6cfcb90d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:09.045254 containerd[1618]: 2025-07-10 00:18:09.016 [INFO][5068] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" Namespace="calico-system" Pod="csi-node-driver-pblgr" WorkloadEndpoint="localhost-k8s-csi--node--driver--pblgr-eth0" Jul 10 00:18:09.045254 containerd[1618]: 2025-07-10 00:18:09.016 [INFO][5068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f6cfcb90d4 ContainerID="dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" Namespace="calico-system" Pod="csi-node-driver-pblgr" WorkloadEndpoint="localhost-k8s-csi--node--driver--pblgr-eth0" Jul 10 00:18:09.045254 containerd[1618]: 2025-07-10 00:18:09.024 [INFO][5068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" Namespace="calico-system" Pod="csi-node-driver-pblgr" WorkloadEndpoint="localhost-k8s-csi--node--driver--pblgr-eth0" Jul 10 00:18:09.045254 containerd[1618]: 2025-07-10 00:18:09.025 [INFO][5068] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" 
Namespace="calico-system" Pod="csi-node-driver-pblgr" WorkloadEndpoint="localhost-k8s-csi--node--driver--pblgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pblgr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85f38ee8-8868-4fd3-804f-d03ca4c958a9", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd", Pod:"csi-node-driver-pblgr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f6cfcb90d4", MAC:"12:cb:9c:fb:b0:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:09.045254 containerd[1618]: 2025-07-10 00:18:09.038 [INFO][5068] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" Namespace="calico-system" Pod="csi-node-driver-pblgr" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--pblgr-eth0" Jul 10 00:18:09.080427 containerd[1618]: time="2025-07-10T00:18:09.080387865Z" level=info msg="connecting to shim dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd" address="unix:///run/containerd/s/ae420f981f3e116d3d8ac385ed1101ddd57aef4eb619756f3301c7af673c5277" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:18:09.106595 systemd[1]: Started cri-containerd-dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd.scope - libcontainer container dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd. Jul 10 00:18:09.115345 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:18:09.358190 containerd[1618]: time="2025-07-10T00:18:09.358085812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pblgr,Uid:85f38ee8-8868-4fd3-804f-d03ca4c958a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd\"" Jul 10 00:18:09.427004 systemd-networkd[1540]: calic4476c82408: Gained IPv6LL Jul 10 00:18:09.911849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3406984974.mount: Deactivated successfully. 
Jul 10 00:18:09.938601 systemd-networkd[1540]: cali8b0d9249008: Gained IPv6LL Jul 10 00:18:10.594747 containerd[1618]: time="2025-07-10T00:18:10.594678930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 10 00:18:10.677718 containerd[1618]: time="2025-07-10T00:18:10.676628418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:10.678689 containerd[1618]: time="2025-07-10T00:18:10.678673137Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.38048301s" Jul 10 00:18:10.678754 containerd[1618]: time="2025-07-10T00:18:10.678743940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 10 00:18:10.684705 containerd[1618]: time="2025-07-10T00:18:10.684683681Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:10.685245 containerd[1618]: time="2025-07-10T00:18:10.685159861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:10.689532 containerd[1618]: time="2025-07-10T00:18:10.689503273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:18:10.691159 containerd[1618]: time="2025-07-10T00:18:10.691000846Z" level=info msg="CreateContainer 
within sandbox \"0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 00:18:10.728617 containerd[1618]: time="2025-07-10T00:18:10.728491948Z" level=info msg="Container c59a11653b1546853044070723b1ff50bf125dd0bebea98461c259ebc06f5556: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:18:10.754739 containerd[1618]: time="2025-07-10T00:18:10.754716611Z" level=info msg="CreateContainer within sandbox \"0dfb8f49a88175836260f32c9f9dfaed6189568e9174969f94c98cd8007599c0\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c59a11653b1546853044070723b1ff50bf125dd0bebea98461c259ebc06f5556\"" Jul 10 00:18:10.755261 containerd[1618]: time="2025-07-10T00:18:10.755229743Z" level=info msg="StartContainer for \"c59a11653b1546853044070723b1ff50bf125dd0bebea98461c259ebc06f5556\"" Jul 10 00:18:10.759836 containerd[1618]: time="2025-07-10T00:18:10.759808344Z" level=info msg="connecting to shim c59a11653b1546853044070723b1ff50bf125dd0bebea98461c259ebc06f5556" address="unix:///run/containerd/s/e607c6f66dcb9e3917fe54083ede68a67a53f7075210781ff80f927f86f50f80" protocol=ttrpc version=3 Jul 10 00:18:10.805531 systemd[1]: Started cri-containerd-c59a11653b1546853044070723b1ff50bf125dd0bebea98461c259ebc06f5556.scope - libcontainer container c59a11653b1546853044070723b1ff50bf125dd0bebea98461c259ebc06f5556. 
Jul 10 00:18:10.838267 containerd[1618]: time="2025-07-10T00:18:10.838236990Z" level=info msg="StartContainer for \"c59a11653b1546853044070723b1ff50bf125dd0bebea98461c259ebc06f5556\" returns successfully" Jul 10 00:18:10.898548 systemd-networkd[1540]: cali8f6cfcb90d4: Gained IPv6LL Jul 10 00:18:11.140742 kubelet[2914]: I0710 00:18:11.140660 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-lwqb8" podStartSLOduration=25.734347233 podStartE2EDuration="31.140629139s" podCreationTimestamp="2025-07-10 00:17:40 +0000 UTC" firstStartedPulling="2025-07-10 00:18:05.27868762 +0000 UTC m=+40.441704615" lastFinishedPulling="2025-07-10 00:18:10.684969525 +0000 UTC m=+45.847986521" observedRunningTime="2025-07-10 00:18:11.139977115 +0000 UTC m=+46.302994118" watchObservedRunningTime="2025-07-10 00:18:11.140629139 +0000 UTC m=+46.303646143" Jul 10 00:18:12.148005 kubelet[2914]: I0710 00:18:12.147700 2914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:18:14.641709 containerd[1618]: time="2025-07-10T00:18:14.641531923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:14.663819 containerd[1618]: time="2025-07-10T00:18:14.663786983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 10 00:18:14.683431 containerd[1618]: time="2025-07-10T00:18:14.683401368Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:14.707955 containerd[1618]: time="2025-07-10T00:18:14.707901497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jul 10 00:18:14.709052 containerd[1618]: time="2025-07-10T00:18:14.708969900Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.019368223s" Jul 10 00:18:14.709052 containerd[1618]: time="2025-07-10T00:18:14.708994034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 00:18:14.717851 containerd[1618]: time="2025-07-10T00:18:14.709830119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:18:14.727893 containerd[1618]: time="2025-07-10T00:18:14.727847310Z" level=info msg="CreateContainer within sandbox \"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:18:14.856170 containerd[1618]: time="2025-07-10T00:18:14.854187514Z" level=info msg="Container b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:18:15.011729 containerd[1618]: time="2025-07-10T00:18:15.011649920Z" level=info msg="CreateContainer within sandbox \"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85\"" Jul 10 00:18:15.016235 containerd[1618]: time="2025-07-10T00:18:15.012243499Z" level=info msg="StartContainer for \"b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85\"" Jul 10 00:18:15.033748 containerd[1618]: time="2025-07-10T00:18:15.033723444Z" level=info msg="connecting to shim 
b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85" address="unix:///run/containerd/s/ad3c7cb17ce6032394c084283d44fcffa681cbe7ff00bf7378c1d6c6e203a014" protocol=ttrpc version=3 Jul 10 00:18:15.086875 systemd[1]: Started cri-containerd-b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85.scope - libcontainer container b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85. Jul 10 00:18:15.152753 containerd[1618]: time="2025-07-10T00:18:15.152720790Z" level=info msg="StartContainer for \"b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85\" returns successfully" Jul 10 00:18:15.196288 containerd[1618]: time="2025-07-10T00:18:15.196061316Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:15.197527 containerd[1618]: time="2025-07-10T00:18:15.196557735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 10 00:18:15.200455 containerd[1618]: time="2025-07-10T00:18:15.199811842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 489.962911ms" Jul 10 00:18:15.200521 containerd[1618]: time="2025-07-10T00:18:15.200512215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 00:18:15.202559 containerd[1618]: time="2025-07-10T00:18:15.202544572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 00:18:15.204983 containerd[1618]: time="2025-07-10T00:18:15.204962554Z" level=info 
msg="CreateContainer within sandbox \"8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:18:15.213946 containerd[1618]: time="2025-07-10T00:18:15.213523920Z" level=info msg="Container efa26254b4f9e739b5817dc38792199724e2efc4cc26b627c1323479fb9834ef: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:18:15.227658 containerd[1618]: time="2025-07-10T00:18:15.227564500Z" level=info msg="CreateContainer within sandbox \"8f8c6b0f0116fb8500ab97cbc98e651bc7bbc47af88edaaa6aa72eb94312fd9f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"efa26254b4f9e739b5817dc38792199724e2efc4cc26b627c1323479fb9834ef\"" Jul 10 00:18:15.230309 containerd[1618]: time="2025-07-10T00:18:15.230291225Z" level=info msg="StartContainer for \"efa26254b4f9e739b5817dc38792199724e2efc4cc26b627c1323479fb9834ef\"" Jul 10 00:18:15.232596 containerd[1618]: time="2025-07-10T00:18:15.232570738Z" level=info msg="connecting to shim efa26254b4f9e739b5817dc38792199724e2efc4cc26b627c1323479fb9834ef" address="unix:///run/containerd/s/b0dc2f3bcb1918ff8045a68f03f5edbbcda94b4d2a07cd56d5145618cddfbe9b" protocol=ttrpc version=3 Jul 10 00:18:15.255523 systemd[1]: Started cri-containerd-efa26254b4f9e739b5817dc38792199724e2efc4cc26b627c1323479fb9834ef.scope - libcontainer container efa26254b4f9e739b5817dc38792199724e2efc4cc26b627c1323479fb9834ef. 
Jul 10 00:18:15.340205 containerd[1618]: time="2025-07-10T00:18:15.339807533Z" level=info msg="StartContainer for \"efa26254b4f9e739b5817dc38792199724e2efc4cc26b627c1323479fb9834ef\" returns successfully" Jul 10 00:18:16.402461 kubelet[2914]: I0710 00:18:16.402054 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c5dc87db9-p7g8k" podStartSLOduration=28.478189791 podStartE2EDuration="37.402036386s" podCreationTimestamp="2025-07-10 00:17:39 +0000 UTC" firstStartedPulling="2025-07-10 00:18:06.27760861 +0000 UTC m=+41.440625606" lastFinishedPulling="2025-07-10 00:18:15.201455205 +0000 UTC m=+50.364472201" observedRunningTime="2025-07-10 00:18:16.341381042 +0000 UTC m=+51.504398051" watchObservedRunningTime="2025-07-10 00:18:16.402036386 +0000 UTC m=+51.565053386" Jul 10 00:18:16.405536 kubelet[2914]: I0710 00:18:16.403296 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bcf4cfc7f-j2vlz" podStartSLOduration=28.992069159 podStartE2EDuration="38.403287798s" podCreationTimestamp="2025-07-10 00:17:38 +0000 UTC" firstStartedPulling="2025-07-10 00:18:05.298465899 +0000 UTC m=+40.461482895" lastFinishedPulling="2025-07-10 00:18:14.709684531 +0000 UTC m=+49.872701534" observedRunningTime="2025-07-10 00:18:16.358378331 +0000 UTC m=+51.521395337" watchObservedRunningTime="2025-07-10 00:18:16.403287798 +0000 UTC m=+51.566304798" Jul 10 00:18:17.300501 kubelet[2914]: I0710 00:18:17.300480 2914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:18:19.389968 kubelet[2914]: I0710 00:18:19.389940 2914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:18:21.014562 containerd[1618]: time="2025-07-10T00:18:21.014500131Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c59a11653b1546853044070723b1ff50bf125dd0bebea98461c259ebc06f5556\" 
id:\"534abd08f8759e0171c4d2c61db63042a29e1d366ff9b6e7bb3b6135f5fa0f50\" pid:5299 exited_at:{seconds:1752106700 nanos:984497714}" Jul 10 00:18:21.154461 containerd[1618]: time="2025-07-10T00:18:21.154389740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:21.183118 containerd[1618]: time="2025-07-10T00:18:21.183094642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 10 00:18:21.193406 containerd[1618]: time="2025-07-10T00:18:21.193379765Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:21.212644 containerd[1618]: time="2025-07-10T00:18:21.212607536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 6.01004453s" Jul 10 00:18:21.212644 containerd[1618]: time="2025-07-10T00:18:21.212642770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 10 00:18:21.219373 containerd[1618]: time="2025-07-10T00:18:21.213772995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:21.231277 containerd[1618]: time="2025-07-10T00:18:21.231078732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 
10 00:18:21.353483 containerd[1618]: time="2025-07-10T00:18:21.353308956Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c59a11653b1546853044070723b1ff50bf125dd0bebea98461c259ebc06f5556\" id:\"324295972eb50861302fdde4e29a9baccbc9b704d7247066b38527509fa5b7c3\" pid:5326 exited_at:{seconds:1752106701 nanos:353120856}" Jul 10 00:18:21.504313 containerd[1618]: time="2025-07-10T00:18:21.503949697Z" level=info msg="CreateContainer within sandbox \"aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 00:18:21.510415 containerd[1618]: time="2025-07-10T00:18:21.509602955Z" level=info msg="Container c2b99fa1f3c38d467cd4b789d59c7a02b5e615ac2a7bf8fd24e88381f64bbfd1: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:18:21.512997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1872403416.mount: Deactivated successfully. Jul 10 00:18:21.555967 containerd[1618]: time="2025-07-10T00:18:21.555937679Z" level=info msg="CreateContainer within sandbox \"aa8a9e5e69d0656f83dd9acafb5d07798cfc0da9078ee2c91fba22d51b6d2637\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c2b99fa1f3c38d467cd4b789d59c7a02b5e615ac2a7bf8fd24e88381f64bbfd1\"" Jul 10 00:18:21.556659 containerd[1618]: time="2025-07-10T00:18:21.556466300Z" level=info msg="StartContainer for \"c2b99fa1f3c38d467cd4b789d59c7a02b5e615ac2a7bf8fd24e88381f64bbfd1\"" Jul 10 00:18:21.573345 containerd[1618]: time="2025-07-10T00:18:21.573322961Z" level=info msg="connecting to shim c2b99fa1f3c38d467cd4b789d59c7a02b5e615ac2a7bf8fd24e88381f64bbfd1" address="unix:///run/containerd/s/370432f42787bf4c4a6a749666073d8bf799ba50b6621d845ed816296bd147b2" protocol=ttrpc version=3 Jul 10 00:18:21.634583 systemd[1]: Started cri-containerd-c2b99fa1f3c38d467cd4b789d59c7a02b5e615ac2a7bf8fd24e88381f64bbfd1.scope - libcontainer container 
c2b99fa1f3c38d467cd4b789d59c7a02b5e615ac2a7bf8fd24e88381f64bbfd1. Jul 10 00:18:21.677668 containerd[1618]: time="2025-07-10T00:18:21.677641638Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:21.681002 containerd[1618]: time="2025-07-10T00:18:21.680984107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 10 00:18:21.687304 containerd[1618]: time="2025-07-10T00:18:21.687253202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 456.150178ms" Jul 10 00:18:21.687304 containerd[1618]: time="2025-07-10T00:18:21.687282432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 00:18:21.719013 containerd[1618]: time="2025-07-10T00:18:21.718986285Z" level=info msg="StartContainer for \"c2b99fa1f3c38d467cd4b789d59c7a02b5e615ac2a7bf8fd24e88381f64bbfd1\" returns successfully" Jul 10 00:18:21.742797 containerd[1618]: time="2025-07-10T00:18:21.742774208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 00:18:21.744173 containerd[1618]: time="2025-07-10T00:18:21.744142476Z" level=info msg="CreateContainer within sandbox \"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:18:21.765625 containerd[1618]: time="2025-07-10T00:18:21.765593263Z" level=info msg="Container e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20: CDI devices from CRI 
Config.CDIDevices: []" Jul 10 00:18:21.813916 containerd[1618]: time="2025-07-10T00:18:21.813848596Z" level=info msg="CreateContainer within sandbox \"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20\"" Jul 10 00:18:21.814527 containerd[1618]: time="2025-07-10T00:18:21.814431584Z" level=info msg="StartContainer for \"e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20\"" Jul 10 00:18:21.815446 containerd[1618]: time="2025-07-10T00:18:21.815402982Z" level=info msg="connecting to shim e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20" address="unix:///run/containerd/s/3d7ee7fcdff4e1806612565dfc1cbb02188ccaf94f8f10a202379db7e57fc891" protocol=ttrpc version=3 Jul 10 00:18:21.834633 systemd[1]: Started cri-containerd-e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20.scope - libcontainer container e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20. 
Jul 10 00:18:21.886706 containerd[1618]: time="2025-07-10T00:18:21.886431242Z" level=info msg="StartContainer for \"e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20\" returns successfully" Jul 10 00:18:22.402650 kubelet[2914]: I0710 00:18:22.397905 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c6bccbf5d-bxtv5" podStartSLOduration=28.555719 podStartE2EDuration="41.397886972s" podCreationTimestamp="2025-07-10 00:17:41 +0000 UTC" firstStartedPulling="2025-07-10 00:18:08.373888039 +0000 UTC m=+43.536905037" lastFinishedPulling="2025-07-10 00:18:21.216056012 +0000 UTC m=+56.379073009" observedRunningTime="2025-07-10 00:18:22.39703513 +0000 UTC m=+57.560052136" watchObservedRunningTime="2025-07-10 00:18:22.397886972 +0000 UTC m=+57.560903970" Jul 10 00:18:22.406576 kubelet[2914]: I0710 00:18:22.406337 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bcf4cfc7f-7945z" podStartSLOduration=31.045816975 podStartE2EDuration="44.40632571s" podCreationTimestamp="2025-07-10 00:17:38 +0000 UTC" firstStartedPulling="2025-07-10 00:18:08.377046807 +0000 UTC m=+43.540063805" lastFinishedPulling="2025-07-10 00:18:21.737555542 +0000 UTC m=+56.900572540" observedRunningTime="2025-07-10 00:18:22.405682535 +0000 UTC m=+57.568699545" watchObservedRunningTime="2025-07-10 00:18:22.40632571 +0000 UTC m=+57.569342715" Jul 10 00:18:22.465893 containerd[1618]: time="2025-07-10T00:18:22.465863351Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2b99fa1f3c38d467cd4b789d59c7a02b5e615ac2a7bf8fd24e88381f64bbfd1\" id:\"bb128f1131db137b36ca29c093ad8152121fa612da174603de56940b01406594\" pid:5446 exited_at:{seconds:1752106702 nanos:461054463}" Jul 10 00:18:23.635012 kubelet[2914]: I0710 00:18:23.634891 2914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:18:24.201449 containerd[1618]: 
time="2025-07-10T00:18:24.201407460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:24.202358 containerd[1618]: time="2025-07-10T00:18:24.202341631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 10 00:18:24.202872 containerd[1618]: time="2025-07-10T00:18:24.202665780Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:24.212745 containerd[1618]: time="2025-07-10T00:18:24.212728064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:24.212934 containerd[1618]: time="2025-07-10T00:18:24.212914676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.469924064s" Jul 10 00:18:24.213496 containerd[1618]: time="2025-07-10T00:18:24.212934657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 10 00:18:24.221084 containerd[1618]: time="2025-07-10T00:18:24.221031726Z" level=info msg="CreateContainer within sandbox \"dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 00:18:24.236619 containerd[1618]: time="2025-07-10T00:18:24.236593295Z" level=info msg="Container 
08757d145a3fb1095ea640bcf147c293e6bf839d982d8de19e3426be1c43fe2f: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:18:24.239695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount252187495.mount: Deactivated successfully. Jul 10 00:18:24.251388 containerd[1618]: time="2025-07-10T00:18:24.251356930Z" level=info msg="CreateContainer within sandbox \"dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"08757d145a3fb1095ea640bcf147c293e6bf839d982d8de19e3426be1c43fe2f\"" Jul 10 00:18:24.261048 containerd[1618]: time="2025-07-10T00:18:24.251797969Z" level=info msg="StartContainer for \"08757d145a3fb1095ea640bcf147c293e6bf839d982d8de19e3426be1c43fe2f\"" Jul 10 00:18:24.261048 containerd[1618]: time="2025-07-10T00:18:24.252990627Z" level=info msg="connecting to shim 08757d145a3fb1095ea640bcf147c293e6bf839d982d8de19e3426be1c43fe2f" address="unix:///run/containerd/s/ae420f981f3e116d3d8ac385ed1101ddd57aef4eb619756f3301c7af673c5277" protocol=ttrpc version=3 Jul 10 00:18:24.271557 systemd[1]: Started cri-containerd-08757d145a3fb1095ea640bcf147c293e6bf839d982d8de19e3426be1c43fe2f.scope - libcontainer container 08757d145a3fb1095ea640bcf147c293e6bf839d982d8de19e3426be1c43fe2f. 
Jul 10 00:18:24.301210 containerd[1618]: time="2025-07-10T00:18:24.301145227Z" level=info msg="StartContainer for \"08757d145a3fb1095ea640bcf147c293e6bf839d982d8de19e3426be1c43fe2f\" returns successfully" Jul 10 00:18:24.317378 containerd[1618]: time="2025-07-10T00:18:24.317357948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 00:18:26.418416 containerd[1618]: time="2025-07-10T00:18:26.418276457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:26.425905 containerd[1618]: time="2025-07-10T00:18:26.425824284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 10 00:18:26.435201 containerd[1618]: time="2025-07-10T00:18:26.434986563Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:26.438234 containerd[1618]: time="2025-07-10T00:18:26.438217765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:18:26.445024 containerd[1618]: time="2025-07-10T00:18:26.439110799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.12164245s" Jul 10 00:18:26.445024 containerd[1618]: time="2025-07-10T00:18:26.439138806Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 10 00:18:26.445024 containerd[1618]: time="2025-07-10T00:18:26.440523039Z" level=info msg="CreateContainer within sandbox \"dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 00:18:26.509035 containerd[1618]: time="2025-07-10T00:18:26.509004023Z" level=info msg="Container 1e3464a498ce3143d88827f41b1eaf7ed58587f7ffb91bfc8ab9f0d6ec8e7576: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:18:26.546185 containerd[1618]: time="2025-07-10T00:18:26.546140089Z" level=info msg="CreateContainer within sandbox \"dcc5fac1b4a15f61b1d2042e2e5d5fb726094045f676be554d980b21509d07fd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1e3464a498ce3143d88827f41b1eaf7ed58587f7ffb91bfc8ab9f0d6ec8e7576\"" Jul 10 00:18:26.547210 containerd[1618]: time="2025-07-10T00:18:26.546702254Z" level=info msg="StartContainer for \"1e3464a498ce3143d88827f41b1eaf7ed58587f7ffb91bfc8ab9f0d6ec8e7576\"" Jul 10 00:18:26.548354 containerd[1618]: time="2025-07-10T00:18:26.548297562Z" level=info msg="connecting to shim 1e3464a498ce3143d88827f41b1eaf7ed58587f7ffb91bfc8ab9f0d6ec8e7576" address="unix:///run/containerd/s/ae420f981f3e116d3d8ac385ed1101ddd57aef4eb619756f3301c7af673c5277" protocol=ttrpc version=3 Jul 10 00:18:26.602594 systemd[1]: Started cri-containerd-1e3464a498ce3143d88827f41b1eaf7ed58587f7ffb91bfc8ab9f0d6ec8e7576.scope - libcontainer container 1e3464a498ce3143d88827f41b1eaf7ed58587f7ffb91bfc8ab9f0d6ec8e7576. 
Jul 10 00:18:26.635248 containerd[1618]: time="2025-07-10T00:18:26.635217856Z" level=info msg="StartContainer for \"1e3464a498ce3143d88827f41b1eaf7ed58587f7ffb91bfc8ab9f0d6ec8e7576\" returns successfully" Jul 10 00:18:26.791309 kubelet[2914]: I0710 00:18:26.786255 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pblgr" podStartSLOduration=28.705271339 podStartE2EDuration="45.783531917s" podCreationTimestamp="2025-07-10 00:17:41 +0000 UTC" firstStartedPulling="2025-07-10 00:18:09.361421997 +0000 UTC m=+44.524438992" lastFinishedPulling="2025-07-10 00:18:26.439682566 +0000 UTC m=+61.602699570" observedRunningTime="2025-07-10 00:18:26.77058249 +0000 UTC m=+61.933599496" watchObservedRunningTime="2025-07-10 00:18:26.783531917 +0000 UTC m=+61.946548916" Jul 10 00:18:27.323587 kubelet[2914]: I0710 00:18:27.318088 2914 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 00:18:27.328646 kubelet[2914]: I0710 00:18:27.328605 2914 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 00:18:29.517662 containerd[1618]: time="2025-07-10T00:18:29.517632088Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f90873fd3cb87b8aa9a96512b14a2ac653258afd1cbd00a321ca809106cb6ae1\" id:\"2d2025f72931bf4b5a139854271281580e2bb27a78d7b5f03c809700a8fff81c\" pid:5565 exited_at:{seconds:1752106709 nanos:465952221}" Jul 10 00:18:29.633033 containerd[1618]: time="2025-07-10T00:18:29.633005853Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f90873fd3cb87b8aa9a96512b14a2ac653258afd1cbd00a321ca809106cb6ae1\" id:\"9589989d474a4280682f71108bf54bf8af7f7b08bac47861c20ad7f15f5ea05e\" pid:5588 exited_at:{seconds:1752106709 nanos:632727836}" Jul 10 00:18:30.582186 kubelet[2914]: I0710 
00:18:30.582030 2914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:18:30.662934 kubelet[2914]: I0710 00:18:30.662619 2914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:18:30.732582 containerd[1618]: time="2025-07-10T00:18:30.732475581Z" level=info msg="StopContainer for \"e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20\" with timeout 30 (s)" Jul 10 00:18:30.744387 containerd[1618]: time="2025-07-10T00:18:30.744316931Z" level=info msg="Stop container \"e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20\" with signal terminated" Jul 10 00:18:30.759432 systemd[1]: cri-containerd-e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20.scope: Deactivated successfully. Jul 10 00:18:30.760155 systemd[1]: cri-containerd-e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20.scope: Consumed 612ms CPU time, 49.8M memory peak, 3.3M read from disk. Jul 10 00:18:30.767542 containerd[1618]: time="2025-07-10T00:18:30.763743807Z" level=info msg="received exit event container_id:\"e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20\" id:\"e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20\" pid:5409 exit_status:1 exited_at:{seconds:1752106710 nanos:763530137}" Jul 10 00:18:30.767542 containerd[1618]: time="2025-07-10T00:18:30.763895508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20\" id:\"e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20\" pid:5409 exit_status:1 exited_at:{seconds:1752106710 nanos:763530137}" Jul 10 00:18:30.807847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20-rootfs.mount: Deactivated successfully. 
Jul 10 00:18:30.824749 containerd[1618]: time="2025-07-10T00:18:30.824498967Z" level=info msg="StopContainer for \"e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20\" returns successfully" Jul 10 00:18:30.837780 containerd[1618]: time="2025-07-10T00:18:30.836477692Z" level=info msg="StopPodSandbox for \"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b\"" Jul 10 00:18:30.850927 containerd[1618]: time="2025-07-10T00:18:30.849183484Z" level=info msg="Container to stop \"e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:18:30.891731 systemd[1]: Created slice kubepods-besteffort-pod3ba88cb6_4b62_4716_adea_217e6d3480e3.slice - libcontainer container kubepods-besteffort-pod3ba88cb6_4b62_4716_adea_217e6d3480e3.slice. Jul 10 00:18:30.892120 systemd[1]: cri-containerd-91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b.scope: Deactivated successfully. Jul 10 00:18:30.903775 containerd[1618]: time="2025-07-10T00:18:30.903711542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b\" id:\"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b\" pid:4998 exit_status:137 exited_at:{seconds:1752106710 nanos:902108030}" Jul 10 00:18:30.933038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b-rootfs.mount: Deactivated successfully. 
Jul 10 00:18:30.972537 containerd[1618]: time="2025-07-10T00:18:30.972482979Z" level=info msg="shim disconnected" id=91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b namespace=k8s.io Jul 10 00:18:30.972537 containerd[1618]: time="2025-07-10T00:18:30.972511401Z" level=warning msg="cleaning up after shim disconnected" id=91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b namespace=k8s.io Jul 10 00:18:30.997387 containerd[1618]: time="2025-07-10T00:18:30.972516756Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:18:31.008459 kubelet[2914]: I0710 00:18:31.007089 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwftb\" (UniqueName: \"kubernetes.io/projected/3ba88cb6-4b62-4716-adea-217e6d3480e3-kube-api-access-gwftb\") pod \"calico-apiserver-c5dc87db9-9nmjd\" (UID: \"3ba88cb6-4b62-4716-adea-217e6d3480e3\") " pod="calico-apiserver/calico-apiserver-c5dc87db9-9nmjd" Jul 10 00:18:31.008667 kubelet[2914]: I0710 00:18:31.008620 2914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3ba88cb6-4b62-4716-adea-217e6d3480e3-calico-apiserver-certs\") pod \"calico-apiserver-c5dc87db9-9nmjd\" (UID: \"3ba88cb6-4b62-4716-adea-217e6d3480e3\") " pod="calico-apiserver/calico-apiserver-c5dc87db9-9nmjd" Jul 10 00:18:31.114398 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b-shm.mount: Deactivated successfully. 
Jul 10 00:18:31.126762 containerd[1618]: time="2025-07-10T00:18:31.126615733Z" level=info msg="received exit event sandbox_id:\"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b\" exit_status:137 exited_at:{seconds:1752106710 nanos:902108030}" Jul 10 00:18:31.204089 containerd[1618]: time="2025-07-10T00:18:31.204064401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5dc87db9-9nmjd,Uid:3ba88cb6-4b62-4716-adea-217e6d3480e3,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:18:31.458430 systemd-networkd[1540]: cali8b0d9249008: Link DOWN Jul 10 00:18:31.458434 systemd-networkd[1540]: cali8b0d9249008: Lost carrier Jul 10 00:18:31.745937 kubelet[2914]: I0710 00:18:31.729140 2914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Jul 10 00:18:31.846150 systemd-networkd[1540]: cali0456eef284a: Link UP Jul 10 00:18:31.847413 systemd-networkd[1540]: cali0456eef284a: Gained carrier Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.448 [INFO][5681] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c5dc87db9--9nmjd-eth0 calico-apiserver-c5dc87db9- calico-apiserver 3ba88cb6-4b62-4716-adea-217e6d3480e3 1131 0 2025-07-10 00:18:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c5dc87db9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c5dc87db9-9nmjd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0456eef284a [] [] }} ContainerID="556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" Namespace="calico-apiserver" Pod="calico-apiserver-c5dc87db9-9nmjd" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--c5dc87db9--9nmjd-" Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.452 [INFO][5681] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" Namespace="calico-apiserver" Pod="calico-apiserver-c5dc87db9-9nmjd" WorkloadEndpoint="localhost-k8s-calico--apiserver--c5dc87db9--9nmjd-eth0" Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.768 [INFO][5700] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" HandleID="k8s-pod-network.556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" Workload="localhost-k8s-calico--apiserver--c5dc87db9--9nmjd-eth0" Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.771 [INFO][5700] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" HandleID="k8s-pod-network.556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" Workload="localhost-k8s-calico--apiserver--c5dc87db9--9nmjd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000246320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c5dc87db9-9nmjd", "timestamp":"2025-07-10 00:18:31.768467852 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.771 [INFO][5700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.772 [INFO][5700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.772 [INFO][5700] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.790 [INFO][5700] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" host="localhost" Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.810 [INFO][5700] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.812 [INFO][5700] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.813 [INFO][5700] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.816 [INFO][5700] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.816 [INFO][5700] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" host="localhost" Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.818 [INFO][5700] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.822 [INFO][5700] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" host="localhost" Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.826 [INFO][5700] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.138/26] block=192.168.88.128/26 
handle="k8s-pod-network.556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" host="localhost" Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.826 [INFO][5700] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.138/26] handle="k8s-pod-network.556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" host="localhost" Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.826 [INFO][5700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:18:31.886097 containerd[1618]: 2025-07-10 00:18:31.826 [INFO][5700] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.138/26] IPv6=[] ContainerID="556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" HandleID="k8s-pod-network.556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" Workload="localhost-k8s-calico--apiserver--c5dc87db9--9nmjd-eth0" Jul 10 00:18:31.914883 containerd[1618]: 2025-07-10 00:18:31.828 [INFO][5681] cni-plugin/k8s.go 418: Populated endpoint ContainerID="556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" Namespace="calico-apiserver" Pod="calico-apiserver-c5dc87db9-9nmjd" WorkloadEndpoint="localhost-k8s-calico--apiserver--c5dc87db9--9nmjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c5dc87db9--9nmjd-eth0", GenerateName:"calico-apiserver-c5dc87db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ba88cb6-4b62-4716-adea-217e6d3480e3", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 18, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5dc87db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c5dc87db9-9nmjd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0456eef284a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:31.914883 containerd[1618]: 2025-07-10 00:18:31.829 [INFO][5681] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.138/32] ContainerID="556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" Namespace="calico-apiserver" Pod="calico-apiserver-c5dc87db9-9nmjd" WorkloadEndpoint="localhost-k8s-calico--apiserver--c5dc87db9--9nmjd-eth0" Jul 10 00:18:31.914883 containerd[1618]: 2025-07-10 00:18:31.829 [INFO][5681] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0456eef284a ContainerID="556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" Namespace="calico-apiserver" Pod="calico-apiserver-c5dc87db9-9nmjd" WorkloadEndpoint="localhost-k8s-calico--apiserver--c5dc87db9--9nmjd-eth0" Jul 10 00:18:31.914883 containerd[1618]: 2025-07-10 00:18:31.861 [INFO][5681] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" Namespace="calico-apiserver" Pod="calico-apiserver-c5dc87db9-9nmjd" WorkloadEndpoint="localhost-k8s-calico--apiserver--c5dc87db9--9nmjd-eth0" Jul 10 00:18:31.914883 containerd[1618]: 2025-07-10 00:18:31.861 [INFO][5681] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" Namespace="calico-apiserver" Pod="calico-apiserver-c5dc87db9-9nmjd" WorkloadEndpoint="localhost-k8s-calico--apiserver--c5dc87db9--9nmjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c5dc87db9--9nmjd-eth0", GenerateName:"calico-apiserver-c5dc87db9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ba88cb6-4b62-4716-adea-217e6d3480e3", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 18, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5dc87db9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb", Pod:"calico-apiserver-c5dc87db9-9nmjd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0456eef284a", MAC:"3a:0d:33:c2:a0:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:18:31.914883 containerd[1618]: 2025-07-10 00:18:31.881 [INFO][5681] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" Namespace="calico-apiserver" Pod="calico-apiserver-c5dc87db9-9nmjd" WorkloadEndpoint="localhost-k8s-calico--apiserver--c5dc87db9--9nmjd-eth0" Jul 10 00:18:31.935730 containerd[1618]: 2025-07-10 00:18:31.451 [INFO][5680] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Jul 10 00:18:31.935730 containerd[1618]: 2025-07-10 00:18:31.452 [INFO][5680] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" iface="eth0" netns="/var/run/netns/cni-08a9e5ae-2c9f-6685-ce0a-07456fba2234" Jul 10 00:18:31.935730 containerd[1618]: 2025-07-10 00:18:31.452 [INFO][5680] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" iface="eth0" netns="/var/run/netns/cni-08a9e5ae-2c9f-6685-ce0a-07456fba2234" Jul 10 00:18:31.935730 containerd[1618]: 2025-07-10 00:18:31.466 [INFO][5680] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" after=14.606297ms iface="eth0" netns="/var/run/netns/cni-08a9e5ae-2c9f-6685-ce0a-07456fba2234" Jul 10 00:18:31.935730 containerd[1618]: 2025-07-10 00:18:31.467 [INFO][5680] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Jul 10 00:18:31.935730 containerd[1618]: 2025-07-10 00:18:31.467 [INFO][5680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Jul 10 00:18:31.935730 containerd[1618]: 2025-07-10 00:18:31.768 [INFO][5702] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" HandleID="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0" Jul 10 00:18:31.935730 containerd[1618]: 2025-07-10 00:18:31.771 [INFO][5702] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:18:31.935730 containerd[1618]: 2025-07-10 00:18:31.826 [INFO][5702] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:18:31.935730 containerd[1618]: 2025-07-10 00:18:31.924 [INFO][5702] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" HandleID="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0" Jul 10 00:18:31.935730 containerd[1618]: 2025-07-10 00:18:31.924 [INFO][5702] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" HandleID="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0" Jul 10 00:18:31.935730 containerd[1618]: 2025-07-10 00:18:31.925 [INFO][5702] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:18:31.935730 containerd[1618]: 2025-07-10 00:18:31.930 [INFO][5680] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Jul 10 00:18:31.964599 containerd[1618]: time="2025-07-10T00:18:31.936538198Z" level=info msg="TearDown network for sandbox \"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b\" successfully" Jul 10 00:18:31.964599 containerd[1618]: time="2025-07-10T00:18:31.936560393Z" level=info msg="StopPodSandbox for \"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b\" returns successfully" Jul 10 00:18:31.936476 systemd[1]: run-netns-cni\x2d08a9e5ae\x2d2c9f\x2d6685\x2dce0a\x2d07456fba2234.mount: Deactivated successfully. 
Jul 10 00:18:32.123371 kubelet[2914]: I0710 00:18:32.123294 2914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9-calico-apiserver-certs\") pod \"4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9\" (UID: \"4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9\") " Jul 10 00:18:32.123371 kubelet[2914]: I0710 00:18:32.123330 2914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5k48\" (UniqueName: \"kubernetes.io/projected/4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9-kube-api-access-d5k48\") pod \"4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9\" (UID: \"4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9\") " Jul 10 00:18:32.186161 systemd[1]: var-lib-kubelet-pods-4136c2e0\x2d77ff\x2d4fe9\x2dbf8f\x2dcbf21d4df8c9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd5k48.mount: Deactivated successfully. Jul 10 00:18:32.188142 systemd[1]: var-lib-kubelet-pods-4136c2e0\x2d77ff\x2d4fe9\x2dbf8f\x2dcbf21d4df8c9-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 10 00:18:32.194067 kubelet[2914]: I0710 00:18:32.191131 2914 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9-kube-api-access-d5k48" (OuterVolumeSpecName: "kube-api-access-d5k48") pod "4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9" (UID: "4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9"). InnerVolumeSpecName "kube-api-access-d5k48". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:18:32.204701 kubelet[2914]: I0710 00:18:32.191004 2914 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9" (UID: "4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:18:32.224284 kubelet[2914]: I0710 00:18:32.224242 2914 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Jul 10 00:18:32.224284 kubelet[2914]: I0710 00:18:32.224265 2914 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d5k48\" (UniqueName: \"kubernetes.io/projected/4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9-kube-api-access-d5k48\") on node \"localhost\" DevicePath \"\"" Jul 10 00:18:32.296847 containerd[1618]: time="2025-07-10T00:18:32.296706735Z" level=info msg="connecting to shim 556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb" address="unix:///run/containerd/s/35ffa10c81eaca9a634b7ebaa38b86a97a8d5cdaccfeefe4022f36254ef1ac7c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:18:32.324537 systemd[1]: Started cri-containerd-556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb.scope - libcontainer container 556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb. 
Jul 10 00:18:32.355879 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:18:32.472990 containerd[1618]: time="2025-07-10T00:18:32.472695566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5dc87db9-9nmjd,Uid:3ba88cb6-4b62-4716-adea-217e6d3480e3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb\"" Jul 10 00:18:32.935094 containerd[1618]: time="2025-07-10T00:18:32.935055403Z" level=info msg="CreateContainer within sandbox \"556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:18:33.039068 containerd[1618]: time="2025-07-10T00:18:33.037230275Z" level=info msg="Container fb39bdd72165170d58276bc37f5d3e8375ef77243fa896916e9f89ba8c4e7dd3: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:18:33.042230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1879147763.mount: Deactivated successfully. Jul 10 00:18:33.050104 containerd[1618]: time="2025-07-10T00:18:33.049983949Z" level=info msg="CreateContainer within sandbox \"556d886608d7a271a6b4f30257fb57733163ec93b7c9d70bf0c2ec0cbd71befb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fb39bdd72165170d58276bc37f5d3e8375ef77243fa896916e9f89ba8c4e7dd3\"" Jul 10 00:18:33.081905 containerd[1618]: time="2025-07-10T00:18:33.080671819Z" level=info msg="StartContainer for \"fb39bdd72165170d58276bc37f5d3e8375ef77243fa896916e9f89ba8c4e7dd3\"" Jul 10 00:18:33.082857 systemd[1]: Removed slice kubepods-besteffort-pod4136c2e0_77ff_4fe9_bf8f_cbf21d4df8c9.slice - libcontainer container kubepods-besteffort-pod4136c2e0_77ff_4fe9_bf8f_cbf21d4df8c9.slice. Jul 10 00:18:33.082930 systemd[1]: kubepods-besteffort-pod4136c2e0_77ff_4fe9_bf8f_cbf21d4df8c9.slice: Consumed 636ms CPU time, 50.4M memory peak, 3.3M read from disk. 
Jul 10 00:18:33.119738 containerd[1618]: time="2025-07-10T00:18:33.118973026Z" level=info msg="connecting to shim fb39bdd72165170d58276bc37f5d3e8375ef77243fa896916e9f89ba8c4e7dd3" address="unix:///run/containerd/s/35ffa10c81eaca9a634b7ebaa38b86a97a8d5cdaccfeefe4022f36254ef1ac7c" protocol=ttrpc version=3 Jul 10 00:18:33.233556 systemd[1]: Started cri-containerd-fb39bdd72165170d58276bc37f5d3e8375ef77243fa896916e9f89ba8c4e7dd3.scope - libcontainer container fb39bdd72165170d58276bc37f5d3e8375ef77243fa896916e9f89ba8c4e7dd3. Jul 10 00:18:33.471165 containerd[1618]: time="2025-07-10T00:18:33.471142481Z" level=info msg="StartContainer for \"fb39bdd72165170d58276bc37f5d3e8375ef77243fa896916e9f89ba8c4e7dd3\" returns successfully" Jul 10 00:18:33.490675 systemd-networkd[1540]: cali0456eef284a: Gained IPv6LL Jul 10 00:18:36.939016 kubelet[2914]: I0710 00:18:36.932545 2914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9" path="/var/lib/kubelet/pods/4136c2e0-77ff-4fe9-bf8f-cbf21d4df8c9/volumes" Jul 10 00:18:36.957265 kubelet[2914]: E0710 00:18:36.948522 2914 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.002s" Jul 10 00:18:36.966268 kubelet[2914]: I0710 00:18:36.946190 2914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c5dc87db9-9nmjd" podStartSLOduration=6.920014353 podStartE2EDuration="6.920014353s" podCreationTimestamp="2025-07-10 00:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:18:35.806904873 +0000 UTC m=+70.969921880" watchObservedRunningTime="2025-07-10 00:18:36.920014353 +0000 UTC m=+72.083031353" Jul 10 00:18:37.209878 containerd[1618]: time="2025-07-10T00:18:37.209785756Z" level=info msg="StopContainer for \"b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85\" 
with timeout 30 (s)" Jul 10 00:18:37.233548 containerd[1618]: time="2025-07-10T00:18:37.231459722Z" level=info msg="Stop container \"b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85\" with signal terminated" Jul 10 00:18:37.280733 systemd[1]: cri-containerd-b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85.scope: Deactivated successfully. Jul 10 00:18:37.288161 systemd[1]: cri-containerd-b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85.scope: Consumed 576ms CPU time, 64.1M memory peak, 4.2M read from disk. Jul 10 00:18:37.314621 containerd[1618]: time="2025-07-10T00:18:37.314512255Z" level=info msg="received exit event container_id:\"b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85\" id:\"b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85\" pid:5217 exit_status:1 exited_at:{seconds:1752106717 nanos:304184513}" Jul 10 00:18:37.315877 containerd[1618]: time="2025-07-10T00:18:37.315856926Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85\" id:\"b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85\" pid:5217 exit_status:1 exited_at:{seconds:1752106717 nanos:304184513}" Jul 10 00:18:37.405378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85-rootfs.mount: Deactivated successfully. 
Jul 10 00:18:37.427619 containerd[1618]: time="2025-07-10T00:18:37.427589456Z" level=info msg="StopContainer for \"b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85\" returns successfully" Jul 10 00:18:37.428120 containerd[1618]: time="2025-07-10T00:18:37.428105631Z" level=info msg="StopPodSandbox for \"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745\"" Jul 10 00:18:37.429185 containerd[1618]: time="2025-07-10T00:18:37.429169839Z" level=info msg="Container to stop \"b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:18:37.436199 systemd[1]: cri-containerd-23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745.scope: Deactivated successfully. Jul 10 00:18:37.439572 containerd[1618]: time="2025-07-10T00:18:37.439253860Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745\" id:\"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745\" pid:4543 exit_status:137 exited_at:{seconds:1752106717 nanos:437528323}" Jul 10 00:18:37.462108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745-rootfs.mount: Deactivated successfully. 
Jul 10 00:18:37.472121 containerd[1618]: time="2025-07-10T00:18:37.472091106Z" level=info msg="shim disconnected" id=23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745 namespace=k8s.io Jul 10 00:18:37.472121 containerd[1618]: time="2025-07-10T00:18:37.472117333Z" level=warning msg="cleaning up after shim disconnected" id=23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745 namespace=k8s.io Jul 10 00:18:37.483727 containerd[1618]: time="2025-07-10T00:18:37.472122241Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:18:37.532605 containerd[1618]: time="2025-07-10T00:18:37.532518595Z" level=info msg="received exit event sandbox_id:\"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745\" exit_status:137 exited_at:{seconds:1752106717 nanos:437528323}" Jul 10 00:18:37.539804 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745-shm.mount: Deactivated successfully. Jul 10 00:18:37.824634 kubelet[2914]: I0710 00:18:37.824608 2914 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Jul 10 00:18:38.020787 systemd-networkd[1540]: cali7eaf1a41d5e: Link DOWN Jul 10 00:18:38.020793 systemd-networkd[1540]: cali7eaf1a41d5e: Lost carrier Jul 10 00:18:38.511276 containerd[1618]: 2025-07-10 00:18:38.006 [INFO][5908] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Jul 10 00:18:38.511276 containerd[1618]: 2025-07-10 00:18:38.008 [INFO][5908] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" iface="eth0" netns="/var/run/netns/cni-c235a437-6c1b-edce-6664-71905e7ef508" Jul 10 00:18:38.511276 containerd[1618]: 2025-07-10 00:18:38.008 [INFO][5908] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" iface="eth0" netns="/var/run/netns/cni-c235a437-6c1b-edce-6664-71905e7ef508" Jul 10 00:18:38.511276 containerd[1618]: 2025-07-10 00:18:38.021 [INFO][5908] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" after=13.07313ms iface="eth0" netns="/var/run/netns/cni-c235a437-6c1b-edce-6664-71905e7ef508" Jul 10 00:18:38.511276 containerd[1618]: 2025-07-10 00:18:38.021 [INFO][5908] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Jul 10 00:18:38.511276 containerd[1618]: 2025-07-10 00:18:38.021 [INFO][5908] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Jul 10 00:18:38.511276 containerd[1618]: 2025-07-10 00:18:38.440 [INFO][5915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" HandleID="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0" Jul 10 00:18:38.511276 containerd[1618]: 2025-07-10 00:18:38.442 [INFO][5915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:18:38.511276 containerd[1618]: 2025-07-10 00:18:38.443 [INFO][5915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:18:38.511276 containerd[1618]: 2025-07-10 00:18:38.505 [INFO][5915] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" HandleID="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0" Jul 10 00:18:38.511276 containerd[1618]: 2025-07-10 00:18:38.505 [INFO][5915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" HandleID="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0" Jul 10 00:18:38.511276 containerd[1618]: 2025-07-10 00:18:38.506 [INFO][5915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:18:38.511276 containerd[1618]: 2025-07-10 00:18:38.508 [INFO][5908] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Jul 10 00:18:38.516401 containerd[1618]: time="2025-07-10T00:18:38.515568176Z" level=info msg="TearDown network for sandbox \"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745\" successfully" Jul 10 00:18:38.516401 containerd[1618]: time="2025-07-10T00:18:38.515598198Z" level=info msg="StopPodSandbox for \"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745\" returns successfully" Jul 10 00:18:38.515335 systemd[1]: run-netns-cni\x2dc235a437\x2d6c1b\x2dedce\x2d6664\x2d71905e7ef508.mount: Deactivated successfully. 
Jul 10 00:18:38.784252 kubelet[2914]: I0710 00:18:38.784183 2914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/89b075c8-08cd-4903-a32e-138339255054-calico-apiserver-certs\") pod \"89b075c8-08cd-4903-a32e-138339255054\" (UID: \"89b075c8-08cd-4903-a32e-138339255054\") " Jul 10 00:18:38.784252 kubelet[2914]: I0710 00:18:38.784234 2914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvxw8\" (UniqueName: \"kubernetes.io/projected/89b075c8-08cd-4903-a32e-138339255054-kube-api-access-nvxw8\") pod \"89b075c8-08cd-4903-a32e-138339255054\" (UID: \"89b075c8-08cd-4903-a32e-138339255054\") " Jul 10 00:18:38.831829 systemd[1]: var-lib-kubelet-pods-89b075c8\x2d08cd\x2d4903\x2da32e\x2d138339255054-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnvxw8.mount: Deactivated successfully. Jul 10 00:18:38.832241 systemd[1]: var-lib-kubelet-pods-89b075c8\x2d08cd\x2d4903\x2da32e\x2d138339255054-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 10 00:18:38.835177 kubelet[2914]: I0710 00:18:38.831681 2914 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89b075c8-08cd-4903-a32e-138339255054-kube-api-access-nvxw8" (OuterVolumeSpecName: "kube-api-access-nvxw8") pod "89b075c8-08cd-4903-a32e-138339255054" (UID: "89b075c8-08cd-4903-a32e-138339255054"). InnerVolumeSpecName "kube-api-access-nvxw8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:18:38.835177 kubelet[2914]: I0710 00:18:38.831625 2914 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89b075c8-08cd-4903-a32e-138339255054-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "89b075c8-08cd-4903-a32e-138339255054" (UID: "89b075c8-08cd-4903-a32e-138339255054"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:18:38.884913 kubelet[2914]: I0710 00:18:38.884890 2914 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nvxw8\" (UniqueName: \"kubernetes.io/projected/89b075c8-08cd-4903-a32e-138339255054-kube-api-access-nvxw8\") on node \"localhost\" DevicePath \"\"" Jul 10 00:18:38.885039 kubelet[2914]: I0710 00:18:38.885032 2914 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/89b075c8-08cd-4903-a32e-138339255054-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Jul 10 00:18:38.927501 systemd[1]: Removed slice kubepods-besteffort-pod89b075c8_08cd_4903_a32e_138339255054.slice - libcontainer container kubepods-besteffort-pod89b075c8_08cd_4903_a32e_138339255054.slice. Jul 10 00:18:38.927716 systemd[1]: kubepods-besteffort-pod89b075c8_08cd_4903_a32e_138339255054.slice: Consumed 599ms CPU time, 64.7M memory peak, 4.2M read from disk. Jul 10 00:18:39.229282 containerd[1618]: time="2025-07-10T00:18:39.229190724Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1752106717 nanos:437528323}" Jul 10 00:18:40.910974 kubelet[2914]: I0710 00:18:40.910895 2914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89b075c8-08cd-4903-a32e-138339255054" path="/var/lib/kubelet/pods/89b075c8-08cd-4903-a32e-138339255054/volumes" Jul 10 00:18:43.921576 systemd[1]: Started sshd@7-139.178.70.100:22-139.178.68.195:54200.service - OpenSSH per-connection server daemon (139.178.68.195:54200). Jul 10 00:18:44.023455 sshd[5940]: Accepted publickey for core from 139.178.68.195 port 54200 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw Jul 10 00:18:44.024882 sshd-session[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:18:44.033552 systemd-logind[1589]: New session 10 of user core. 
Jul 10 00:18:44.036725 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 00:18:44.870244 sshd[5942]: Connection closed by 139.178.68.195 port 54200 Jul 10 00:18:44.880549 systemd-logind[1589]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:18:44.870633 sshd-session[5940]: pam_unix(sshd:session): session closed for user core Jul 10 00:18:44.880708 systemd[1]: sshd@7-139.178.70.100:22-139.178.68.195:54200.service: Deactivated successfully. Jul 10 00:18:44.882109 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:18:44.884514 systemd-logind[1589]: Removed session 10. Jul 10 00:18:49.892054 systemd[1]: Started sshd@8-139.178.70.100:22-139.178.68.195:35424.service - OpenSSH per-connection server daemon (139.178.68.195:35424). Jul 10 00:18:50.506680 sshd[5963]: Accepted publickey for core from 139.178.68.195 port 35424 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw Jul 10 00:18:50.522065 sshd-session[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:18:50.529706 systemd-logind[1589]: New session 11 of user core. Jul 10 00:18:50.533518 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 00:18:52.908168 containerd[1618]: time="2025-07-10T00:18:52.908038111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2b99fa1f3c38d467cd4b789d59c7a02b5e615ac2a7bf8fd24e88381f64bbfd1\" id:\"833810bf6bcd9eae4895c0ee47868028d4dd1c5a20cfe1caa52d14cdb37769d3\" pid:6003 exited_at:{seconds:1752106732 nanos:899737336}" Jul 10 00:18:53.735469 sshd[5965]: Connection closed by 139.178.68.195 port 35424 Jul 10 00:18:53.844891 sshd-session[5963]: pam_unix(sshd:session): session closed for user core Jul 10 00:18:53.925903 systemd-logind[1589]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:18:53.925992 systemd[1]: sshd@8-139.178.70.100:22-139.178.68.195:35424.service: Deactivated successfully. 
Jul 10 00:18:53.936618 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:18:53.938721 systemd-logind[1589]: Removed session 11. Jul 10 00:18:56.034848 containerd[1618]: time="2025-07-10T00:18:56.034815888Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c59a11653b1546853044070723b1ff50bf125dd0bebea98461c259ebc06f5556\" id:\"d2f8281faaaf6031d6d9efcccab2dcb2ffb4625016b26df338792ef51e4a79f4\" pid:5985 exited_at:{seconds:1752106736 nanos:34498705}" Jul 10 00:18:58.746248 systemd[1]: Started sshd@9-139.178.70.100:22-139.178.68.195:50582.service - OpenSSH per-connection server daemon (139.178.68.195:50582). Jul 10 00:18:58.848518 sshd[6024]: Accepted publickey for core from 139.178.68.195 port 50582 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw Jul 10 00:18:58.849936 sshd-session[6024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:18:58.853555 systemd-logind[1589]: New session 12 of user core. Jul 10 00:18:58.858593 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 00:19:00.532799 sshd[6026]: Connection closed by 139.178.68.195 port 50582 Jul 10 00:19:00.533528 sshd-session[6024]: pam_unix(sshd:session): session closed for user core Jul 10 00:19:00.539615 systemd[1]: sshd@9-139.178.70.100:22-139.178.68.195:50582.service: Deactivated successfully. Jul 10 00:19:00.541657 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:19:00.542646 systemd-logind[1589]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:19:00.545246 systemd[1]: Started sshd@10-139.178.70.100:22-139.178.68.195:50588.service - OpenSSH per-connection server daemon (139.178.68.195:50588). Jul 10 00:19:00.546623 systemd-logind[1589]: Removed session 12. 
Jul 10 00:19:00.612132 sshd[6067]: Accepted publickey for core from 139.178.68.195 port 50588 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw Jul 10 00:19:00.613686 sshd-session[6067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:19:00.617484 systemd-logind[1589]: New session 13 of user core. Jul 10 00:19:00.620529 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 00:19:00.927925 sshd[6069]: Connection closed by 139.178.68.195 port 50588 Jul 10 00:19:00.928125 sshd-session[6067]: pam_unix(sshd:session): session closed for user core Jul 10 00:19:00.931179 containerd[1618]: time="2025-07-10T00:19:00.931156320Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f90873fd3cb87b8aa9a96512b14a2ac653258afd1cbd00a321ca809106cb6ae1\" id:\"c37d68da8c575c4a90ab8eb4132344efb8a6d0af5de4f0d08f84a7387802c0f0\" pid:6044 exit_status:1 exited_at:{seconds:1752106740 nanos:930641259}" Jul 10 00:19:00.936994 systemd[1]: sshd@10-139.178.70.100:22-139.178.68.195:50588.service: Deactivated successfully. Jul 10 00:19:00.939203 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:19:00.939910 systemd-logind[1589]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:19:00.942237 systemd[1]: Started sshd@11-139.178.70.100:22-139.178.68.195:50598.service - OpenSSH per-connection server daemon (139.178.68.195:50598). Jul 10 00:19:00.943304 systemd-logind[1589]: Removed session 13. Jul 10 00:19:01.008919 sshd[6084]: Accepted publickey for core from 139.178.68.195 port 50598 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw Jul 10 00:19:01.009797 sshd-session[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:19:01.014519 systemd-logind[1589]: New session 14 of user core. Jul 10 00:19:01.020580 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 10 00:19:01.123595 sshd[6086]: Connection closed by 139.178.68.195 port 50598
Jul 10 00:19:01.123981 sshd-session[6084]: pam_unix(sshd:session): session closed for user core
Jul 10 00:19:01.126214 systemd-logind[1589]: Session 14 logged out. Waiting for processes to exit.
Jul 10 00:19:01.126651 systemd[1]: sshd@11-139.178.70.100:22-139.178.68.195:50598.service: Deactivated successfully.
Jul 10 00:19:01.127772 systemd[1]: session-14.scope: Deactivated successfully.
Jul 10 00:19:01.128652 systemd-logind[1589]: Removed session 14.
Jul 10 00:19:06.139037 systemd[1]: Started sshd@12-139.178.70.100:22-139.178.68.195:50610.service - OpenSSH per-connection server daemon (139.178.68.195:50610).
Jul 10 00:19:06.252802 sshd[6099]: Accepted publickey for core from 139.178.68.195 port 50610 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw
Jul 10 00:19:06.255077 sshd-session[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:06.259722 systemd-logind[1589]: New session 15 of user core.
Jul 10 00:19:06.265649 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 10 00:19:06.846717 sshd[6101]: Connection closed by 139.178.68.195 port 50610
Jul 10 00:19:06.847796 sshd-session[6099]: pam_unix(sshd:session): session closed for user core
Jul 10 00:19:06.854918 systemd[1]: sshd@12-139.178.70.100:22-139.178.68.195:50610.service: Deactivated successfully.
Jul 10 00:19:06.857259 systemd[1]: session-15.scope: Deactivated successfully.
Jul 10 00:19:06.858558 systemd-logind[1589]: Session 15 logged out. Waiting for processes to exit.
Jul 10 00:19:06.862512 systemd[1]: Started sshd@13-139.178.70.100:22-139.178.68.195:50622.service - OpenSSH per-connection server daemon (139.178.68.195:50622).
Jul 10 00:19:06.863988 systemd-logind[1589]: Removed session 15.
Jul 10 00:19:06.901665 sshd[6113]: Accepted publickey for core from 139.178.68.195 port 50622 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw
Jul 10 00:19:06.902692 sshd-session[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:06.905603 systemd-logind[1589]: New session 16 of user core.
Jul 10 00:19:06.911566 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 10 00:19:07.341973 sshd[6115]: Connection closed by 139.178.68.195 port 50622
Jul 10 00:19:07.350605 sshd-session[6113]: pam_unix(sshd:session): session closed for user core
Jul 10 00:19:07.354308 systemd[1]: Started sshd@14-139.178.70.100:22-139.178.68.195:50636.service - OpenSSH per-connection server daemon (139.178.68.195:50636).
Jul 10 00:19:07.354992 systemd[1]: sshd@13-139.178.70.100:22-139.178.68.195:50622.service: Deactivated successfully.
Jul 10 00:19:07.356396 systemd[1]: session-16.scope: Deactivated successfully.
Jul 10 00:19:07.359864 systemd-logind[1589]: Session 16 logged out. Waiting for processes to exit.
Jul 10 00:19:07.362273 systemd-logind[1589]: Removed session 16.
Jul 10 00:19:07.429139 sshd[6122]: Accepted publickey for core from 139.178.68.195 port 50636 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw
Jul 10 00:19:07.430292 sshd-session[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:07.433454 systemd-logind[1589]: New session 17 of user core.
Jul 10 00:19:07.445605 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 10 00:19:08.579575 sshd[6127]: Connection closed by 139.178.68.195 port 50636
Jul 10 00:19:08.579701 sshd-session[6122]: pam_unix(sshd:session): session closed for user core
Jul 10 00:19:08.589506 systemd[1]: sshd@14-139.178.70.100:22-139.178.68.195:50636.service: Deactivated successfully.
Jul 10 00:19:08.592797 systemd[1]: session-17.scope: Deactivated successfully.
Jul 10 00:19:08.595202 systemd-logind[1589]: Session 17 logged out. Waiting for processes to exit.
Jul 10 00:19:08.599406 systemd[1]: Started sshd@15-139.178.70.100:22-139.178.68.195:55340.service - OpenSSH per-connection server daemon (139.178.68.195:55340).
Jul 10 00:19:08.600986 systemd-logind[1589]: Removed session 17.
Jul 10 00:19:08.707530 sshd[6160]: Accepted publickey for core from 139.178.68.195 port 55340 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw
Jul 10 00:19:08.709089 sshd-session[6160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:08.714141 systemd-logind[1589]: New session 18 of user core.
Jul 10 00:19:08.720392 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 10 00:19:09.164582 containerd[1618]: time="2025-07-10T00:19:09.164543913Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c59a11653b1546853044070723b1ff50bf125dd0bebea98461c259ebc06f5556\" id:\"33024515cc905efe9696fc352b47494d3d714a39406a7a7fc71e6dc664ccd240\" pid:6148 exited_at:{seconds:1752106749 nanos:161718823}"
Jul 10 00:19:11.712893 sshd[6163]: Connection closed by 139.178.68.195 port 55340
Jul 10 00:19:11.724959 systemd[1]: Started sshd@16-139.178.70.100:22-139.178.68.195:55348.service - OpenSSH per-connection server daemon (139.178.68.195:55348).
Jul 10 00:19:11.734221 sshd-session[6160]: pam_unix(sshd:session): session closed for user core
Jul 10 00:19:11.757648 systemd[1]: sshd@15-139.178.70.100:22-139.178.68.195:55340.service: Deactivated successfully.
Jul 10 00:19:11.758874 systemd[1]: session-18.scope: Deactivated successfully.
Jul 10 00:19:11.774688 systemd-logind[1589]: Session 18 logged out. Waiting for processes to exit.
Jul 10 00:19:11.775846 systemd-logind[1589]: Removed session 18.
Jul 10 00:19:11.861635 sshd[6175]: Accepted publickey for core from 139.178.68.195 port 55348 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw
Jul 10 00:19:11.863913 sshd-session[6175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:11.869491 systemd-logind[1589]: New session 19 of user core.
Jul 10 00:19:11.874831 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 10 00:19:12.504859 sshd[6180]: Connection closed by 139.178.68.195 port 55348
Jul 10 00:19:12.504568 sshd-session[6175]: pam_unix(sshd:session): session closed for user core
Jul 10 00:19:12.538271 systemd-logind[1589]: Session 19 logged out. Waiting for processes to exit.
Jul 10 00:19:12.538357 systemd[1]: sshd@16-139.178.70.100:22-139.178.68.195:55348.service: Deactivated successfully.
Jul 10 00:19:12.539644 systemd[1]: session-19.scope: Deactivated successfully.
Jul 10 00:19:12.540581 systemd-logind[1589]: Removed session 19.
Jul 10 00:19:14.171463 containerd[1618]: time="2025-07-10T00:19:14.171414466Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2b99fa1f3c38d467cd4b789d59c7a02b5e615ac2a7bf8fd24e88381f64bbfd1\" id:\"c94f49d4e2022b437e6bf7509e1a2403d9aca6220ad28c6b62744c776c9b3e57\" pid:6205 exited_at:{seconds:1752106754 nanos:157892333}"
Jul 10 00:19:17.528070 systemd[1]: Started sshd@17-139.178.70.100:22-139.178.68.195:55356.service - OpenSSH per-connection server daemon (139.178.68.195:55356).
Jul 10 00:19:17.611486 sshd[6215]: Accepted publickey for core from 139.178.68.195 port 55356 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw
Jul 10 00:19:17.613403 sshd-session[6215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:17.617167 systemd-logind[1589]: New session 20 of user core.
Jul 10 00:19:17.621559 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 10 00:19:18.035021 sshd[6217]: Connection closed by 139.178.68.195 port 55356
Jul 10 00:19:18.035920 sshd-session[6215]: pam_unix(sshd:session): session closed for user core
Jul 10 00:19:18.038674 systemd[1]: sshd@17-139.178.70.100:22-139.178.68.195:55356.service: Deactivated successfully.
Jul 10 00:19:18.040749 systemd[1]: session-20.scope: Deactivated successfully.
Jul 10 00:19:18.041999 systemd-logind[1589]: Session 20 logged out. Waiting for processes to exit.
Jul 10 00:19:18.044850 systemd-logind[1589]: Removed session 20.
Jul 10 00:19:21.451792 containerd[1618]: time="2025-07-10T00:19:21.445179946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c59a11653b1546853044070723b1ff50bf125dd0bebea98461c259ebc06f5556\" id:\"3b35601cdc0cfa5cf6973f103a4e843d0b805c70d2f7ed64832937d511e3d4de\" pid:6244 exited_at:{seconds:1752106761 nanos:444933903}"
Jul 10 00:19:22.460357 containerd[1618]: time="2025-07-10T00:19:22.460263306Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2b99fa1f3c38d467cd4b789d59c7a02b5e615ac2a7bf8fd24e88381f64bbfd1\" id:\"655f8a4a788e3b66a80bf64d4d692411407339360e0ac59bdd5d90bc9e68b2ee\" pid:6269 exited_at:{seconds:1752106762 nanos:459279727}"
Jul 10 00:19:23.051006 systemd[1]: Started sshd@18-139.178.70.100:22-139.178.68.195:38560.service - OpenSSH per-connection server daemon (139.178.68.195:38560).
Jul 10 00:19:23.188940 sshd[6279]: Accepted publickey for core from 139.178.68.195 port 38560 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw
Jul 10 00:19:23.191093 sshd-session[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:23.195539 systemd-logind[1589]: New session 21 of user core.
Jul 10 00:19:23.199559 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 10 00:19:23.662544 sshd[6281]: Connection closed by 139.178.68.195 port 38560
Jul 10 00:19:23.662891 sshd-session[6279]: pam_unix(sshd:session): session closed for user core
Jul 10 00:19:23.665780 systemd-logind[1589]: Session 21 logged out. Waiting for processes to exit.
Jul 10 00:19:23.665875 systemd[1]: sshd@18-139.178.70.100:22-139.178.68.195:38560.service: Deactivated successfully.
Jul 10 00:19:23.667067 systemd[1]: session-21.scope: Deactivated successfully.
Jul 10 00:19:23.668288 systemd-logind[1589]: Removed session 21.
Jul 10 00:19:25.160408 kubelet[2914]: I0710 00:19:25.150965 2914 scope.go:117] "RemoveContainer" containerID="b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85"
Jul 10 00:19:25.257856 containerd[1618]: time="2025-07-10T00:19:25.257806623Z" level=info msg="RemoveContainer for \"b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85\""
Jul 10 00:19:25.346453 containerd[1618]: time="2025-07-10T00:19:25.346119454Z" level=info msg="RemoveContainer for \"b90ef6ebc33d19c61c808e4bda9c57a7d336cf4f1b8eb5468e0decc2ebbe7c85\" returns successfully"
Jul 10 00:19:25.353621 kubelet[2914]: I0710 00:19:25.353521 2914 scope.go:117] "RemoveContainer" containerID="e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20"
Jul 10 00:19:25.355864 containerd[1618]: time="2025-07-10T00:19:25.355491588Z" level=info msg="RemoveContainer for \"e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20\""
Jul 10 00:19:25.378606 containerd[1618]: time="2025-07-10T00:19:25.378039711Z" level=info msg="RemoveContainer for \"e4b678372106ead071869f89998e42d7e88280932012d9a5776d2f4c64499f20\" returns successfully"
Jul 10 00:19:25.390431 containerd[1618]: time="2025-07-10T00:19:25.389836998Z" level=info msg="StopPodSandbox for \"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745\""
Jul 10 00:19:26.090369 containerd[1618]: 2025-07-10 00:19:25.784 [WARNING][6302] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0"
Jul 10 00:19:26.090369 containerd[1618]: 2025-07-10 00:19:25.787 [INFO][6302] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745"
Jul 10 00:19:26.090369 containerd[1618]: 2025-07-10 00:19:25.787 [INFO][6302] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" iface="eth0" netns=""
Jul 10 00:19:26.090369 containerd[1618]: 2025-07-10 00:19:25.787 [INFO][6302] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745"
Jul 10 00:19:26.090369 containerd[1618]: 2025-07-10 00:19:25.787 [INFO][6302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745"
Jul 10 00:19:26.090369 containerd[1618]: 2025-07-10 00:19:26.067 [INFO][6309] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" HandleID="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0"
Jul 10 00:19:26.090369 containerd[1618]: 2025-07-10 00:19:26.070 [INFO][6309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 10 00:19:26.090369 containerd[1618]: 2025-07-10 00:19:26.070 [INFO][6309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 10 00:19:26.090369 containerd[1618]: 2025-07-10 00:19:26.084 [WARNING][6309] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" HandleID="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0"
Jul 10 00:19:26.090369 containerd[1618]: 2025-07-10 00:19:26.084 [INFO][6309] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" HandleID="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0"
Jul 10 00:19:26.090369 containerd[1618]: 2025-07-10 00:19:26.086 [INFO][6309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 10 00:19:26.090369 containerd[1618]: 2025-07-10 00:19:26.088 [INFO][6302] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745"
Jul 10 00:19:26.092971 containerd[1618]: time="2025-07-10T00:19:26.092851642Z" level=info msg="TearDown network for sandbox \"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745\" successfully"
Jul 10 00:19:26.092971 containerd[1618]: time="2025-07-10T00:19:26.092871455Z" level=info msg="StopPodSandbox for \"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745\" returns successfully"
Jul 10 00:19:26.115133 containerd[1618]: time="2025-07-10T00:19:26.115006185Z" level=info msg="RemovePodSandbox for \"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745\""
Jul 10 00:19:26.115133 containerd[1618]: time="2025-07-10T00:19:26.115038187Z" level=info msg="Forcibly stopping sandbox \"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745\""
Jul 10 00:19:26.271660 containerd[1618]: 2025-07-10 00:19:26.239 [WARNING][6323] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0"
Jul 10 00:19:26.271660 containerd[1618]: 2025-07-10 00:19:26.239 [INFO][6323] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745"
Jul 10 00:19:26.271660 containerd[1618]: 2025-07-10 00:19:26.239 [INFO][6323] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" iface="eth0" netns=""
Jul 10 00:19:26.271660 containerd[1618]: 2025-07-10 00:19:26.239 [INFO][6323] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745"
Jul 10 00:19:26.271660 containerd[1618]: 2025-07-10 00:19:26.239 [INFO][6323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745"
Jul 10 00:19:26.271660 containerd[1618]: 2025-07-10 00:19:26.261 [INFO][6330] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" HandleID="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0"
Jul 10 00:19:26.271660 containerd[1618]: 2025-07-10 00:19:26.261 [INFO][6330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 10 00:19:26.271660 containerd[1618]: 2025-07-10 00:19:26.261 [INFO][6330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 10 00:19:26.271660 containerd[1618]: 2025-07-10 00:19:26.266 [WARNING][6330] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" HandleID="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0"
Jul 10 00:19:26.271660 containerd[1618]: 2025-07-10 00:19:26.266 [INFO][6330] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" HandleID="k8s-pod-network.23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--j2vlz-eth0"
Jul 10 00:19:26.271660 containerd[1618]: 2025-07-10 00:19:26.266 [INFO][6330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 10 00:19:26.271660 containerd[1618]: 2025-07-10 00:19:26.269 [INFO][6323] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745"
Jul 10 00:19:26.272966 containerd[1618]: time="2025-07-10T00:19:26.271752680Z" level=info msg="TearDown network for sandbox \"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745\" successfully"
Jul 10 00:19:26.274675 containerd[1618]: time="2025-07-10T00:19:26.274599442Z" level=info msg="Ensure that sandbox 23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745 in task-service has been cleanup successfully"
Jul 10 00:19:26.276606 containerd[1618]: time="2025-07-10T00:19:26.276595182Z" level=info msg="RemovePodSandbox \"23196c29110e6eb5896c4543d218e6b635630440152aa444a920af329f87f745\" returns successfully"
Jul 10 00:19:26.280110 containerd[1618]: time="2025-07-10T00:19:26.280091270Z" level=info msg="StopPodSandbox for \"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b\""
Jul 10 00:19:26.327656 containerd[1618]: 2025-07-10 00:19:26.306 [WARNING][6344] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0"
Jul 10 00:19:26.327656 containerd[1618]: 2025-07-10 00:19:26.306 [INFO][6344] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b"
Jul 10 00:19:26.327656 containerd[1618]: 2025-07-10 00:19:26.306 [INFO][6344] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" iface="eth0" netns=""
Jul 10 00:19:26.327656 containerd[1618]: 2025-07-10 00:19:26.306 [INFO][6344] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b"
Jul 10 00:19:26.327656 containerd[1618]: 2025-07-10 00:19:26.306 [INFO][6344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b"
Jul 10 00:19:26.327656 containerd[1618]: 2025-07-10 00:19:26.319 [INFO][6352] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" HandleID="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0"
Jul 10 00:19:26.327656 containerd[1618]: 2025-07-10 00:19:26.319 [INFO][6352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 10 00:19:26.327656 containerd[1618]: 2025-07-10 00:19:26.320 [INFO][6352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 10 00:19:26.327656 containerd[1618]: 2025-07-10 00:19:26.323 [WARNING][6352] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" HandleID="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0"
Jul 10 00:19:26.327656 containerd[1618]: 2025-07-10 00:19:26.323 [INFO][6352] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" HandleID="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0"
Jul 10 00:19:26.327656 containerd[1618]: 2025-07-10 00:19:26.324 [INFO][6352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 10 00:19:26.327656 containerd[1618]: 2025-07-10 00:19:26.326 [INFO][6344] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b"
Jul 10 00:19:26.329396 containerd[1618]: time="2025-07-10T00:19:26.327686266Z" level=info msg="TearDown network for sandbox \"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b\" successfully"
Jul 10 00:19:26.329396 containerd[1618]: time="2025-07-10T00:19:26.327720146Z" level=info msg="StopPodSandbox for \"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b\" returns successfully"
Jul 10 00:19:26.329396 containerd[1618]: time="2025-07-10T00:19:26.328039296Z" level=info msg="RemovePodSandbox for \"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b\""
Jul 10 00:19:26.329396 containerd[1618]: time="2025-07-10T00:19:26.328054782Z" level=info msg="Forcibly stopping sandbox \"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b\""
Jul 10 00:19:26.379870 containerd[1618]: 2025-07-10 00:19:26.358 [WARNING][6366] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0"
Jul 10 00:19:26.379870 containerd[1618]: 2025-07-10 00:19:26.358 [INFO][6366] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b"
Jul 10 00:19:26.379870 containerd[1618]: 2025-07-10 00:19:26.358 [INFO][6366] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" iface="eth0" netns=""
Jul 10 00:19:26.379870 containerd[1618]: 2025-07-10 00:19:26.358 [INFO][6366] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b"
Jul 10 00:19:26.379870 containerd[1618]: 2025-07-10 00:19:26.358 [INFO][6366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b"
Jul 10 00:19:26.379870 containerd[1618]: 2025-07-10 00:19:26.372 [INFO][6373] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" HandleID="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0"
Jul 10 00:19:26.379870 containerd[1618]: 2025-07-10 00:19:26.372 [INFO][6373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 10 00:19:26.379870 containerd[1618]: 2025-07-10 00:19:26.372 [INFO][6373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 10 00:19:26.379870 containerd[1618]: 2025-07-10 00:19:26.376 [WARNING][6373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" HandleID="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0"
Jul 10 00:19:26.379870 containerd[1618]: 2025-07-10 00:19:26.376 [INFO][6373] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" HandleID="k8s-pod-network.91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b" Workload="localhost-k8s-calico--apiserver--5bcf4cfc7f--7945z-eth0"
Jul 10 00:19:26.379870 containerd[1618]: 2025-07-10 00:19:26.377 [INFO][6373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 10 00:19:26.379870 containerd[1618]: 2025-07-10 00:19:26.378 [INFO][6366] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b"
Jul 10 00:19:26.380855 containerd[1618]: time="2025-07-10T00:19:26.379854284Z" level=info msg="TearDown network for sandbox \"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b\" successfully"
Jul 10 00:19:26.382792 containerd[1618]: time="2025-07-10T00:19:26.382774527Z" level=info msg="Ensure that sandbox 91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b in task-service has been cleanup successfully"
Jul 10 00:19:26.384266 containerd[1618]: time="2025-07-10T00:19:26.384254603Z" level=info msg="RemovePodSandbox \"91f49b0b0c82cb0ba2e1d004966ca98becfbe67252811d59a3670ea56087923b\" returns successfully"
Jul 10 00:19:28.697955 systemd[1]: Started sshd@19-139.178.70.100:22-139.178.68.195:43280.service - OpenSSH per-connection server daemon (139.178.68.195:43280).
Jul 10 00:19:28.822512 sshd[6386]: Accepted publickey for core from 139.178.68.195 port 43280 ssh2: RSA SHA256:4dbLs3K8zeCUdpJVvc+oLD6Wxu1uro36XJoOlJl6xXw
Jul 10 00:19:28.824352 sshd-session[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:19:28.827898 systemd-logind[1589]: New session 22 of user core.
Jul 10 00:19:28.833529 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 10 00:19:29.571470 sshd[6388]: Connection closed by 139.178.68.195 port 43280
Jul 10 00:19:29.575069 systemd[1]: sshd@19-139.178.70.100:22-139.178.68.195:43280.service: Deactivated successfully.
Jul 10 00:19:29.571531 sshd-session[6386]: pam_unix(sshd:session): session closed for user core
Jul 10 00:19:29.577116 systemd[1]: session-22.scope: Deactivated successfully.
Jul 10 00:19:29.579001 systemd-logind[1589]: Session 22 logged out. Waiting for processes to exit.
Jul 10 00:19:29.586661 systemd-logind[1589]: Removed session 22.
Jul 10 00:19:30.202815 containerd[1618]: time="2025-07-10T00:19:30.202778429Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f90873fd3cb87b8aa9a96512b14a2ac653258afd1cbd00a321ca809106cb6ae1\" id:\"c02162809f6bb7c886e83626e1465ab155a46f28e82de4e4a3bea6b28d2ea484\" pid:6411 exited_at:{seconds:1752106770 nanos:19109139}"