Oct 13 05:49:19.705036 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Oct 12 22:37:12 -00 2025
Oct 13 05:49:19.705052 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039
Oct 13 05:49:19.705058 kernel: Disabled fast string operations
Oct 13 05:49:19.705062 kernel: BIOS-provided physical RAM map:
Oct 13 05:49:19.705066 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Oct 13 05:49:19.705070 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Oct 13 05:49:19.705076 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Oct 13 05:49:19.705081 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Oct 13 05:49:19.705085 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Oct 13 05:49:19.705089 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Oct 13 05:49:19.705093 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Oct 13 05:49:19.705098 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Oct 13 05:49:19.705102 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Oct 13 05:49:19.705106 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Oct 13 05:49:19.705112 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Oct 13 05:49:19.705117 kernel: NX (Execute Disable) protection: active
Oct 13 05:49:19.705122 kernel: APIC: Static calls initialized
Oct 13 05:49:19.705127 kernel: SMBIOS 2.7 present.
Oct 13 05:49:19.705132 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Oct 13 05:49:19.705137 kernel: DMI: Memory slots populated: 1/128
Oct 13 05:49:19.705142 kernel: vmware: hypercall mode: 0x00
Oct 13 05:49:19.705147 kernel: Hypervisor detected: VMware
Oct 13 05:49:19.705151 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Oct 13 05:49:19.705156 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Oct 13 05:49:19.705161 kernel: vmware: using clock offset of 3393652025 ns
Oct 13 05:49:19.705166 kernel: tsc: Detected 3408.000 MHz processor
Oct 13 05:49:19.705171 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 13 05:49:19.705176 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 13 05:49:19.705181 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Oct 13 05:49:19.705186 kernel: total RAM covered: 3072M
Oct 13 05:49:19.705192 kernel: Found optimal setting for mtrr clean up
Oct 13 05:49:19.705197 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Oct 13 05:49:19.705202 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Oct 13 05:49:19.705207 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 13 05:49:19.705212 kernel: Using GB pages for direct mapping
Oct 13 05:49:19.705217 kernel: ACPI: Early table checksum verification disabled
Oct 13 05:49:19.705222 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Oct 13 05:49:19.705226 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Oct 13 05:49:19.705231 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Oct 13 05:49:19.705237 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Oct 13 05:49:19.705244 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Oct 13 05:49:19.705249 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Oct 13 05:49:19.705254 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Oct 13 05:49:19.705259 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Oct 13 05:49:19.705265 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Oct 13 05:49:19.705271 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Oct 13 05:49:19.705276 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Oct 13 05:49:19.705281 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Oct 13 05:49:19.705286 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Oct 13 05:49:19.705291 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Oct 13 05:49:19.705296 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Oct 13 05:49:19.705301 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Oct 13 05:49:19.705306 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Oct 13 05:49:19.705311 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Oct 13 05:49:19.705317 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Oct 13 05:49:19.705322 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Oct 13 05:49:19.705327 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Oct 13 05:49:19.705332 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Oct 13 05:49:19.705337 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 13 05:49:19.705343 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 13 05:49:19.705348 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Oct 13 05:49:19.705353 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff]
Oct 13 05:49:19.705358 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff]
Oct 13 05:49:19.705364 kernel: Zone ranges:
Oct 13 05:49:19.705369 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 13 05:49:19.705375 kernel:   DMA32    [mem 0x0000000001000000-0x000000007fffffff]
Oct 13 05:49:19.705380 kernel:   Normal   empty
Oct 13 05:49:19.705385 kernel:   Device   empty
Oct 13 05:49:19.705390 kernel: Movable zone start for each node
Oct 13 05:49:19.705395 kernel: Early memory node ranges
Oct 13 05:49:19.705400 kernel:   node   0: [mem 0x0000000000001000-0x000000000009dfff]
Oct 13 05:49:19.705405 kernel:   node   0: [mem 0x0000000000100000-0x000000007fedffff]
Oct 13 05:49:19.705410 kernel:   node   0: [mem 0x000000007ff00000-0x000000007fffffff]
Oct 13 05:49:19.705416 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Oct 13 05:49:19.705421 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 13 05:49:19.705426 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Oct 13 05:49:19.705431 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Oct 13 05:49:19.705436 kernel: ACPI: PM-Timer IO Port: 0x1008
Oct 13 05:49:19.705441 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Oct 13 05:49:19.705446 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Oct 13 05:49:19.705451 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Oct 13 05:49:19.705456 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Oct 13 05:49:19.705462 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Oct 13 05:49:19.705467 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Oct 13 05:49:19.705472 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Oct 13 05:49:19.705477 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Oct 13 05:49:19.705482 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Oct 13 05:49:19.705487 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Oct 13 05:49:19.705492 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Oct 13 05:49:19.705497 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Oct 13 05:49:19.705502 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Oct 13 05:49:19.705507 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Oct 13 05:49:19.705513 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Oct 13 05:49:19.705518 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Oct 13 05:49:19.705523 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Oct 13 05:49:19.705528 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Oct 13 05:49:19.705533 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Oct 13 05:49:19.705538 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Oct 13 05:49:19.705543 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Oct 13 05:49:19.705548 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Oct 13 05:49:19.705553 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Oct 13 05:49:19.705558 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Oct 13 05:49:19.705564 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Oct 13 05:49:19.705569 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Oct 13 05:49:19.705574 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Oct 13 05:49:19.705579 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Oct 13 05:49:19.705584 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Oct 13 05:49:19.705588 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Oct 13 05:49:19.705594 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Oct 13 05:49:19.705599 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Oct 13 05:49:19.705604 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Oct 13 05:49:19.705609 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Oct 13 05:49:19.705614 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Oct 13 05:49:19.705619 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Oct 13 05:49:19.705624 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Oct 13 05:49:19.705629 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Oct 13 05:49:19.705634 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Oct 13 05:49:19.705639 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Oct 13 05:49:19.705649 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Oct 13 05:49:19.705654 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Oct 13 05:49:19.705659 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Oct 13 05:49:19.705666 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Oct 13 05:49:19.705671 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Oct 13 05:49:19.705676 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Oct 13 05:49:19.705682 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Oct 13 05:49:19.705687 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Oct 13 05:49:19.705692 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Oct 13 05:49:19.705697 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Oct 13 05:49:19.705703 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Oct 13 05:49:19.705708 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Oct 13 05:49:19.705714 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Oct 13 05:49:19.705719 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Oct 13 05:49:19.705725 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Oct 13 05:49:19.705730 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Oct 13 05:49:19.705735 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Oct 13 05:49:19.705740 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Oct 13 05:49:19.705746 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Oct 13 05:49:19.705751 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Oct 13 05:49:19.705756 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Oct 13 05:49:19.705761 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Oct 13 05:49:19.705768 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Oct 13 05:49:19.705773 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Oct 13 05:49:19.705779 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Oct 13 05:49:19.705784 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Oct 13 05:49:19.705789 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Oct 13 05:49:19.705794 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Oct 13 05:49:19.705800 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Oct 13 05:49:19.705805 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Oct 13 05:49:19.705810 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Oct 13 05:49:19.705816 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Oct 13 05:49:19.705822 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Oct 13 05:49:19.705827 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Oct 13 05:49:19.705833 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Oct 13 05:49:19.705838 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Oct 13 05:49:19.705843 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Oct 13 05:49:19.705848 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Oct 13 05:49:19.705854 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Oct 13 05:49:19.705859 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Oct 13 05:49:19.705864 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Oct 13 05:49:19.705870 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Oct 13 05:49:19.705876 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Oct 13 05:49:19.705881 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Oct 13 05:49:19.705900 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Oct 13 05:49:19.705906 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Oct 13 05:49:19.705911 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Oct 13 05:49:19.705916 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Oct 13 05:49:19.705922 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Oct 13 05:49:19.705927 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Oct 13 05:49:19.705933 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Oct 13 05:49:19.705940 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Oct 13 05:49:19.705946 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Oct 13 05:49:19.705951 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Oct 13 05:49:19.705956 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Oct 13 05:49:19.705961 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Oct 13 05:49:19.705967 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Oct 13 05:49:19.705972 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Oct 13 05:49:19.705977 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Oct 13 05:49:19.705982 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Oct 13 05:49:19.705988 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Oct 13 05:49:19.705994 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Oct 13 05:49:19.706000 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Oct 13 05:49:19.706005 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Oct 13 05:49:19.706010 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Oct 13 05:49:19.706015 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Oct 13 05:49:19.706021 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Oct 13 05:49:19.706026 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Oct 13 05:49:19.706031 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Oct 13 05:49:19.706037 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Oct 13 05:49:19.706042 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Oct 13 05:49:19.706048 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Oct 13 05:49:19.706053 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Oct 13 05:49:19.706059 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Oct 13 05:49:19.706064 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Oct 13 05:49:19.706069 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Oct 13 05:49:19.706075 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Oct 13 05:49:19.706080 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Oct 13 05:49:19.706085 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Oct 13 05:49:19.706090 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Oct 13 05:49:19.706096 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Oct 13 05:49:19.706102 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Oct 13 05:49:19.706107 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Oct 13 05:49:19.706113 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Oct 13 05:49:19.706118 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Oct 13 05:49:19.706123 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Oct 13 05:49:19.706129 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Oct 13 05:49:19.706134 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Oct 13 05:49:19.706139 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Oct 13 05:49:19.706145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Oct 13 05:49:19.706152 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 13 05:49:19.706157 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Oct 13 05:49:19.706163 kernel: TSC deadline timer available
Oct 13 05:49:19.706168 kernel: CPU topo: Max. logical packages: 128
Oct 13 05:49:19.706173 kernel: CPU topo: Max. logical dies: 128
Oct 13 05:49:19.706179 kernel: CPU topo: Max. dies per package: 1
Oct 13 05:49:19.706184 kernel: CPU topo: Max. threads per core: 1
Oct 13 05:49:19.706189 kernel: CPU topo: Num. cores per package: 1
Oct 13 05:49:19.706195 kernel: CPU topo: Num. threads per package: 1
Oct 13 05:49:19.706200 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs
Oct 13 05:49:19.706207 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Oct 13 05:49:19.706212 kernel: Booting paravirtualized kernel on VMware hypervisor
Oct 13 05:49:19.706218 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 13 05:49:19.706223 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Oct 13 05:49:19.706229 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Oct 13 05:49:19.706234 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Oct 13 05:49:19.706239 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Oct 13 05:49:19.706245 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Oct 13 05:49:19.706250 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Oct 13 05:49:19.706256 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Oct 13 05:49:19.706262 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Oct 13 05:49:19.706267 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Oct 13 05:49:19.706273 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Oct 13 05:49:19.706278 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Oct 13 05:49:19.706283 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Oct 13 05:49:19.706288 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Oct 13 05:49:19.706294 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Oct 13 05:49:19.706299 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Oct 13 05:49:19.706305 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Oct 13 05:49:19.706311 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Oct 13 05:49:19.706316 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Oct 13 05:49:19.706321 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Oct 13 05:49:19.706327 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039
Oct 13 05:49:19.706333 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 13 05:49:19.706338 kernel: random: crng init done
Oct 13 05:49:19.706344 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Oct 13 05:49:19.706350 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Oct 13 05:49:19.706356 kernel: printk: log_buf_len min size: 262144 bytes
Oct 13 05:49:19.706361 kernel: printk: log_buf_len: 1048576 bytes
Oct 13 05:49:19.706366 kernel: printk: early log buf free: 245592(93%)
Oct 13 05:49:19.706372 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 13 05:49:19.706377 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 13 05:49:19.706383 kernel: Fallback order for Node 0: 0
Oct 13 05:49:19.706388 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524157
Oct 13 05:49:19.706393 kernel: Policy zone: DMA32
Oct 13 05:49:19.706400 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 05:49:19.706405 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Oct 13 05:49:19.706411 kernel: ftrace: allocating 40139 entries in 157 pages
Oct 13 05:49:19.706416 kernel: ftrace: allocated 157 pages with 5 groups
Oct 13 05:49:19.706421 kernel: Dynamic Preempt: voluntary
Oct 13 05:49:19.706427 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 05:49:19.706433 kernel: rcu: RCU event tracing is enabled.
Oct 13 05:49:19.706438 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Oct 13 05:49:19.706444 kernel: Trampoline variant of Tasks RCU enabled.
Oct 13 05:49:19.706454 kernel: Rude variant of Tasks RCU enabled.
Oct 13 05:49:19.706459 kernel: Tracing variant of Tasks RCU enabled.
Oct 13 05:49:19.706465 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 05:49:19.706470 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Oct 13 05:49:19.706476 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Oct 13 05:49:19.706481 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Oct 13 05:49:19.706487 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Oct 13 05:49:19.706492 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Oct 13 05:49:19.706498 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Oct 13 05:49:19.706503 kernel: Console: colour VGA+ 80x25
Oct 13 05:49:19.706510 kernel: printk: legacy console [tty0] enabled
Oct 13 05:49:19.706515 kernel: printk: legacy console [ttyS0] enabled
Oct 13 05:49:19.706521 kernel: ACPI: Core revision 20240827
Oct 13 05:49:19.706526 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Oct 13 05:49:19.706531 kernel: APIC: Switch to symmetric I/O mode setup
Oct 13 05:49:19.706537 kernel: x2apic enabled
Oct 13 05:49:19.706542 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 13 05:49:19.706548 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 13 05:49:19.706553 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Oct 13 05:49:19.706560 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Oct 13 05:49:19.706565 kernel: Disabled fast string operations
Oct 13 05:49:19.706571 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Oct 13 05:49:19.706576 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Oct 13 05:49:19.706581 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 13 05:49:19.706587 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Oct 13 05:49:19.706592 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Oct 13 05:49:19.706598 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Oct 13 05:49:19.706604 kernel: RETBleed: Mitigation: Enhanced IBRS
Oct 13 05:49:19.706610 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 13 05:49:19.706616 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 13 05:49:19.706621 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 13 05:49:19.706627 kernel: SRBDS: Unknown: Dependent on hypervisor status
Oct 13 05:49:19.706632 kernel: GDS: Unknown: Dependent on hypervisor status
Oct 13 05:49:19.706638 kernel: active return thunk: its_return_thunk
Oct 13 05:49:19.706643 kernel: ITS: Mitigation: Aligned branch/return thunks
Oct 13 05:49:19.706648 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 13 05:49:19.706654 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 13 05:49:19.706660 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 13 05:49:19.706666 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 13 05:49:19.706671 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 13 05:49:19.706677 kernel: Freeing SMP alternatives memory: 32K
Oct 13 05:49:19.706682 kernel: pid_max: default: 131072 minimum: 1024
Oct 13 05:49:19.706687 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 13 05:49:19.706693 kernel: landlock: Up and running.
Oct 13 05:49:19.706698 kernel: SELinux: Initializing.
Oct 13 05:49:19.706704 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 13 05:49:19.706711 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 13 05:49:19.706716 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Oct 13 05:49:19.706722 kernel: Performance Events: Skylake events, core PMU driver.
Oct 13 05:49:19.706727 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Oct 13 05:49:19.706737 kernel: core: CPUID marked event: 'instructions' unavailable
Oct 13 05:49:19.706743 kernel: core: CPUID marked event: 'bus cycles' unavailable
Oct 13 05:49:19.706748 kernel: core: CPUID marked event: 'cache references' unavailable
Oct 13 05:49:19.706753 kernel: core: CPUID marked event: 'cache misses' unavailable
Oct 13 05:49:19.706760 kernel: core: CPUID marked event: 'branch instructions' unavailable
Oct 13 05:49:19.706765 kernel: core: CPUID marked event: 'branch misses' unavailable
Oct 13 05:49:19.706771 kernel: ... version:                1
Oct 13 05:49:19.706782 kernel: ... bit width:              48
Oct 13 05:49:19.706792 kernel: ... generic registers:      4
Oct 13 05:49:19.706798 kernel: ... value mask:             0000ffffffffffff
Oct 13 05:49:19.706804 kernel: ... max period:             000000007fffffff
Oct 13 05:49:19.706809 kernel: ... fixed-purpose events:   0
Oct 13 05:49:19.706815 kernel: ... event mask:             000000000000000f
Oct 13 05:49:19.706825 kernel: signal: max sigframe size: 1776
Oct 13 05:49:19.706830 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 05:49:19.706836 kernel: rcu: Max phase no-delay instances is 400.
Oct 13 05:49:19.706842 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level
Oct 13 05:49:19.706847 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 13 05:49:19.706856 kernel: smp: Bringing up secondary CPUs ...
Oct 13 05:49:19.706862 kernel: smpboot: x86: Booting SMP configuration:
Oct 13 05:49:19.706867 kernel: .... node #0, CPUs: #1
Oct 13 05:49:19.706872 kernel: Disabled fast string operations
Oct 13 05:49:19.706878 kernel: smp: Brought up 1 node, 2 CPUs
Oct 13 05:49:19.706899 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Oct 13 05:49:19.706907 kernel: Memory: 1924228K/2096628K available (14336K kernel code, 2443K rwdata, 10000K rodata, 54096K init, 2852K bss, 161016K reserved, 0K cma-reserved)
Oct 13 05:49:19.706913 kernel: devtmpfs: initialized
Oct 13 05:49:19.706918 kernel: x86/mm: Memory block size: 128MB
Oct 13 05:49:19.706927 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Oct 13 05:49:19.706933 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 05:49:19.706939 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Oct 13 05:49:19.706944 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 05:49:19.706952 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 05:49:19.706961 kernel: audit: initializing netlink subsys (disabled)
Oct 13 05:49:19.706966 kernel: audit: type=2000 audit(1760334556.268:1): state=initialized audit_enabled=0 res=1
Oct 13 05:49:19.706972 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 05:49:19.706977 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 13 05:49:19.706983 kernel: cpuidle: using governor menu
Oct 13 05:49:19.706992 kernel: Simple Boot Flag at 0x36 set to 0x80
Oct 13 05:49:19.706997 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 05:49:19.707003 kernel: dca service started, version 1.12.1
Oct 13 05:49:19.707015 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f]
Oct 13 05:49:19.707022 kernel: PCI: Using configuration type 1 for base access
Oct 13 05:49:19.707029 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 13 05:49:19.707034 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 05:49:19.707040 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 13 05:49:19.707046 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 05:49:19.707052 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 05:49:19.707058 kernel: ACPI: Added _OSI(Module Device)
Oct 13 05:49:19.707063 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 05:49:19.707070 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 05:49:19.707076 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 05:49:19.707082 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Oct 13 05:49:19.707087 kernel: ACPI: Interpreter enabled
Oct 13 05:49:19.707093 kernel: ACPI: PM: (supports S0 S1 S5)
Oct 13 05:49:19.707099 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 13 05:49:19.707105 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 13 05:49:19.707111 kernel: PCI: Using E820 reservations for host bridge windows
Oct 13 05:49:19.707116 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Oct 13 05:49:19.707123 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Oct 13 05:49:19.707206 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 13 05:49:19.707261 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Oct 13 05:49:19.707310 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Oct 13 05:49:19.707318 kernel: PCI host bridge to bus 0000:00
Oct 13 05:49:19.707369 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 13 05:49:19.707414 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Oct 13 05:49:19.707476 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 13 05:49:19.707521 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 13 05:49:19.707565 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Oct 13 05:49:19.707617 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Oct 13 05:49:19.707678 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint
Oct 13 05:49:19.707738 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge
Oct 13 05:49:19.707791 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Oct 13 05:49:19.707850 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Oct 13 05:49:19.708377 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint
Oct 13 05:49:19.708436 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f]
Oct 13 05:49:19.708491 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Oct 13 05:49:19.708542 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Oct 13 05:49:19.708591 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Oct 13 05:49:19.708641 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk
Oct 13 05:49:19.708695 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 13 05:49:19.708745 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Oct 13 05:49:19.708797 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Oct 13 05:49:19.708851 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint
Oct 13 05:49:19.708915 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf]
Oct 13 05:49:19.708967 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit]
Oct 13 05:49:19.709023 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint
Oct 13 05:49:19.709072 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f]
Oct 13 05:49:19.709121 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref]
Oct 13 05:49:19.709173 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff]
Oct 13 05:49:19.709221 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref]
Oct 13 05:49:19.709273 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 13 05:49:19.710967 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge
Oct 13 05:49:19.711023 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Oct 13 05:49:19.711075 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Oct 13 05:49:19.711126 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Oct 13 05:49:19.711181 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Oct 13 05:49:19.711237 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 13 05:49:19.711289 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Oct 13 05:49:19.711340 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Oct 13 05:49:19.711390 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Oct 13 05:49:19.711440 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Oct 13 05:49:19.711507 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 13 05:49:19.711562 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Oct 13 05:49:19.711618 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Oct 13 05:49:19.711687 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Oct 13 05:49:19.711749 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Oct 13 05:49:19.711805 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Oct 13 05:49:19.711864 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 13 05:49:19.711942 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Oct 13 05:49:19.712061 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Oct 13 05:49:19.712118 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 13 05:49:19.712168 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 13 05:49:19.712219 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.712276 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.712328 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 13 05:49:19.712382 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 13 05:49:19.712433 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 13 05:49:19.712483 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.712537 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.712588 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 13 05:49:19.712638 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 13 05:49:19.712689 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 13 05:49:19.712742 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.712796 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.712847 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 13 05:49:19.712912 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 13 05:49:19.712966 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 13 05:49:19.713016 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.713073 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.713127 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 13 05:49:19.713178 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 13 05:49:19.713229 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 13 
05:49:19.713279 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.713333 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.713385 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 13 05:49:19.713441 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 13 05:49:19.713498 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 13 05:49:19.713554 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.713619 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.713672 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 13 05:49:19.713723 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 13 05:49:19.713773 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 13 05:49:19.713824 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.713878 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.714050 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 13 05:49:19.714101 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 13 05:49:19.714152 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 13 05:49:19.714202 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 13 05:49:19.714252 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.714307 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.714357 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 13 05:49:19.714410 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 13 05:49:19.714465 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 13 05:49:19.714515 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 13 05:49:19.714565 kernel: pci 0000:00:16.2: PME# supported 
from D0 D3hot D3cold Oct 13 05:49:19.714622 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.714673 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 13 05:49:19.714722 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 13 05:49:19.714775 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 13 05:49:19.714825 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.714879 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.714968 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 13 05:49:19.715021 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 13 05:49:19.715071 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 13 05:49:19.715121 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.715179 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.715229 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 13 05:49:19.715280 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 13 05:49:19.715967 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 13 05:49:19.716023 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.716082 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.716135 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 13 05:49:19.716189 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 13 05:49:19.716240 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 13 05:49:19.716290 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.716346 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.716397 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 13 
05:49:19.716448 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 13 05:49:19.716498 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 13 05:49:19.716551 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.716604 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.716655 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 13 05:49:19.716704 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 13 05:49:19.716754 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 13 05:49:19.716804 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 13 05:49:19.716854 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.717944 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.718007 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 13 05:49:19.718061 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 13 05:49:19.718114 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 13 05:49:19.718169 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 13 05:49:19.718219 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.718274 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.718326 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 13 05:49:19.718376 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 13 05:49:19.718427 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 13 05:49:19.718477 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 13 05:49:19.718530 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.718587 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.718638 kernel: pci 
0000:00:17.3: PCI bridge to [bus 16] Oct 13 05:49:19.718688 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 13 05:49:19.718738 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 13 05:49:19.718788 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.718843 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.718912 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 13 05:49:19.718967 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 13 05:49:19.719017 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 13 05:49:19.719067 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.719122 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.719174 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 13 05:49:19.719224 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 13 05:49:19.719277 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 13 05:49:19.719327 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.719382 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.719432 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 13 05:49:19.719482 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 13 05:49:19.719532 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 13 05:49:19.719582 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.719635 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.719689 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 13 05:49:19.719739 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 13 05:49:19.719791 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Oct 13 05:49:19.719841 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.721628 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.721691 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 13 05:49:19.721745 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 13 05:49:19.721800 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 13 05:49:19.721852 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 13 05:49:19.721932 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.721996 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.722058 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 13 05:49:19.722659 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 13 05:49:19.722718 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 13 05:49:19.722775 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 13 05:49:19.722828 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.722899 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.722955 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 13 05:49:19.723008 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 13 05:49:19.723059 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 13 05:49:19.723110 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.723167 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.723218 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 13 05:49:19.723285 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 13 05:49:19.723335 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 
13 05:49:19.723385 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.723441 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.723514 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 13 05:49:19.723583 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 13 05:49:19.723632 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 13 05:49:19.723682 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.723737 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.723787 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 13 05:49:19.723836 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 13 05:49:19.723915 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 13 05:49:19.723971 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.724027 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.724077 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 13 05:49:19.724126 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 13 05:49:19.724175 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 13 05:49:19.724224 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.724277 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:49:19.724330 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 13 05:49:19.724380 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 13 05:49:19.724429 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 13 05:49:19.724483 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.724538 kernel: pci_bus 0000:01: extended config space not accessible Oct 13 05:49:19.724590 kernel: pci 
0000:00:01.0: PCI bridge to [bus 01] Oct 13 05:49:19.724641 kernel: pci_bus 0000:02: extended config space not accessible Oct 13 05:49:19.724652 kernel: acpiphp: Slot [32] registered Oct 13 05:49:19.724658 kernel: acpiphp: Slot [33] registered Oct 13 05:49:19.724664 kernel: acpiphp: Slot [34] registered Oct 13 05:49:19.724670 kernel: acpiphp: Slot [35] registered Oct 13 05:49:19.724675 kernel: acpiphp: Slot [36] registered Oct 13 05:49:19.724681 kernel: acpiphp: Slot [37] registered Oct 13 05:49:19.724687 kernel: acpiphp: Slot [38] registered Oct 13 05:49:19.724692 kernel: acpiphp: Slot [39] registered Oct 13 05:49:19.724698 kernel: acpiphp: Slot [40] registered Oct 13 05:49:19.724704 kernel: acpiphp: Slot [41] registered Oct 13 05:49:19.724710 kernel: acpiphp: Slot [42] registered Oct 13 05:49:19.724716 kernel: acpiphp: Slot [43] registered Oct 13 05:49:19.724722 kernel: acpiphp: Slot [44] registered Oct 13 05:49:19.724728 kernel: acpiphp: Slot [45] registered Oct 13 05:49:19.724733 kernel: acpiphp: Slot [46] registered Oct 13 05:49:19.724739 kernel: acpiphp: Slot [47] registered Oct 13 05:49:19.724745 kernel: acpiphp: Slot [48] registered Oct 13 05:49:19.724751 kernel: acpiphp: Slot [49] registered Oct 13 05:49:19.724756 kernel: acpiphp: Slot [50] registered Oct 13 05:49:19.724763 kernel: acpiphp: Slot [51] registered Oct 13 05:49:19.724769 kernel: acpiphp: Slot [52] registered Oct 13 05:49:19.724775 kernel: acpiphp: Slot [53] registered Oct 13 05:49:19.724780 kernel: acpiphp: Slot [54] registered Oct 13 05:49:19.724786 kernel: acpiphp: Slot [55] registered Oct 13 05:49:19.724792 kernel: acpiphp: Slot [56] registered Oct 13 05:49:19.724797 kernel: acpiphp: Slot [57] registered Oct 13 05:49:19.724803 kernel: acpiphp: Slot [58] registered Oct 13 05:49:19.724808 kernel: acpiphp: Slot [59] registered Oct 13 05:49:19.724814 kernel: acpiphp: Slot [60] registered Oct 13 05:49:19.724821 kernel: acpiphp: Slot [61] registered Oct 13 05:49:19.724826 kernel: acpiphp: Slot 
[62] registered Oct 13 05:49:19.724832 kernel: acpiphp: Slot [63] registered Oct 13 05:49:19.724888 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Oct 13 05:49:19.724954 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Oct 13 05:49:19.725004 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Oct 13 05:49:19.725055 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Oct 13 05:49:19.725105 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Oct 13 05:49:19.725157 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Oct 13 05:49:19.725215 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Oct 13 05:49:19.725268 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Oct 13 05:49:19.725319 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Oct 13 05:49:19.725370 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Oct 13 05:49:19.725421 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Oct 13 05:49:19.725471 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 13 05:49:19.725524 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 13 05:49:19.727946 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 13 05:49:19.728017 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 13 05:49:19.728083 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 13 05:49:19.728137 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 13 05:49:19.728187 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 13 05:49:19.728240 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 13 05:49:19.728296 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 13 05:49:19.728354 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Oct 13 05:49:19.728407 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Oct 13 05:49:19.728459 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Oct 13 05:49:19.728510 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Oct 13 05:49:19.728561 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Oct 13 05:49:19.728612 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Oct 13 05:49:19.728666 kernel: pci 0000:0b:00.0: supports D1 D2 Oct 13 05:49:19.728718 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 13 05:49:19.728769 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 13 05:49:19.728821 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 13 05:49:19.728872 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 13 05:49:19.728931 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 13 05:49:19.728982 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 13 05:49:19.729033 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 13 05:49:19.729087 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 13 05:49:19.729138 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 13 05:49:19.729189 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 13 05:49:19.729239 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 13 05:49:19.729289 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 13 05:49:19.729340 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 13 05:49:19.729391 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 13 05:49:19.729442 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 13 05:49:19.729511 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 13 05:49:19.729561 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 13 05:49:19.729610 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 13 05:49:19.729658 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 13 05:49:19.729707 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 13 05:49:19.729756 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 13 05:49:19.729805 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 13 05:49:19.729857 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 13 05:49:19.733952 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 13 05:49:19.734051 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 13 05:49:19.734107 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 13 05:49:19.734117 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Oct 13 05:49:19.734123 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Oct 13 05:49:19.734130 kernel: ACPI: PCI: Interrupt link LNKB 
disabled Oct 13 05:49:19.734136 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 13 05:49:19.734144 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Oct 13 05:49:19.734150 kernel: iommu: Default domain type: Translated Oct 13 05:49:19.734156 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 13 05:49:19.734163 kernel: PCI: Using ACPI for IRQ routing Oct 13 05:49:19.734169 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 13 05:49:19.734175 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Oct 13 05:49:19.734180 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Oct 13 05:49:19.734233 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Oct 13 05:49:19.734284 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Oct 13 05:49:19.734337 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 13 05:49:19.734346 kernel: vgaarb: loaded Oct 13 05:49:19.734353 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Oct 13 05:49:19.734359 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Oct 13 05:49:19.734365 kernel: clocksource: Switched to clocksource tsc-early Oct 13 05:49:19.734371 kernel: VFS: Disk quotas dquot_6.6.0 Oct 13 05:49:19.734377 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 13 05:49:19.734383 kernel: pnp: PnP ACPI init Oct 13 05:49:19.734437 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Oct 13 05:49:19.734492 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Oct 13 05:49:19.734537 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Oct 13 05:49:19.734586 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Oct 13 05:49:19.734637 kernel: pnp 00:06: [dma 2] Oct 13 05:49:19.734688 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Oct 13 05:49:19.734734 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Oct 13 
05:49:19.734782 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Oct 13 05:49:19.734791 kernel: pnp: PnP ACPI: found 8 devices Oct 13 05:49:19.734797 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 13 05:49:19.734804 kernel: NET: Registered PF_INET protocol family Oct 13 05:49:19.734810 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 13 05:49:19.734815 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 13 05:49:19.734821 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 13 05:49:19.734827 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 13 05:49:19.734835 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 13 05:49:19.734841 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 13 05:49:19.734847 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 13 05:49:19.734854 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 13 05:49:19.734859 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 13 05:49:19.734865 kernel: NET: Registered PF_XDP protocol family Oct 13 05:49:19.735944 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Oct 13 05:49:19.736002 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 13 05:49:19.736061 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 13 05:49:19.736115 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 13 05:49:19.736167 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 13 05:49:19.736221 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Oct 13 05:49:19.736272 
kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Oct 13 05:49:19.736324 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Oct 13 05:49:19.736375 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Oct 13 05:49:19.736427 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Oct 13 05:49:19.736483 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Oct 13 05:49:19.736534 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Oct 13 05:49:19.736585 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Oct 13 05:49:19.736636 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Oct 13 05:49:19.736687 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Oct 13 05:49:19.736739 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Oct 13 05:49:19.736790 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Oct 13 05:49:19.736841 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Oct 13 05:49:19.736913 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Oct 13 05:49:19.736968 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Oct 13 05:49:19.737019 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Oct 13 05:49:19.737071 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Oct 13 05:49:19.737121 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Oct 13 05:49:19.737171 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Oct 13 05:49:19.737222 kernel: pci 
0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Oct 13 05:49:19.737272 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:49:19.737325 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Oct 13 05:49:19.737376 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:49:19.737425 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Oct 13 05:49:19.737480 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:49:19.737531 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Oct 13 05:49:19.737582 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:49:19.737632 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Oct 13 05:49:19.737685 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:49:19.737735 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Oct 13 05:49:19.737786 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:49:19.737836 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Oct 13 05:49:19.737891 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:49:19.737947 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Oct 13 05:49:19.738005 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:49:19.738057 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Oct 13 05:49:19.738111 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:49:19.738161 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Oct 13 05:49:19.738211 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space 
Oct 13 05:49:19.738262 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.738312 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.738362 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.738412 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.738461 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.738514 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.738563 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.738613 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.738664 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.738714 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.738764 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.738816 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.738865 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.738939 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.738990 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.739041 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.739091 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.739141 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.739191 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.739241 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.739291 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.739344 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.739394 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.739444 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.739494 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.739545 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.739595 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.739645 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.739695 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.739745 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.739797 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.739847 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.739908 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.739961 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.740011 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.740061 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.740111 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.740161 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.740211 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.740264 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.740314 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.740364 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.740413 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.740467 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.740516 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.740566 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.740615 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.740665 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.740715 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.740767 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.740817 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.740868 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.740925 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.740975 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.741025 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.741078 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.741128 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.741178 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.741228 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.741278 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.741328 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.741377 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.741427 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.741481 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space
Oct 13 05:49:19.741534 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign
Oct 13 05:49:19.741585 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Oct 13 05:49:19.741636 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Oct 13 05:49:19.741686 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Oct 13 05:49:19.741736 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Oct 13 05:49:19.741785 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Oct 13 05:49:19.741839 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned
Oct 13 05:49:19.741898 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Oct 13 05:49:19.741953 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Oct 13 05:49:19.742003 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Oct 13 05:49:19.742053 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Oct 13 05:49:19.742104 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Oct 13 05:49:19.742154 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Oct 13 05:49:19.742204 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Oct 13 05:49:19.742254 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Oct 13 05:49:19.742306 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Oct 13 05:49:19.742366 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Oct 13 05:49:19.742416 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Oct 13 05:49:19.742469 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Oct 13 05:49:19.742520 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Oct 13 05:49:19.742570 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Oct 13 05:49:19.742621 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Oct 13 05:49:19.742671 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Oct 13 05:49:19.742721 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Oct 13 05:49:19.742771 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Oct 13 05:49:19.742822 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Oct 13 05:49:19.742874 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Oct 13 05:49:19.742933 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Oct 13 05:49:19.742983 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Oct 13 05:49:19.743033 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Oct 13 05:49:19.743083 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Oct 13 05:49:19.743133 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Oct 13 05:49:19.743183 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Oct 13 05:49:19.743235 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Oct 13 05:49:19.743291 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned
Oct 13 05:49:19.743343 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Oct 13 05:49:19.743393 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Oct 13 05:49:19.743443 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Oct 13 05:49:19.743496 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Oct 13 05:49:19.743547 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Oct 13 05:49:19.743598 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Oct 13 05:49:19.743648 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Oct 13 05:49:19.743701 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Oct 13 05:49:19.743752 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Oct 13 05:49:19.743801 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Oct 13 05:49:19.743851 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Oct 13 05:49:19.743921 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Oct 13 05:49:19.743974 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Oct 13 05:49:19.744023 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Oct 13 05:49:19.744072 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Oct 13 05:49:19.744126 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Oct 13 05:49:19.744176 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Oct 13 05:49:19.744226 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Oct 13 05:49:19.744277 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Oct 13 05:49:19.744327 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Oct 13 05:49:19.744378 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Oct 13 05:49:19.744428 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Oct 13 05:49:19.744482 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Oct 13 05:49:19.744535 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Oct 13 05:49:19.744585 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Oct 13 05:49:19.744634 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Oct 13 05:49:19.744684 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Oct 13 05:49:19.744735 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Oct 13 05:49:19.744786 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Oct 13 05:49:19.744835 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Oct 13 05:49:19.744894 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Oct 13 05:49:19.744950 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Oct 13 05:49:19.745000 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Oct 13 05:49:19.745050 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Oct 13 05:49:19.745100 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Oct 13 05:49:19.745151 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Oct 13 05:49:19.745201 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Oct 13 05:49:19.745249 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Oct 13 05:49:19.745299 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Oct 13 05:49:19.745348 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Oct 13 05:49:19.745400 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Oct 13 05:49:19.745450 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Oct 13 05:49:19.745501 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Oct 13 05:49:19.745550 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Oct 13 05:49:19.745600 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Oct 13 05:49:19.745649 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Oct 13 05:49:19.745700 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Oct 13 05:49:19.745750 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Oct 13 05:49:19.745803 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Oct 13 05:49:19.745855 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Oct 13 05:49:19.745927 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Oct 13 05:49:19.745979 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Oct 13 05:49:19.746028 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Oct 13 05:49:19.746077 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Oct 13 05:49:19.746130 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Oct 13 05:49:19.746183 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Oct 13 05:49:19.746232 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Oct 13 05:49:19.746282 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Oct 13 05:49:19.746334 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Oct 13 05:49:19.746384 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Oct 13 05:49:19.746434 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Oct 13 05:49:19.746484 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Oct 13 05:49:19.746535 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Oct 13 05:49:19.746584 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Oct 13 05:49:19.746635 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Oct 13 05:49:19.746686 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Oct 13 05:49:19.746735 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Oct 13 05:49:19.746785 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Oct 13 05:49:19.746835 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Oct 13 05:49:19.746895 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Oct 13 05:49:19.746946 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Oct 13 05:49:19.747000 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Oct 13 05:49:19.747050 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Oct 13 05:49:19.747100 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Oct 13 05:49:19.747160 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Oct 13 05:49:19.747210 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Oct 13 05:49:19.747260 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Oct 13 05:49:19.747311 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Oct 13 05:49:19.747361 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Oct 13 05:49:19.747413 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Oct 13 05:49:19.747468 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window]
Oct 13 05:49:19.747513 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window]
Oct 13 05:49:19.747557 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window]
Oct 13 05:49:19.747600 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window]
Oct 13 05:49:19.747644 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window]
Oct 13 05:49:19.747692 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff]
Oct 13 05:49:19.747741 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]
Oct 13 05:49:19.747786 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref]
Oct 13 05:49:19.747832 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window]
Oct 13 05:49:19.747876 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window]
Oct 13 05:49:19.747940 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window]
Oct 13 05:49:19.747986 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window]
Oct 13 05:49:19.748032 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window]
Oct 13 05:49:19.748086 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff]
Oct 13 05:49:19.748133 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff]
Oct 13 05:49:19.748179 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]
Oct 13 05:49:19.748229 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff]
Oct 13 05:49:19.748275 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff]
Oct 13 05:49:19.748321 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref]
Oct 13 05:49:19.748370 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff]
Oct 13 05:49:19.748419 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff]
Oct 13 05:49:19.748465 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref]
Oct 13 05:49:19.748516 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff]
Oct 13 05:49:19.748563 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref]
Oct 13 05:49:19.748613 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff]
Oct 13 05:49:19.748660 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref]
Oct 13 05:49:19.748712 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff]
Oct 13 05:49:19.748759 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref]
Oct 13 05:49:19.748809 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff]
Oct 13 05:49:19.748856 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref]
Oct 13 05:49:19.748914 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff]
Oct 13 05:49:19.748960 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref]
Oct 13 05:49:19.749015 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff]
Oct 13 05:49:19.749062 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff]
Oct 13 05:49:19.749107 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref]
Oct 13 05:49:19.749157 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff]
Oct 13 05:49:19.749204 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff]
Oct 13 05:49:19.749251 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref]
Oct 13 05:49:19.749303 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff]
Oct 13 05:49:19.749349 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff]
Oct 13 05:49:19.749395 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref]
Oct 13 05:49:19.749445 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff]
Oct 13 05:49:19.749491 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref]
Oct 13 05:49:19.749542 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff]
Oct 13 05:49:19.749589 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref]
Oct 13 05:49:19.749641 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff]
Oct 13 05:49:19.749686 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref]
Oct 13 05:49:19.749736 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff]
Oct 13 05:49:19.749782 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref]
Oct 13 05:49:19.749833 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff]
Oct 13 05:49:19.749879 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref]
Oct 13 05:49:19.749953 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff]
Oct 13 05:49:19.750000 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff]
Oct 13 05:49:19.750046 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref]
Oct 13 05:49:19.750096 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff]
Oct 13 05:49:19.750143 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff]
Oct 13 05:49:19.750189 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref]
Oct 13 05:49:19.750242 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff]
Oct 13 05:49:19.750291 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff]
Oct 13 05:49:19.750337 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref]
Oct 13 05:49:19.750388 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff]
Oct 13 05:49:19.750435 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref]
Oct 13 05:49:19.750489 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff]
Oct 13 05:49:19.750536 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref]
Oct 13 05:49:19.750591 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff]
Oct 13 05:49:19.750637 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref]
Oct 13 05:49:19.750688 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff]
Oct 13 05:49:19.750734 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref]
Oct 13 05:49:19.750785 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff]
Oct 13 05:49:19.750831 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref]
Oct 13 05:49:19.750892 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]
Oct 13 05:49:19.750954 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff]
Oct 13 05:49:19.751002 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref]
Oct 13 05:49:19.751054 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff]
Oct 13 05:49:19.751101 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff]
Oct 13 05:49:19.751146 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref]
Oct 13 05:49:19.751196 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff]
Oct 13 05:49:19.751246 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref]
Oct 13 05:49:19.751297 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff]
Oct 13 05:49:19.751343 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref]
Oct 13 05:49:19.751392 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff]
Oct 13 05:49:19.751439 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref]
Oct 13 05:49:19.751497 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff]
Oct 13 05:49:19.751545 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref]
Oct 13 05:49:19.751597 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff]
Oct 13 05:49:19.751644 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref]
Oct 13 05:49:19.751693 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff]
Oct 13 05:49:19.751739 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref]
Oct 13 05:49:19.751794 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 13 05:49:19.751806 kernel: PCI: CLS 32 bytes, default 64
Oct 13 05:49:19.751812 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 13 05:49:19.751818 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Oct 13 05:49:19.751825 kernel: clocksource: Switched to clocksource tsc
Oct 13 05:49:19.751831 kernel: Initialise system trusted keyrings
Oct 13 05:49:19.751837 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 13 05:49:19.751843 kernel: Key type asymmetric registered
Oct 13 05:49:19.751848 kernel: Asymmetric key parser 'x509' registered
Oct 13 05:49:19.751854 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 13 05:49:19.751861 kernel: io scheduler mq-deadline registered
Oct 13 05:49:19.751867 kernel: io scheduler kyber registered
Oct 13 05:49:19.751873 kernel: io scheduler bfq registered
Oct 13 05:49:19.751944 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24
Oct 13 05:49:19.751997 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.752050 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Oct 13 05:49:19.752101 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.752152 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Oct 13 05:49:19.752206 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.752258 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Oct 13 05:49:19.752310 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.752363 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Oct 13 05:49:19.752414 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.752465 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Oct 13 05:49:19.752517 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.752572 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Oct 13 05:49:19.752623 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.752676 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Oct 13 05:49:19.752728 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.752779 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Oct 13 05:49:19.752830 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.752890 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Oct 13 05:49:19.754911 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.754983 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Oct 13 05:49:19.755040 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.755096 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Oct 13 05:49:19.755158 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.755214 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Oct 13 05:49:19.755266 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.755319 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Oct 13 05:49:19.755374 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.755426 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Oct 13 05:49:19.755480 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.755532 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Oct 13 05:49:19.755584 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.755636 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Oct 13 05:49:19.755688 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.755742 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Oct 13 05:49:19.755793 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.755845 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Oct 13 05:49:19.755919 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.755974 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Oct 13 05:49:19.756025 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.756077 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Oct 13 05:49:19.756127 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.756182 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Oct 13 05:49:19.756234 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.756286 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Oct 13 05:49:19.756338 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.756390 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Oct 13 05:49:19.756442 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.756494 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Oct 13 05:49:19.756547 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.756598 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Oct 13 05:49:19.756654 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.756707 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Oct 13 05:49:19.756758 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.756810 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Oct 13 05:49:19.756861 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.756955 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Oct 13 05:49:19.757009 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.757063 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Oct 13 05:49:19.757115 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.757168 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Oct 13 05:49:19.757220 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.757273 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Oct 13 05:49:19.757326 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:49:19.757338 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 13 05:49:19.757345 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 05:49:19.757352 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 13 05:49:19.757358 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Oct 13 05:49:19.757364 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 13 05:49:19.757371 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 13 05:49:19.757424 kernel: rtc_cmos 00:01: registered as rtc0
Oct 13 05:49:19.757479 kernel: rtc_cmos 00:01: setting system clock to 2025-10-13T05:49:19 UTC (1760334559)
Oct 13 05:49:19.757526 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Oct 13 05:49:19.757535 kernel: intel_pstate: CPU model not supported
Oct 13 05:49:19.757541 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 13 05:49:19.757548 kernel: NET: Registered PF_INET6 protocol family
Oct 13 05:49:19.757554 kernel: Segment Routing with IPv6
Oct 13 05:49:19.757561 kernel: In-situ OAM (IOAM) with IPv6
Oct 13 05:49:19.757567 kernel: NET: Registered PF_PACKET protocol family
Oct 13 05:49:19.757573 kernel: Key type dns_resolver registered
Oct 13 05:49:19.757582 kernel: IPI shorthand broadcast: enabled
Oct 13 05:49:19.757588 kernel: sched_clock: Marking stable (2590003300, 164777624)->(2772938594, -18157670)
Oct 13 05:49:19.757594 kernel: registered taskstats version 1
Oct 13 05:49:19.757600 kernel: Loading compiled-in X.509 certificates
Oct 13 05:49:19.757607 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: d8dbf4abead15098249886d373d42a3af4f50ccd'
Oct 13 05:49:19.757613 kernel: Demotion targets for Node 0: null
Oct 13 05:49:19.757619 kernel: Key type .fscrypt registered
Oct 13 05:49:19.757626 kernel: Key type fscrypt-provisioning registered
Oct 13 05:49:19.757633 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 13 05:49:19.757639 kernel: ima: Allocated hash algorithm: sha1 Oct 13 05:49:19.757647 kernel: ima: No architecture policies found Oct 13 05:49:19.757653 kernel: clk: Disabling unused clocks Oct 13 05:49:19.757659 kernel: Warning: unable to open an initial console. Oct 13 05:49:19.757665 kernel: Freeing unused kernel image (initmem) memory: 54096K Oct 13 05:49:19.757672 kernel: Write protecting the kernel read-only data: 24576k Oct 13 05:49:19.757678 kernel: Freeing unused kernel image (rodata/data gap) memory: 240K Oct 13 05:49:19.757684 kernel: Run /init as init process Oct 13 05:49:19.757691 kernel: with arguments: Oct 13 05:49:19.757698 kernel: /init Oct 13 05:49:19.757703 kernel: with environment: Oct 13 05:49:19.757710 kernel: HOME=/ Oct 13 05:49:19.757715 kernel: TERM=linux Oct 13 05:49:19.757722 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 13 05:49:19.757729 systemd[1]: Successfully made /usr/ read-only. Oct 13 05:49:19.757737 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:49:19.757745 systemd[1]: Detected virtualization vmware. Oct 13 05:49:19.757751 systemd[1]: Detected architecture x86-64. Oct 13 05:49:19.757758 systemd[1]: Running in initrd. Oct 13 05:49:19.757765 systemd[1]: No hostname configured, using default hostname. Oct 13 05:49:19.757771 systemd[1]: Hostname set to . Oct 13 05:49:19.757778 systemd[1]: Initializing machine ID from random generator. Oct 13 05:49:19.757784 systemd[1]: Queued start job for default target initrd.target. Oct 13 05:49:19.757790 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Oct 13 05:49:19.757797 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:49:19.757805 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 13 05:49:19.757811 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:49:19.757818 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 13 05:49:19.757825 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 13 05:49:19.757832 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 13 05:49:19.757839 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 13 05:49:19.757846 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:49:19.757853 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:49:19.757859 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:49:19.757866 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:49:19.757873 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:49:19.757879 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:49:19.758027 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:49:19.758036 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:49:19.758043 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 13 05:49:19.758052 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 13 05:49:19.758058 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Oct 13 05:49:19.758064 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:49:19.758071 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:49:19.758078 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:49:19.758084 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 13 05:49:19.758091 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:49:19.758097 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 13 05:49:19.758105 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 13 05:49:19.758112 systemd[1]: Starting systemd-fsck-usr.service... Oct 13 05:49:19.758119 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:49:19.758125 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:49:19.758131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:49:19.758138 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 13 05:49:19.758161 systemd-journald[243]: Collecting audit messages is disabled. Oct 13 05:49:19.758178 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:49:19.758185 systemd[1]: Finished systemd-fsck-usr.service. Oct 13 05:49:19.758193 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:49:19.758200 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 13 05:49:19.758207 kernel: Bridge firewalling registered Oct 13 05:49:19.758213 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Oct 13 05:49:19.758220 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:49:19.758227 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:49:19.758234 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 05:49:19.758241 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:49:19.758249 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:49:19.758256 systemd-journald[243]: Journal started Oct 13 05:49:19.758272 systemd-journald[243]: Runtime Journal (/run/log/journal/8edbac0a128a4cd3a560a2b84b245d57) is 4.8M, max 38.8M, 34M free. Oct 13 05:49:19.710891 systemd-modules-load[245]: Inserted module 'overlay' Oct 13 05:49:19.732821 systemd-modules-load[245]: Inserted module 'br_netfilter' Oct 13 05:49:19.759896 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:49:19.764701 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:49:19.765315 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:49:19.766690 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:49:19.769946 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:49:19.771951 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 13 05:49:19.772616 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 13 05:49:19.774223 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:49:19.776017 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Oct 13 05:49:19.782548 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039 Oct 13 05:49:19.799997 systemd-resolved[284]: Positive Trust Anchors: Oct 13 05:49:19.800239 systemd-resolved[284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:49:19.800421 systemd-resolved[284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:49:19.802647 systemd-resolved[284]: Defaulting to hostname 'linux'. Oct 13 05:49:19.803369 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:49:19.803661 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:49:19.829898 kernel: SCSI subsystem initialized Oct 13 05:49:19.845900 kernel: Loading iSCSI transport class v2.0-870. 
Oct 13 05:49:19.853895 kernel: iscsi: registered transport (tcp) Oct 13 05:49:19.876268 kernel: iscsi: registered transport (qla4xxx) Oct 13 05:49:19.876309 kernel: QLogic iSCSI HBA Driver Oct 13 05:49:19.887360 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:49:19.903942 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:49:19.904977 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:49:19.927638 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 05:49:19.928595 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 05:49:19.975943 kernel: raid6: avx2x4 gen() 47973 MB/s Oct 13 05:49:19.987936 kernel: raid6: avx2x2 gen() 48845 MB/s Oct 13 05:49:20.005566 kernel: raid6: avx2x1 gen() 33203 MB/s Oct 13 05:49:20.005615 kernel: raid6: using algorithm avx2x2 gen() 48845 MB/s Oct 13 05:49:20.023481 kernel: raid6: .... xor() 23279 MB/s, rmw enabled Oct 13 05:49:20.023528 kernel: raid6: using avx2x2 recovery algorithm Oct 13 05:49:20.040903 kernel: xor: automatically using best checksumming function avx Oct 13 05:49:20.155991 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 13 05:49:20.158811 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:49:20.160116 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:49:20.176007 systemd-udevd[493]: Using default interface naming scheme 'v255'. Oct 13 05:49:20.179469 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:49:20.180715 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 05:49:20.199799 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation Oct 13 05:49:20.214191 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 13 05:49:20.215170 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:49:20.296745 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:49:20.298188 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 05:49:20.372274 kernel: VMware PVSCSI driver - version 1.0.7.0-k Oct 13 05:49:20.372314 kernel: vmw_pvscsi: using 64bit dma Oct 13 05:49:20.374951 kernel: vmw_pvscsi: max_id: 16 Oct 13 05:49:20.374984 kernel: vmw_pvscsi: setting ring_pages to 8 Oct 13 05:49:20.380909 kernel: vmw_pvscsi: enabling reqCallThreshold Oct 13 05:49:20.380960 kernel: vmw_pvscsi: driver-based request coalescing enabled Oct 13 05:49:20.380978 kernel: vmw_pvscsi: using MSI-X Oct 13 05:49:20.385895 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Oct 13 05:49:20.390482 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Oct 13 05:49:20.390607 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Oct 13 05:49:20.405260 (udev-worker)[540]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Oct 13 05:49:20.410893 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Oct 13 05:49:20.412508 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:49:20.412947 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Oct 13 05:49:20.412599 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:49:20.413287 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:49:20.414210 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:49:20.416904 kernel: libata version 3.00 loaded. 
Oct 13 05:49:20.418935 kernel: cryptd: max_cpu_qlen set to 1000 Oct 13 05:49:20.421898 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Oct 13 05:49:20.423897 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Oct 13 05:49:20.435122 kernel: ata_piix 0000:00:07.1: version 2.13 Oct 13 05:49:20.435248 kernel: scsi host1: ata_piix Oct 13 05:49:20.438669 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Oct 13 05:49:20.438696 kernel: scsi host2: ata_piix Oct 13 05:49:20.438779 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Oct 13 05:49:20.438788 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Oct 13 05:49:20.441905 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Oct 13 05:49:20.442019 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 13 05:49:20.442086 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Oct 13 05:49:20.442147 kernel: sd 0:0:0:0: [sda] Cache data unavailable Oct 13 05:49:20.442206 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Oct 13 05:49:20.442266 kernel: AES CTR mode by8 optimization enabled Oct 13 05:49:20.452767 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:49:20.456130 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 13 05:49:20.456158 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 13 05:49:20.605028 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Oct 13 05:49:20.610912 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Oct 13 05:49:20.637368 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Oct 13 05:49:20.637485 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 13 05:49:20.640163 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. 
Oct 13 05:49:20.650898 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 13 05:49:20.657102 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Oct 13 05:49:20.662539 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Oct 13 05:49:20.666927 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Oct 13 05:49:20.667315 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Oct 13 05:49:20.668961 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 05:49:20.702908 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 13 05:49:20.713902 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 13 05:49:20.901619 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 13 05:49:20.902000 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:49:20.902136 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:49:20.902363 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:49:20.903071 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 05:49:20.919757 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:49:21.714896 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 13 05:49:21.714936 disk-uuid[644]: The operation has completed successfully. Oct 13 05:49:21.754139 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 05:49:21.754216 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 05:49:21.764962 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 13 05:49:21.775079 sh[674]: Success Oct 13 05:49:21.789737 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Oct 13 05:49:21.789778 kernel: device-mapper: uevent: version 1.0.3 Oct 13 05:49:21.789793 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 13 05:49:21.796910 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Oct 13 05:49:21.869925 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:49:21.871925 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 13 05:49:21.881170 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 13 05:49:21.898902 kernel: BTRFS: device fsid c8746500-26f5-4ec1-9da8-aef51ec7db92 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (686) Oct 13 05:49:21.898938 kernel: BTRFS info (device dm-0): first mount of filesystem c8746500-26f5-4ec1-9da8-aef51ec7db92 Oct 13 05:49:21.900988 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:49:21.912007 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 13 05:49:21.912045 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 13 05:49:21.912053 kernel: BTRFS info (device dm-0): enabling free space tree Oct 13 05:49:21.915620 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 13 05:49:21.915956 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:49:21.916555 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Oct 13 05:49:21.917948 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Oct 13 05:49:21.940836 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (709) Oct 13 05:49:21.940871 kernel: BTRFS info (device sda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:49:21.940879 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:49:21.946147 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 13 05:49:21.946185 kernel: BTRFS info (device sda6): enabling free space tree Oct 13 05:49:21.948903 kernel: BTRFS info (device sda6): last unmount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:49:21.949232 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 05:49:21.950968 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 13 05:49:21.997356 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Oct 13 05:49:21.998006 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 05:49:22.063346 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:49:22.064496 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:49:22.105248 systemd-networkd[863]: lo: Link UP Oct 13 05:49:22.105254 systemd-networkd[863]: lo: Gained carrier Oct 13 05:49:22.113073 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 13 05:49:22.113191 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 13 05:49:22.106006 systemd-networkd[863]: Enumeration completed Oct 13 05:49:22.106656 systemd-networkd[863]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Oct 13 05:49:22.107560 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Oct 13 05:49:22.111138 systemd-networkd[863]: ens192: Link UP Oct 13 05:49:22.111142 systemd-networkd[863]: ens192: Gained carrier Oct 13 05:49:22.112864 systemd[1]: Reached target network.target - Network. Oct 13 05:49:22.145226 ignition[731]: Ignition 2.22.0 Oct 13 05:49:22.145238 ignition[731]: Stage: fetch-offline Oct 13 05:49:22.145260 ignition[731]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:49:22.145265 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 13 05:49:22.145316 ignition[731]: parsed url from cmdline: "" Oct 13 05:49:22.145318 ignition[731]: no config URL provided Oct 13 05:49:22.145321 ignition[731]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:49:22.145326 ignition[731]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:49:22.145715 ignition[731]: config successfully fetched Oct 13 05:49:22.145734 ignition[731]: parsing config with SHA512: 7d762f79beae90303c0456f9a2e06f0ada2665d4013c4bc558b06bb0b421b2f656fd7803575c71e77dd0ec2ceea732905c0594737f16a3afba9c09e4485d68f3 Oct 13 05:49:22.149742 unknown[731]: fetched base config from "system" Oct 13 05:49:22.149752 unknown[731]: fetched user config from "vmware" Oct 13 05:49:22.150045 ignition[731]: fetch-offline: fetch-offline passed Oct 13 05:49:22.150082 ignition[731]: Ignition finished successfully Oct 13 05:49:22.151031 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:49:22.151385 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 13 05:49:22.152185 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 13 05:49:22.167928 ignition[874]: Ignition 2.22.0 Oct 13 05:49:22.168168 ignition[874]: Stage: kargs Oct 13 05:49:22.168253 ignition[874]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:49:22.168258 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 13 05:49:22.169254 ignition[874]: kargs: kargs passed Oct 13 05:49:22.169291 ignition[874]: Ignition finished successfully Oct 13 05:49:22.170453 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 05:49:22.171429 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 13 05:49:22.184515 ignition[880]: Ignition 2.22.0 Oct 13 05:49:22.184524 ignition[880]: Stage: disks Oct 13 05:49:22.184604 ignition[880]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:49:22.184610 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 13 05:49:22.185835 ignition[880]: disks: disks passed Oct 13 05:49:22.185972 ignition[880]: Ignition finished successfully Oct 13 05:49:22.186802 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 05:49:22.187296 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 05:49:22.187538 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 05:49:22.187774 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:49:22.187972 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:49:22.188208 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:49:22.188942 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 13 05:49:22.207125 systemd-fsck[888]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Oct 13 05:49:22.208378 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 05:49:22.209267 systemd[1]: Mounting sysroot.mount - /sysroot... 
Oct 13 05:49:22.312896 kernel: EXT4-fs (sda9): mounted filesystem 8b520359-9763-45f3-b7f7-db1e9fbc640d r/w with ordered data mode. Quota mode: none. Oct 13 05:49:22.313666 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 05:49:22.313970 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 05:49:22.315365 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:49:22.316928 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 05:49:22.317361 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 13 05:49:22.317535 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 05:49:22.317550 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:49:22.326906 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 05:49:22.327738 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 13 05:49:22.339027 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (896) Oct 13 05:49:22.341284 kernel: BTRFS info (device sda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:49:22.341305 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:49:22.346046 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 13 05:49:22.346078 kernel: BTRFS info (device sda6): enabling free space tree Oct 13 05:49:22.347425 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 13 05:49:22.363398 initrd-setup-root[920]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 05:49:22.365782 initrd-setup-root[927]: cut: /sysroot/etc/group: No such file or directory Oct 13 05:49:22.371527 initrd-setup-root[934]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 05:49:22.373645 initrd-setup-root[941]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 05:49:22.493457 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 13 05:49:22.494302 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 05:49:22.494947 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 05:49:22.502897 kernel: BTRFS info (device sda6): last unmount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:49:22.518213 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 13 05:49:22.521111 ignition[1009]: INFO : Ignition 2.22.0 Oct 13 05:49:22.521349 ignition[1009]: INFO : Stage: mount Oct 13 05:49:22.521549 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:49:22.521670 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 13 05:49:22.522330 ignition[1009]: INFO : mount: mount passed Oct 13 05:49:22.522458 ignition[1009]: INFO : Ignition finished successfully Oct 13 05:49:22.523280 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 13 05:49:22.524333 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 13 05:49:22.735060 systemd-resolved[284]: Detected conflict on linux IN A 139.178.70.106 Oct 13 05:49:22.735073 systemd-resolved[284]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. Oct 13 05:49:22.897949 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 13 05:49:22.899212 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 13 05:49:22.922897 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1020) Oct 13 05:49:22.925059 kernel: BTRFS info (device sda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:49:22.925080 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:49:22.928349 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 13 05:49:22.928369 kernel: BTRFS info (device sda6): enabling free space tree Oct 13 05:49:22.929532 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 05:49:22.952088 ignition[1036]: INFO : Ignition 2.22.0 Oct 13 05:49:22.952088 ignition[1036]: INFO : Stage: files Oct 13 05:49:22.952538 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:49:22.952538 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 13 05:49:22.952869 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping Oct 13 05:49:22.953348 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 05:49:22.953348 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 05:49:22.955073 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 05:49:22.955240 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 05:49:22.955389 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 05:49:22.955305 unknown[1036]: wrote ssh authorized keys file for user: core Oct 13 05:49:22.957526 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:49:22.957772 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 
Oct 13 05:49:23.008607 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 13 05:49:23.047450 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 13 05:49:23.047450 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 13 05:49:23.047450 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 13 05:49:23.047450 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 05:49:23.047450 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 05:49:23.047450 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 05:49:23.047450 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 05:49:23.047450 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 05:49:23.047450 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 05:49:23.049981 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 05:49:23.050147 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 05:49:23.050147 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 13 05:49:23.052314 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 13 05:49:23.052548 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 13 05:49:23.052548 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Oct 13 05:49:23.438640 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 13 05:49:23.762923 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 13 05:49:23.762923 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Oct 13 05:49:23.771125 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Oct 13 05:49:23.771125 ignition[1036]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Oct 13 05:49:23.771125 ignition[1036]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 05:49:23.772190 ignition[1036]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 05:49:23.772190 ignition[1036]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Oct 13 05:49:23.772190 ignition[1036]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Oct 13 05:49:23.772190 ignition[1036]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 13 05:49:23.772995 ignition[1036]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 13 05:49:23.772995 ignition[1036]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Oct 13 05:49:23.772995 ignition[1036]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Oct 13 05:49:23.799459 ignition[1036]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 13 05:49:23.802570 ignition[1036]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 13 05:49:23.802570 ignition[1036]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 13 05:49:23.802570 ignition[1036]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Oct 13 05:49:23.802570 ignition[1036]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Oct 13 05:49:23.802570 ignition[1036]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 05:49:23.802570 ignition[1036]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 05:49:23.802570 ignition[1036]: INFO : files: files passed
Oct 13 05:49:23.802570 ignition[1036]: INFO : Ignition finished successfully
Oct 13 05:49:23.805110 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 13 05:49:23.806221 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 13 05:49:23.806971 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 13 05:49:23.812871 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 13 05:49:23.812970 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 13 05:49:23.815480 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:49:23.815480 initrd-setup-root-after-ignition[1068]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:49:23.816436 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:49:23.817736 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 05:49:23.818131 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 13 05:49:23.818788 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 13 05:49:23.850215 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 13 05:49:23.850278 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 13 05:49:23.850564 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 13 05:49:23.850683 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 13 05:49:23.850880 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 13 05:49:23.851363 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 13 05:49:23.860695 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 05:49:23.861533 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 13 05:49:23.875395 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:49:23.875572 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:49:23.875798 systemd[1]: Stopped target timers.target - Timer Units.
Oct 13 05:49:23.876032 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 13 05:49:23.876110 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 05:49:23.876482 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 13 05:49:23.876636 systemd[1]: Stopped target basic.target - Basic System.
Oct 13 05:49:23.876848 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 13 05:49:23.877055 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 05:49:23.877270 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 13 05:49:23.877490 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 05:49:23.877703 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 13 05:49:23.877930 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 05:49:23.878152 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 13 05:49:23.878380 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 13 05:49:23.878543 systemd[1]: Stopped target swap.target - Swaps.
Oct 13 05:49:23.878730 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 13 05:49:23.878801 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 05:49:23.879110 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:49:23.879367 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:49:23.879546 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 13 05:49:23.879605 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:49:23.879773 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 13 05:49:23.879873 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 13 05:49:23.880265 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 13 05:49:23.880352 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 05:49:23.880573 systemd[1]: Stopped target paths.target - Path Units.
Oct 13 05:49:23.880723 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 13 05:49:23.880795 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:49:23.880978 systemd[1]: Stopped target slices.target - Slice Units.
Oct 13 05:49:23.881157 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 13 05:49:23.881365 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 13 05:49:23.881435 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 05:49:23.881590 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 13 05:49:23.881637 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 05:49:23.881842 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 13 05:49:23.881955 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 05:49:23.882195 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 13 05:49:23.882280 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 13 05:49:23.884015 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 13 05:49:23.884939 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 13 05:49:23.885193 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 13 05:49:23.885400 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 05:49:23.885576 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 13 05:49:23.885636 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 05:49:23.890231 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 13 05:49:23.891970 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 13 05:49:23.901633 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 13 05:49:23.906418 ignition[1093]: INFO : Ignition 2.22.0
Oct 13 05:49:23.906821 ignition[1093]: INFO : Stage: umount
Oct 13 05:49:23.907115 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:49:23.907115 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 13 05:49:23.907934 ignition[1093]: INFO : umount: umount passed
Oct 13 05:49:23.908101 ignition[1093]: INFO : Ignition finished successfully
Oct 13 05:49:23.908971 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 13 05:49:23.909029 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 13 05:49:23.909673 systemd[1]: Stopped target network.target - Network.
Oct 13 05:49:23.909923 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 13 05:49:23.909965 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 13 05:49:23.910323 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 13 05:49:23.910452 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 13 05:49:23.910707 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 13 05:49:23.910836 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 13 05:49:23.911105 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 13 05:49:23.911236 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 13 05:49:23.911727 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 13 05:49:23.912187 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 13 05:49:23.913431 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 13 05:49:23.913599 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 13 05:49:23.915097 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Oct 13 05:49:23.915341 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 13 05:49:23.915392 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 05:49:23.916303 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 13 05:49:23.918814 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 13 05:49:23.918880 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 13 05:49:23.919573 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Oct 13 05:49:23.919656 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 13 05:49:23.919820 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 13 05:49:23.919837 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:49:23.920551 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 13 05:49:23.920651 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 13 05:49:23.920676 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 05:49:23.920802 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Oct 13 05:49:23.920825 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Oct 13 05:49:23.920968 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 13 05:49:23.920988 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:49:23.921142 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 05:49:23.921163 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:49:23.921285 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 05:49:23.922089 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 13 05:49:23.931064 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 13 05:49:23.931152 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 05:49:23.931477 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 13 05:49:23.931511 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:49:23.931729 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 13 05:49:23.931748 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:49:23.931872 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 13 05:49:23.931906 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 05:49:23.932208 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 13 05:49:23.932234 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 13 05:49:23.932498 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 13 05:49:23.932523 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 05:49:23.933449 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 13 05:49:23.933568 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 13 05:49:23.933597 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:49:23.934038 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 13 05:49:23.934061 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:49:23.934985 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 13 05:49:23.935185 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 05:49:23.935509 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 13 05:49:23.935629 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:49:23.935891 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 05:49:23.936014 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:49:23.941150 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 13 05:49:23.941365 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 13 05:49:23.942382 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 13 05:49:23.942565 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 13 05:49:24.002914 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 13 05:49:24.003159 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 13 05:49:24.003406 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 13 05:49:24.003541 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 13 05:49:24.003577 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 13 05:49:24.004865 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 13 05:49:24.025779 systemd[1]: Switching root.
Oct 13 05:49:24.074048 systemd-journald[243]: Journal stopped
Oct 13 05:49:26.368581 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
Oct 13 05:49:26.368603 kernel: SELinux: policy capability network_peer_controls=1
Oct 13 05:49:26.368611 kernel: SELinux: policy capability open_perms=1
Oct 13 05:49:26.368617 kernel: SELinux: policy capability extended_socket_class=1
Oct 13 05:49:26.368622 kernel: SELinux: policy capability always_check_network=0
Oct 13 05:49:26.368629 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 13 05:49:26.368635 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 13 05:49:26.368640 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 13 05:49:26.368646 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 13 05:49:26.368651 kernel: SELinux: policy capability userspace_initial_context=0
Oct 13 05:49:26.368656 kernel: audit: type=1403 audit(1760334564.792:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 13 05:49:26.368663 systemd[1]: Successfully loaded SELinux policy in 45.496ms.
Oct 13 05:49:26.368670 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.861ms.
Oct 13 05:49:26.368677 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 05:49:26.368684 systemd[1]: Detected virtualization vmware.
Oct 13 05:49:26.368691 systemd[1]: Detected architecture x86-64.
Oct 13 05:49:26.368698 systemd[1]: Detected first boot.
Oct 13 05:49:26.368704 systemd[1]: Initializing machine ID from random generator.
Oct 13 05:49:26.368711 zram_generator::config[1137]: No configuration found.
Oct 13 05:49:26.368788 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Oct 13 05:49:26.368799 kernel: Guest personality initialized and is active
Oct 13 05:49:26.368805 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 13 05:49:26.368812 kernel: Initialized host personality
Oct 13 05:49:26.368819 kernel: NET: Registered PF_VSOCK protocol family
Oct 13 05:49:26.368826 systemd[1]: Populated /etc with preset unit settings.
Oct 13 05:49:26.368833 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Oct 13 05:49:26.368840 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Oct 13 05:49:26.368847 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Oct 13 05:49:26.368853 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 13 05:49:26.368859 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 13 05:49:26.368866 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 13 05:49:26.368873 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 13 05:49:26.368880 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 13 05:49:26.368898 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 13 05:49:26.368908 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 13 05:49:26.368914 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 13 05:49:26.368921 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 13 05:49:26.368930 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 13 05:49:26.368936 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 13 05:49:26.368943 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:49:26.368952 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:49:26.368958 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 13 05:49:26.368965 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 13 05:49:26.368972 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 13 05:49:26.368978 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 05:49:26.368986 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 13 05:49:26.368993 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:49:26.368999 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:49:26.369006 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 13 05:49:26.369013 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 13 05:49:26.369019 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 13 05:49:26.369026 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 13 05:49:26.369033 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:49:26.369041 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 05:49:26.369048 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 05:49:26.369054 systemd[1]: Reached target swap.target - Swaps.
Oct 13 05:49:26.369061 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 13 05:49:26.369068 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 13 05:49:26.369075 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 13 05:49:26.369082 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:49:26.369089 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:49:26.369095 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:49:26.369102 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 13 05:49:26.369109 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 13 05:49:26.369115 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 13 05:49:26.369122 systemd[1]: Mounting media.mount - External Media Directory...
Oct 13 05:49:26.369130 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:49:26.369137 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 13 05:49:26.369144 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 13 05:49:26.369150 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 13 05:49:26.369157 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 13 05:49:26.369164 systemd[1]: Reached target machines.target - Containers.
Oct 13 05:49:26.369171 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 13 05:49:26.369178 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Oct 13 05:49:26.369185 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 05:49:26.369192 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 13 05:49:26.369199 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 05:49:26.369206 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 05:49:26.369213 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 05:49:26.369219 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 13 05:49:26.369226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 05:49:26.369233 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 13 05:49:26.369241 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 13 05:49:26.369247 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 13 05:49:26.369254 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 13 05:49:26.369261 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 13 05:49:26.369268 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:49:26.369275 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 05:49:26.369282 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 05:49:26.369289 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 05:49:26.369296 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 13 05:49:26.369304 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 13 05:49:26.369311 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 05:49:26.369317 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 13 05:49:26.369324 systemd[1]: Stopped verity-setup.service.
Oct 13 05:49:26.369331 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:49:26.369338 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 13 05:49:26.369345 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 13 05:49:26.369352 systemd[1]: Mounted media.mount - External Media Directory.
Oct 13 05:49:26.369359 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 13 05:49:26.369367 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 13 05:49:26.369373 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 13 05:49:26.369380 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:49:26.369386 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 13 05:49:26.369393 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 13 05:49:26.369400 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 05:49:26.369406 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 05:49:26.369413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 05:49:26.369421 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 05:49:26.369428 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:49:26.369435 kernel: fuse: init (API version 7.41)
Oct 13 05:49:26.369441 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:49:26.369451 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 13 05:49:26.369458 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 13 05:49:26.369464 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 13 05:49:26.369471 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 05:49:26.369479 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 13 05:49:26.369486 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 13 05:49:26.369496 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 13 05:49:26.369504 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 05:49:26.374935 systemd-journald[1234]: Collecting audit messages is disabled.
Oct 13 05:49:26.374964 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 13 05:49:26.374976 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 13 05:49:26.374985 systemd-journald[1234]: Journal started
Oct 13 05:49:26.375000 systemd-journald[1234]: Runtime Journal (/run/log/journal/565b5c54e3b947738469a9f104e904fe) is 4.8M, max 38.8M, 34M free.
Oct 13 05:49:26.139659 systemd[1]: Queued start job for default target multi-user.target.
Oct 13 05:49:26.147044 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 13 05:49:26.147285 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 13 05:49:26.375472 jq[1207]: true
Oct 13 05:49:26.376026 jq[1249]: true
Oct 13 05:49:26.381894 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:49:26.390660 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 13 05:49:26.390704 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 05:49:26.390714 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 13 05:49:26.390723 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 05:49:26.396895 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 13 05:49:26.398900 kernel: ACPI: bus type drm_connector registered
Oct 13 05:49:26.401905 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 05:49:26.404050 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 05:49:26.408244 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 13 05:49:26.408506 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 05:49:26.408635 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 13 05:49:26.408919 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 13 05:49:26.410036 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 13 05:49:26.410203 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 13 05:49:26.426075 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 13 05:49:26.429927 kernel: loop: module loaded
Oct 13 05:49:26.431422 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 05:49:26.431600 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 05:49:26.431838 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 05:49:26.432338 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 13 05:49:26.432716 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 05:49:26.434208 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 05:49:26.443291 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:49:26.452433 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 13 05:49:26.454617 kernel: loop0: detected capacity change from 0 to 110984 Oct 13 05:49:26.454681 systemd-journald[1234]: Time spent on flushing to /var/log/journal/565b5c54e3b947738469a9f104e904fe is 33.566ms for 1769 entries. Oct 13 05:49:26.454681 systemd-journald[1234]: System Journal (/var/log/journal/565b5c54e3b947738469a9f104e904fe) is 8M, max 584.8M, 576.8M free. Oct 13 05:49:26.519991 systemd-journald[1234]: Received client request to flush runtime journal. Oct 13 05:49:26.470813 ignition[1262]: Ignition 2.22.0 Oct 13 05:49:26.510944 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Oct 13 05:49:26.471992 ignition[1262]: deleting config from guestinfo properties Oct 13 05:49:26.510955 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Oct 13 05:49:26.515142 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:49:26.523043 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 05:49:26.523371 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 05:49:26.532230 ignition[1262]: Successfully deleted config Oct 13 05:49:26.550495 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 05:49:26.549168 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Oct 13 05:49:26.549556 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Oct 13 05:49:26.564265 kernel: loop1: detected capacity change from 0 to 2960 Oct 13 05:49:26.562961 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 05:49:26.565120 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:49:26.586376 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. Oct 13 05:49:26.586389 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. Oct 13 05:49:26.588938 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:49:26.602899 kernel: loop2: detected capacity change from 0 to 128016 Oct 13 05:49:26.694906 kernel: loop3: detected capacity change from 0 to 219144 Oct 13 05:49:26.729902 kernel: loop4: detected capacity change from 0 to 110984 Oct 13 05:49:26.908911 kernel: loop5: detected capacity change from 0 to 2960 Oct 13 05:49:26.929966 kernel: loop6: detected capacity change from 0 to 128016 Oct 13 05:49:26.951904 kernel: loop7: detected capacity change from 0 to 219144 Oct 13 05:49:27.057190 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 05:49:27.058403 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:49:27.078321 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Oct 13 05:49:27.095566 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Oct 13 05:49:27.096437 (sd-merge)[1313]: Merged extensions into '/usr'. Oct 13 05:49:27.104826 systemd[1]: Reload requested from client PID 1260 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 05:49:27.104840 systemd[1]: Reloading... Oct 13 05:49:27.147920 zram_generator::config[1340]: No configuration found. Oct 13 05:49:27.296125 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." 
| grep -Po "inet \K[\d.]+") Oct 13 05:49:27.315907 kernel: mousedev: PS/2 mouse device common for all mice Oct 13 05:49:27.322933 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Oct 13 05:49:27.323056 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 13 05:49:27.340900 kernel: ACPI: button: Power Button [PWRF] Oct 13 05:49:27.367497 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 13 05:49:27.367708 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 05:49:27.367825 systemd[1]: Reloading finished in 262 ms. Oct 13 05:49:27.380209 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:49:27.380934 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 05:49:27.390189 systemd[1]: Starting ensure-sysext.service... Oct 13 05:49:27.392592 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:49:27.396043 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:49:27.430835 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 05:49:27.441991 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 05:49:27.442015 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 05:49:27.442192 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 05:49:27.442348 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 05:49:27.442821 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Oct 13 05:49:27.445479 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Oct 13 05:49:27.446171 systemd-tmpfiles[1432]: ACLs are not supported, ignoring. Oct 13 05:49:27.446208 systemd-tmpfiles[1432]: ACLs are not supported, ignoring. Oct 13 05:49:27.448147 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 05:49:27.449157 systemd[1]: Reload requested from client PID 1429 ('systemctl') (unit ensure-sysext.service)... Oct 13 05:49:27.449165 systemd[1]: Reloading... Oct 13 05:49:27.475180 systemd-tmpfiles[1432]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:49:27.475186 systemd-tmpfiles[1432]: Skipping /boot Oct 13 05:49:27.487944 systemd-tmpfiles[1432]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:49:27.487952 systemd-tmpfiles[1432]: Skipping /boot Oct 13 05:49:27.501913 zram_generator::config[1466]: No configuration found. Oct 13 05:49:27.614796 (udev-worker)[1362]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Oct 13 05:49:27.634450 systemd-networkd[1430]: lo: Link UP Oct 13 05:49:27.634456 systemd-networkd[1430]: lo: Gained carrier Oct 13 05:49:27.637262 systemd-networkd[1430]: Enumeration completed Oct 13 05:49:27.639093 systemd-networkd[1430]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Oct 13 05:49:27.640947 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 13 05:49:27.641108 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 13 05:49:27.645128 systemd-networkd[1430]: ens192: Link UP Oct 13 05:49:27.645316 systemd-networkd[1430]: ens192: Gained carrier Oct 13 05:49:27.662664 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." 
| grep -Po "inet \K[\d.]+") Oct 13 05:49:27.717040 ldconfig[1256]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 05:49:27.731088 systemd[1]: Reloading finished in 281 ms. Oct 13 05:49:27.740227 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 05:49:27.740515 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:49:27.740854 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 05:49:27.754087 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:49:27.754483 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 05:49:27.771881 systemd[1]: Finished ensure-sysext.service. Oct 13 05:49:27.773872 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:49:27.774975 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:49:27.781269 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 05:49:27.783697 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:49:27.784382 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:49:27.785558 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:49:27.788593 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:49:27.788813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:49:27.788836 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Oct 13 05:49:27.794719 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 05:49:27.796290 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 05:49:27.802115 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 05:49:27.805505 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:49:27.808075 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 13 05:49:27.809287 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 05:49:27.812440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:49:27.812587 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:49:27.813162 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:49:27.813285 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:49:27.813563 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:49:27.813667 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:49:27.813982 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:49:27.814096 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:49:27.814335 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:49:27.814441 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:49:27.816679 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Oct 13 05:49:27.816716 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:49:27.834100 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 05:49:27.839940 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 05:49:27.864140 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 05:49:27.868182 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 05:49:27.872364 augenrules[1589]: No rules Oct 13 05:49:27.875185 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:49:27.875358 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:49:27.892493 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 05:49:27.893404 systemd-resolved[1561]: Positive Trust Anchors: Oct 13 05:49:27.893774 systemd-resolved[1561]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:49:27.893831 systemd-resolved[1561]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:49:27.896317 systemd-resolved[1561]: Defaulting to hostname 'linux'. Oct 13 05:49:27.897509 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:49:27.897687 systemd[1]: Reached target network.target - Network. 
Oct 13 05:49:27.897781 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:49:27.898351 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 13 05:49:27.898535 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 05:49:27.913107 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:49:27.973968 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 05:49:27.974505 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 05:49:27.974554 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:49:27.974744 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 05:49:27.974909 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 05:49:27.975049 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 13 05:49:27.975259 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 05:49:27.975428 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 05:49:27.975557 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 05:49:27.975681 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 05:49:27.975702 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:49:27.975796 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:49:27.977551 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Oct 13 05:49:27.978733 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 05:49:27.980444 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 05:49:27.980685 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 05:49:27.980820 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 05:49:27.984186 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 05:49:27.984540 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 05:49:27.985252 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 05:49:27.985855 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:49:27.986011 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:49:27.986143 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:49:27.986162 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:49:27.987005 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 05:49:27.990073 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 05:49:27.991041 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 05:49:27.992568 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 05:49:27.995073 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 05:49:27.995212 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 05:49:27.997020 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Oct 13 05:49:28.001258 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 05:49:28.002528 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 05:49:28.006274 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 05:49:28.007293 jq[1606]: false Oct 13 05:49:28.010002 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 05:49:28.014848 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 05:49:28.017161 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 05:49:28.017727 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 05:49:28.018579 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 05:49:28.020013 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 05:49:28.022408 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Refreshing passwd entry cache Oct 13 05:49:28.022172 oslogin_cache_refresh[1608]: Refreshing passwd entry cache Oct 13 05:49:28.023020 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Oct 13 05:49:28.028280 extend-filesystems[1607]: Found /dev/sda6 Oct 13 05:49:28.032825 oslogin_cache_refresh[1608]: Failure getting users, quitting Oct 13 05:49:28.032033 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 05:49:28.034016 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Failure getting users, quitting Oct 13 05:49:28.034016 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Oct 13 05:49:28.032837 oslogin_cache_refresh[1608]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:49:28.032554 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 05:49:28.032868 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 05:49:28.036243 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 05:49:28.036391 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 05:49:28.040842 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Refreshing group entry cache Oct 13 05:49:28.040414 oslogin_cache_refresh[1608]: Refreshing group entry cache Oct 13 05:49:28.042187 extend-filesystems[1607]: Found /dev/sda9 Oct 13 05:49:28.043456 jq[1618]: true Oct 13 05:49:28.043844 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Failure getting groups, quitting Oct 13 05:49:28.043844 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:49:28.043840 oslogin_cache_refresh[1608]: Failure getting groups, quitting Oct 13 05:49:28.043847 oslogin_cache_refresh[1608]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:49:28.046276 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 13 05:49:28.046432 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 13 05:49:28.046529 extend-filesystems[1607]: Checking size of /dev/sda9 Oct 13 05:51:04.320568 systemd-timesyncd[1562]: Contacted time server 45.79.111.167:123 (0.flatcar.pool.ntp.org). Oct 13 05:51:04.320603 systemd-timesyncd[1562]: Initial clock synchronization to Mon 2025-10-13 05:51:04.320498 UTC. Oct 13 05:51:04.321362 systemd-resolved[1561]: Clock change detected. Flushing caches. 
Oct 13 05:51:04.329383 update_engine[1617]: I20251013 05:51:04.329335 1617 main.cc:92] Flatcar Update Engine starting Oct 13 05:51:04.336378 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 05:51:04.337677 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 05:51:04.338381 (ntainerd)[1643]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 05:51:04.338484 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Oct 13 05:51:04.343728 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Oct 13 05:51:04.351539 jq[1641]: true Oct 13 05:51:04.356788 extend-filesystems[1607]: Old size kept for /dev/sda9 Oct 13 05:51:04.357901 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 05:51:04.358278 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 05:51:04.382775 tar[1624]: linux-amd64/LICENSE Oct 13 05:51:04.385352 tar[1624]: linux-amd64/helm Oct 13 05:51:04.394366 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Oct 13 05:51:04.405278 systemd-logind[1616]: Watching system buttons on /dev/input/event2 (Power Button) Oct 13 05:51:04.406980 systemd-logind[1616]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 13 05:51:04.407968 unknown[1646]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Oct 13 05:51:04.408603 systemd-logind[1616]: New seat seat0. Oct 13 05:51:04.410120 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 05:51:04.412907 unknown[1646]: Core dump limit set to -1 Oct 13 05:51:04.435829 dbus-daemon[1604]: [system] SELinux support is enabled Oct 13 05:51:04.436142 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Oct 13 05:51:04.438280 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 05:51:04.438297 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 05:51:04.438615 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 05:51:04.438628 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 05:51:04.442280 bash[1674]: Updated "/home/core/.ssh/authorized_keys" Oct 13 05:51:04.442472 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 05:51:04.443297 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 13 05:51:04.446735 dbus-daemon[1604]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 13 05:51:04.452637 update_engine[1617]: I20251013 05:51:04.452268 1617 update_check_scheduler.cc:74] Next update check in 10m9s Oct 13 05:51:04.452749 systemd[1]: Started update-engine.service - Update Engine. Oct 13 05:51:04.460731 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 05:51:04.474029 sshd_keygen[1642]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 05:51:04.510388 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 05:51:04.513674 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 05:51:04.532863 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 05:51:04.533000 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 05:51:04.537979 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Oct 13 05:51:04.562249 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 05:51:04.564121 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 05:51:04.566752 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 13 05:51:04.567665 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 05:51:04.574857 locksmithd[1677]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 05:51:04.668347 containerd[1643]: time="2025-10-13T05:51:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 05:51:04.669210 containerd[1643]: time="2025-10-13T05:51:04.669193303Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 05:51:04.677048 containerd[1643]: time="2025-10-13T05:51:04.676973927Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.022µs" Oct 13 05:51:04.677125 containerd[1643]: time="2025-10-13T05:51:04.677114517Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 05:51:04.678349 containerd[1643]: time="2025-10-13T05:51:04.677167268Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 05:51:04.678349 containerd[1643]: time="2025-10-13T05:51:04.677674402Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 05:51:04.678349 containerd[1643]: time="2025-10-13T05:51:04.677685098Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 05:51:04.678349 containerd[1643]: time="2025-10-13T05:51:04.677701018Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Oct 13 05:51:04.678349 containerd[1643]: time="2025-10-13T05:51:04.677738981Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:51:04.678349 containerd[1643]: time="2025-10-13T05:51:04.677746783Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:51:04.678349 containerd[1643]: time="2025-10-13T05:51:04.677875216Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:51:04.678349 containerd[1643]: time="2025-10-13T05:51:04.677884511Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:51:04.678349 containerd[1643]: time="2025-10-13T05:51:04.677894569Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:51:04.678349 containerd[1643]: time="2025-10-13T05:51:04.677899578Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 05:51:04.678349 containerd[1643]: time="2025-10-13T05:51:04.677941579Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 05:51:04.678349 containerd[1643]: time="2025-10-13T05:51:04.678055929Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:51:04.678558 containerd[1643]: time="2025-10-13T05:51:04.678073165Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:51:04.678558 containerd[1643]: time="2025-10-13T05:51:04.678078722Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 05:51:04.678558 containerd[1643]: time="2025-10-13T05:51:04.678096371Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 05:51:04.678558 containerd[1643]: time="2025-10-13T05:51:04.678217623Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 05:51:04.678558 containerd[1643]: time="2025-10-13T05:51:04.678248819Z" level=info msg="metadata content store policy set" policy=shared Oct 13 05:51:04.708979 containerd[1643]: time="2025-10-13T05:51:04.708947329Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 05:51:04.709103 containerd[1643]: time="2025-10-13T05:51:04.709091072Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 05:51:04.709148 containerd[1643]: time="2025-10-13T05:51:04.709139153Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 05:51:04.709193 containerd[1643]: time="2025-10-13T05:51:04.709184512Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 05:51:04.709230 containerd[1643]: time="2025-10-13T05:51:04.709222822Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 05:51:04.709265 containerd[1643]: time="2025-10-13T05:51:04.709257473Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 05:51:04.709300 containerd[1643]: time="2025-10-13T05:51:04.709292471Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service 
type=io.containerd.service.v1
Oct 13 05:51:04.709340 containerd[1643]: time="2025-10-13T05:51:04.709331993Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 13 05:51:04.709376 containerd[1643]: time="2025-10-13T05:51:04.709369123Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 13 05:51:04.709412 containerd[1643]: time="2025-10-13T05:51:04.709404561Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 13 05:51:04.709444 containerd[1643]: time="2025-10-13T05:51:04.709437518Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 13 05:51:04.709480 containerd[1643]: time="2025-10-13T05:51:04.709472581Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 13 05:51:04.709630 containerd[1643]: time="2025-10-13T05:51:04.709619923Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 13 05:51:04.709672 containerd[1643]: time="2025-10-13T05:51:04.709665006Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 13 05:51:04.709718 containerd[1643]: time="2025-10-13T05:51:04.709710189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 13 05:51:04.709755 containerd[1643]: time="2025-10-13T05:51:04.709748168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 13 05:51:04.709791 containerd[1643]: time="2025-10-13T05:51:04.709783002Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 13 05:51:04.709825 containerd[1643]: time="2025-10-13T05:51:04.709818241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 13 05:51:04.709865 containerd[1643]: time="2025-10-13T05:51:04.709856975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 13 05:51:04.709904 containerd[1643]: time="2025-10-13T05:51:04.709896764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 13 05:51:04.709949 containerd[1643]: time="2025-10-13T05:51:04.709938154Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 13 05:51:04.710002 containerd[1643]: time="2025-10-13T05:51:04.709991236Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Oct 13 05:51:04.710053 containerd[1643]: time="2025-10-13T05:51:04.710044698Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Oct 13 05:51:04.710137 containerd[1643]: time="2025-10-13T05:51:04.710127147Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Oct 13 05:51:04.710179 containerd[1643]: time="2025-10-13T05:51:04.710171796Z" level=info msg="Start snapshots syncer"
Oct 13 05:51:04.710224 containerd[1643]: time="2025-10-13T05:51:04.710216149Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Oct 13 05:51:04.710421 containerd[1643]: time="2025-10-13T05:51:04.710398714Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Oct 13 05:51:04.710553 containerd[1643]: time="2025-10-13T05:51:04.710536166Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Oct 13 05:51:04.710630 containerd[1643]: time="2025-10-13T05:51:04.710619785Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Oct 13 05:51:04.710715 containerd[1643]: time="2025-10-13T05:51:04.710704870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Oct 13 05:51:04.710764 containerd[1643]: time="2025-10-13T05:51:04.710755880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Oct 13 05:51:04.710796 containerd[1643]: time="2025-10-13T05:51:04.710789775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Oct 13 05:51:04.710826 containerd[1643]: time="2025-10-13T05:51:04.710820014Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Oct 13 05:51:04.710858 containerd[1643]: time="2025-10-13T05:51:04.710851504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Oct 13 05:51:04.710889 containerd[1643]: time="2025-10-13T05:51:04.710882319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Oct 13 05:51:04.710934 containerd[1643]: time="2025-10-13T05:51:04.710926107Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Oct 13 05:51:04.710980 containerd[1643]: time="2025-10-13T05:51:04.710971838Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Oct 13 05:51:04.711013 containerd[1643]: time="2025-10-13T05:51:04.711006344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Oct 13 05:51:04.711045 containerd[1643]: time="2025-10-13T05:51:04.711038136Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Oct 13 05:51:04.711091 containerd[1643]: time="2025-10-13T05:51:04.711082771Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 13 05:51:04.711133 containerd[1643]: time="2025-10-13T05:51:04.711124980Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 13 05:51:04.711163 containerd[1643]: time="2025-10-13T05:51:04.711157083Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 13 05:51:04.711194 containerd[1643]: time="2025-10-13T05:51:04.711187435Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 13 05:51:04.711228 containerd[1643]: time="2025-10-13T05:51:04.711221196Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Oct 13 05:51:04.711263 containerd[1643]: time="2025-10-13T05:51:04.711256711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Oct 13 05:51:04.711296 containerd[1643]: time="2025-10-13T05:51:04.711289247Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Oct 13 05:51:04.711329 containerd[1643]: time="2025-10-13T05:51:04.711323579Z" level=info msg="runtime interface created"
Oct 13 05:51:04.711356 containerd[1643]: time="2025-10-13T05:51:04.711350613Z" level=info msg="created NRI interface"
Oct 13 05:51:04.711385 containerd[1643]: time="2025-10-13T05:51:04.711378586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Oct 13 05:51:04.711419 containerd[1643]: time="2025-10-13T05:51:04.711413117Z" level=info msg="Connect containerd service"
Oct 13 05:51:04.711461 containerd[1643]: time="2025-10-13T05:51:04.711454569Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 13 05:51:04.711915 containerd[1643]: time="2025-10-13T05:51:04.711902562Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 13 05:51:04.746491 tar[1624]: linux-amd64/README.md
Oct 13 05:51:04.762044 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 13 05:51:04.802470 containerd[1643]: time="2025-10-13T05:51:04.802437791Z" level=info msg="Start subscribing containerd event"
Oct 13 05:51:04.802600 containerd[1643]: time="2025-10-13T05:51:04.802475143Z" level=info msg="Start recovering state"
Oct 13 05:51:04.802600 containerd[1643]: time="2025-10-13T05:51:04.802558887Z" level=info msg="Start event monitor"
Oct 13 05:51:04.802600 containerd[1643]: time="2025-10-13T05:51:04.802567514Z" level=info msg="Start cni network conf syncer for default"
Oct 13 05:51:04.802600 containerd[1643]: time="2025-10-13T05:51:04.802571558Z" level=info msg="Start streaming server"
Oct 13 05:51:04.802600 containerd[1643]: time="2025-10-13T05:51:04.802579023Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Oct 13 05:51:04.802600 containerd[1643]: time="2025-10-13T05:51:04.802583049Z" level=info msg="runtime interface starting up..."
Oct 13 05:51:04.802600 containerd[1643]: time="2025-10-13T05:51:04.802585974Z" level=info msg="starting plugins..."
Oct 13 05:51:04.802600 containerd[1643]: time="2025-10-13T05:51:04.802594313Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Oct 13 05:51:04.802846 containerd[1643]: time="2025-10-13T05:51:04.802804530Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 13 05:51:04.802846 containerd[1643]: time="2025-10-13T05:51:04.802839279Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 13 05:51:04.802909 containerd[1643]: time="2025-10-13T05:51:04.802881696Z" level=info msg="containerd successfully booted in 0.134774s"
Oct 13 05:51:04.803612 systemd[1]: Started containerd.service - containerd container runtime.
Oct 13 05:51:05.726772 systemd-networkd[1430]: ens192: Gained IPv6LL
Oct 13 05:51:05.728328 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 13 05:51:05.728782 systemd[1]: Reached target network-online.target - Network is Online.
Oct 13 05:51:05.729834 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
Oct 13 05:51:05.731143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:51:05.732721 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 13 05:51:05.759523 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 13 05:51:05.786634 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 13 05:51:05.786773 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
Oct 13 05:51:05.787123 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 13 05:51:07.370723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:51:07.371492 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 13 05:51:07.371796 systemd[1]: Startup finished in 2.640s (kernel) + 5.190s (initrd) + 6.352s (userspace) = 14.184s.
Oct 13 05:51:07.376766 (kubelet)[1802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:51:07.418503 login[1702]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Oct 13 05:51:07.420597 login[1703]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Oct 13 05:51:07.432594 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 13 05:51:07.433224 systemd-logind[1616]: New session 1 of user core.
Oct 13 05:51:07.433650 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 13 05:51:07.438068 systemd-logind[1616]: New session 2 of user core.
Oct 13 05:51:07.449278 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 13 05:51:07.450995 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 13 05:51:07.461181 (systemd)[1809]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 13 05:51:07.463642 systemd-logind[1616]: New session c1 of user core.
Oct 13 05:51:07.641799 systemd[1809]: Queued start job for default target default.target.
Oct 13 05:51:07.646757 systemd[1809]: Created slice app.slice - User Application Slice.
Oct 13 05:51:07.646776 systemd[1809]: Reached target paths.target - Paths.
Oct 13 05:51:07.646806 systemd[1809]: Reached target timers.target - Timers.
Oct 13 05:51:07.647516 systemd[1809]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 13 05:51:07.656999 systemd[1809]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 13 05:51:07.657039 systemd[1809]: Reached target sockets.target - Sockets.
Oct 13 05:51:07.657099 systemd[1809]: Reached target basic.target - Basic System.
Oct 13 05:51:07.657150 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 13 05:51:07.657334 systemd[1809]: Reached target default.target - Main User Target.
Oct 13 05:51:07.657355 systemd[1809]: Startup finished in 189ms.
Oct 13 05:51:07.662616 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 13 05:51:07.663606 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 13 05:51:08.398629 kubelet[1802]: E1013 05:51:08.398590 1802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:51:08.400257 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:51:08.400347 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:51:08.400582 systemd[1]: kubelet.service: Consumed 665ms CPU time, 258.1M memory peak.
Oct 13 05:51:18.650695 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 13 05:51:18.652036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:51:18.881188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:51:18.886753 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:51:18.921293 kubelet[1853]: E1013 05:51:18.921222 1853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:51:18.923674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:51:18.923851 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:51:18.924239 systemd[1]: kubelet.service: Consumed 106ms CPU time, 110.2M memory peak.
Oct 13 05:51:29.174196 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 13 05:51:29.175830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:51:29.412305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:51:29.415753 (kubelet)[1868]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:51:29.440871 kubelet[1868]: E1013 05:51:29.440802 1868 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:51:29.442443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:51:29.442594 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:51:29.442957 systemd[1]: kubelet.service: Consumed 99ms CPU time, 109.8M memory peak.
Oct 13 05:51:34.506414 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 13 05:51:34.507998 systemd[1]: Started sshd@0-139.178.70.106:22-147.75.109.163:44800.service - OpenSSH per-connection server daemon (147.75.109.163:44800).
Oct 13 05:51:34.608909 sshd[1875]: Accepted publickey for core from 147.75.109.163 port 44800 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:51:34.609519 sshd-session[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:51:34.612038 systemd-logind[1616]: New session 3 of user core.
Oct 13 05:51:34.622730 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 13 05:51:34.678678 systemd[1]: Started sshd@1-139.178.70.106:22-147.75.109.163:44804.service - OpenSSH per-connection server daemon (147.75.109.163:44804).
Oct 13 05:51:34.721040 sshd[1881]: Accepted publickey for core from 147.75.109.163 port 44804 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:51:34.722384 sshd-session[1881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:51:34.724971 systemd-logind[1616]: New session 4 of user core.
Oct 13 05:51:34.734784 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 13 05:51:34.783986 sshd[1884]: Connection closed by 147.75.109.163 port 44804
Oct 13 05:51:34.784380 sshd-session[1881]: pam_unix(sshd:session): session closed for user core
Oct 13 05:51:34.791176 systemd[1]: sshd@1-139.178.70.106:22-147.75.109.163:44804.service: Deactivated successfully.
Oct 13 05:51:34.792707 systemd[1]: session-4.scope: Deactivated successfully.
Oct 13 05:51:34.793291 systemd-logind[1616]: Session 4 logged out. Waiting for processes to exit.
Oct 13 05:51:34.795294 systemd[1]: Started sshd@2-139.178.70.106:22-147.75.109.163:44808.service - OpenSSH per-connection server daemon (147.75.109.163:44808).
Oct 13 05:51:34.796262 systemd-logind[1616]: Removed session 4.
Oct 13 05:51:34.832973 sshd[1890]: Accepted publickey for core from 147.75.109.163 port 44808 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:51:34.833639 sshd-session[1890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:51:34.836260 systemd-logind[1616]: New session 5 of user core.
Oct 13 05:51:34.846648 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 13 05:51:34.893863 sshd[1893]: Connection closed by 147.75.109.163 port 44808
Oct 13 05:51:34.894238 sshd-session[1890]: pam_unix(sshd:session): session closed for user core
Oct 13 05:51:34.906864 systemd[1]: sshd@2-139.178.70.106:22-147.75.109.163:44808.service: Deactivated successfully.
Oct 13 05:51:34.908096 systemd[1]: session-5.scope: Deactivated successfully.
Oct 13 05:51:34.909106 systemd-logind[1616]: Session 5 logged out. Waiting for processes to exit.
Oct 13 05:51:34.911121 systemd[1]: Started sshd@3-139.178.70.106:22-147.75.109.163:44822.service - OpenSSH per-connection server daemon (147.75.109.163:44822).
Oct 13 05:51:34.911918 systemd-logind[1616]: Removed session 5.
Oct 13 05:51:34.946029 sshd[1899]: Accepted publickey for core from 147.75.109.163 port 44822 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:51:34.946850 sshd-session[1899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:51:34.950781 systemd-logind[1616]: New session 6 of user core.
Oct 13 05:51:34.957634 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 13 05:51:35.006436 sshd[1902]: Connection closed by 147.75.109.163 port 44822
Oct 13 05:51:35.006702 sshd-session[1899]: pam_unix(sshd:session): session closed for user core
Oct 13 05:51:35.014101 systemd[1]: sshd@3-139.178.70.106:22-147.75.109.163:44822.service: Deactivated successfully.
Oct 13 05:51:35.015110 systemd[1]: session-6.scope: Deactivated successfully.
Oct 13 05:51:35.015783 systemd-logind[1616]: Session 6 logged out. Waiting for processes to exit.
Oct 13 05:51:35.017080 systemd[1]: Started sshd@4-139.178.70.106:22-147.75.109.163:44826.service - OpenSSH per-connection server daemon (147.75.109.163:44826).
Oct 13 05:51:35.019219 systemd-logind[1616]: Removed session 6.
Oct 13 05:51:35.053798 sshd[1908]: Accepted publickey for core from 147.75.109.163 port 44826 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:51:35.055023 sshd-session[1908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:51:35.058388 systemd-logind[1616]: New session 7 of user core.
Oct 13 05:51:35.064640 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 13 05:51:35.120864 sudo[1912]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 13 05:51:35.121037 sudo[1912]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 05:51:35.130793 sudo[1912]: pam_unix(sudo:session): session closed for user root
Oct 13 05:51:35.131646 sshd[1911]: Connection closed by 147.75.109.163 port 44826
Oct 13 05:51:35.132030 sshd-session[1908]: pam_unix(sshd:session): session closed for user core
Oct 13 05:51:35.145548 systemd[1]: sshd@4-139.178.70.106:22-147.75.109.163:44826.service: Deactivated successfully.
Oct 13 05:51:35.146540 systemd[1]: session-7.scope: Deactivated successfully.
Oct 13 05:51:35.147082 systemd-logind[1616]: Session 7 logged out. Waiting for processes to exit.
Oct 13 05:51:35.148778 systemd[1]: Started sshd@5-139.178.70.106:22-147.75.109.163:44840.service - OpenSSH per-connection server daemon (147.75.109.163:44840).
Oct 13 05:51:35.149390 systemd-logind[1616]: Removed session 7.
Oct 13 05:51:35.177985 sshd[1918]: Accepted publickey for core from 147.75.109.163 port 44840 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:51:35.178877 sshd-session[1918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:51:35.182124 systemd-logind[1616]: New session 8 of user core.
Oct 13 05:51:35.192641 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 13 05:51:35.241980 sudo[1923]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 13 05:51:35.242169 sudo[1923]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 05:51:35.245019 sudo[1923]: pam_unix(sudo:session): session closed for user root
Oct 13 05:51:35.248804 sudo[1922]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 13 05:51:35.249020 sudo[1922]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 05:51:35.255848 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 13 05:51:35.286412 augenrules[1945]: No rules
Oct 13 05:51:35.286753 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 13 05:51:35.286893 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 13 05:51:35.288930 sudo[1922]: pam_unix(sudo:session): session closed for user root
Oct 13 05:51:35.289853 sshd[1921]: Connection closed by 147.75.109.163 port 44840
Oct 13 05:51:35.289809 sshd-session[1918]: pam_unix(sshd:session): session closed for user core
Oct 13 05:51:35.296018 systemd[1]: sshd@5-139.178.70.106:22-147.75.109.163:44840.service: Deactivated successfully.
Oct 13 05:51:35.297317 systemd[1]: session-8.scope: Deactivated successfully.
Oct 13 05:51:35.298151 systemd-logind[1616]: Session 8 logged out. Waiting for processes to exit.
Oct 13 05:51:35.300352 systemd[1]: Started sshd@6-139.178.70.106:22-147.75.109.163:44854.service - OpenSSH per-connection server daemon (147.75.109.163:44854).
Oct 13 05:51:35.301623 systemd-logind[1616]: Removed session 8.
Oct 13 05:51:35.336602 sshd[1954]: Accepted publickey for core from 147.75.109.163 port 44854 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:51:35.337349 sshd-session[1954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:51:35.340568 systemd-logind[1616]: New session 9 of user core.
Oct 13 05:51:35.358796 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 13 05:51:35.407566 sudo[1958]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 13 05:51:35.407774 sudo[1958]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 05:51:35.773815 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 13 05:51:35.782837 (dockerd)[1976]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 13 05:51:36.055961 dockerd[1976]: time="2025-10-13T05:51:36.055473336Z" level=info msg="Starting up"
Oct 13 05:51:36.056698 dockerd[1976]: time="2025-10-13T05:51:36.056687294Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Oct 13 05:51:36.063458 dockerd[1976]: time="2025-10-13T05:51:36.063421486Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Oct 13 05:51:36.091990 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1004107024-merged.mount: Deactivated successfully.
Oct 13 05:51:36.161482 dockerd[1976]: time="2025-10-13T05:51:36.161352185Z" level=info msg="Loading containers: start."
Oct 13 05:51:36.170551 kernel: Initializing XFRM netlink socket
Oct 13 05:51:36.408873 systemd-networkd[1430]: docker0: Link UP
Oct 13 05:51:36.410128 dockerd[1976]: time="2025-10-13T05:51:36.410106719Z" level=info msg="Loading containers: done."
Oct 13 05:51:36.429053 dockerd[1976]: time="2025-10-13T05:51:36.429023071Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 13 05:51:36.429149 dockerd[1976]: time="2025-10-13T05:51:36.429082218Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 13 05:51:36.429149 dockerd[1976]: time="2025-10-13T05:51:36.429128737Z" level=info msg="Initializing buildkit"
Oct 13 05:51:36.477804 dockerd[1976]: time="2025-10-13T05:51:36.477771151Z" level=info msg="Completed buildkit initialization"
Oct 13 05:51:36.481009 dockerd[1976]: time="2025-10-13T05:51:36.480991357Z" level=info msg="Daemon has completed initialization"
Oct 13 05:51:36.481171 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 13 05:51:36.482117 dockerd[1976]: time="2025-10-13T05:51:36.481071300Z" level=info msg="API listen on /run/docker.sock"
Oct 13 05:51:37.554528 containerd[1643]: time="2025-10-13T05:51:37.554483835Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\""
Oct 13 05:51:38.126950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3421036416.mount: Deactivated successfully.
Oct 13 05:51:39.080556 containerd[1643]: time="2025-10-13T05:51:39.080383164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:39.085652 containerd[1643]: time="2025-10-13T05:51:39.085631428Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392"
Oct 13 05:51:39.090721 containerd[1643]: time="2025-10-13T05:51:39.090700993Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:39.095597 containerd[1643]: time="2025-10-13T05:51:39.095576290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:39.096338 containerd[1643]: time="2025-10-13T05:51:39.096316319Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.541809415s"
Oct 13 05:51:39.096376 containerd[1643]: time="2025-10-13T05:51:39.096340196Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\""
Oct 13 05:51:39.096947 containerd[1643]: time="2025-10-13T05:51:39.096914765Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\""
Oct 13 05:51:39.693015 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 13 05:51:39.695637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:51:40.100068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:51:40.107760 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:51:40.137026 kubelet[2254]: E1013 05:51:40.136990 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:51:40.138616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:51:40.138766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:51:40.139173 systemd[1]: kubelet.service: Consumed 100ms CPU time, 110.6M memory peak.
Oct 13 05:51:40.791559 containerd[1643]: time="2025-10-13T05:51:40.791049202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:40.795127 containerd[1643]: time="2025-10-13T05:51:40.795110623Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757"
Oct 13 05:51:40.797600 containerd[1643]: time="2025-10-13T05:51:40.797580953Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:40.803918 containerd[1643]: time="2025-10-13T05:51:40.803894431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:40.804388 containerd[1643]: time="2025-10-13T05:51:40.804238786Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.707299316s"
Oct 13 05:51:40.804388 containerd[1643]: time="2025-10-13T05:51:40.804257508Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\""
Oct 13 05:51:40.804917 containerd[1643]: time="2025-10-13T05:51:40.804523857Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\""
Oct 13 05:51:42.560195 containerd[1643]: time="2025-10-13T05:51:42.560163396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:42.561004 containerd[1643]: time="2025-10-13T05:51:42.560980663Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093"
Oct 13 05:51:42.561267 containerd[1643]: time="2025-10-13T05:51:42.561252261Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:42.563488 containerd[1643]: time="2025-10-13T05:51:42.562841258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:42.563488 containerd[1643]: time="2025-10-13T05:51:42.563400544Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.758852183s"
Oct 13 05:51:42.563488 containerd[1643]: time="2025-10-13T05:51:42.563416528Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\""
Oct 13 05:51:42.563875 containerd[1643]: time="2025-10-13T05:51:42.563818900Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\""
Oct 13 05:51:43.476451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1962123464.mount: Deactivated successfully.
Oct 13 05:51:43.869245 containerd[1643]: time="2025-10-13T05:51:43.869156356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:43.876862 containerd[1643]: time="2025-10-13T05:51:43.876717325Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699"
Oct 13 05:51:43.882985 containerd[1643]: time="2025-10-13T05:51:43.882962583Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:43.887802 containerd[1643]: time="2025-10-13T05:51:43.887779659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:43.888242 containerd[1643]: time="2025-10-13T05:51:43.888221980Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.324282514s"
Oct 13 05:51:43.888310 containerd[1643]: time="2025-10-13T05:51:43.888296970Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\""
Oct 13 05:51:43.888769 containerd[1643]: time="2025-10-13T05:51:43.888708424Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Oct 13 05:51:44.646465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157987020.mount: Deactivated successfully.
Oct 13 05:51:45.542967 containerd[1643]: time="2025-10-13T05:51:45.542377382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:45.542967 containerd[1643]: time="2025-10-13T05:51:45.542929324Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Oct 13 05:51:45.543269 containerd[1643]: time="2025-10-13T05:51:45.543174543Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:45.545584 containerd[1643]: time="2025-10-13T05:51:45.545567201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:45.546982 containerd[1643]: time="2025-10-13T05:51:45.546963721Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.658054772s"
Oct 13 05:51:45.547057 containerd[1643]: time="2025-10-13T05:51:45.547043726Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Oct 13 05:51:45.547470 containerd[1643]: time="2025-10-13T05:51:45.547446846Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Oct 13 05:51:46.044558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2459839662.mount: Deactivated successfully.
Oct 13 05:51:46.047501 containerd[1643]: time="2025-10-13T05:51:46.047094750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:46.047501 containerd[1643]: time="2025-10-13T05:51:46.047455250Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Oct 13 05:51:46.047501 containerd[1643]: time="2025-10-13T05:51:46.047482886Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:46.048458 containerd[1643]: time="2025-10-13T05:51:46.048446813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:46.048876 containerd[1643]: time="2025-10-13T05:51:46.048860145Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 501.389871ms"
Oct 13 05:51:46.048905 containerd[1643]: time="2025-10-13T05:51:46.048876768Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Oct 13 05:51:46.049379 containerd[1643]: time="2025-10-13T05:51:46.049370579Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Oct 13 05:51:48.675759 containerd[1643]: time="2025-10-13T05:51:48.675731443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:48.685252 containerd[1643]: time="2025-10-13T05:51:48.685226029Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593"
Oct 13 05:51:48.690565 containerd[1643]: time="2025-10-13T05:51:48.690539794Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:48.700976 containerd[1643]: time="2025-10-13T05:51:48.700952537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:51:48.701578 containerd[1643]: time="2025-10-13T05:51:48.701457179Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.652039041s"
Oct 13 05:51:48.701578 containerd[1643]: time="2025-10-13T05:51:48.701477571Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Oct 13 05:51:50.201916 update_engine[1617]: I20251013 05:51:50.201584 1617 update_attempter.cc:509] Updating boot flags...
Oct 13 05:51:50.207576 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Oct 13 05:51:50.211639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:51:50.834606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:51:50.839789 (kubelet)[2421]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:51:50.877568 kubelet[2421]: E1013 05:51:50.876028 2421 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:51:50.879818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:51:50.879902 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:51:50.880106 systemd[1]: kubelet.service: Consumed 100ms CPU time, 112.2M memory peak.
Oct 13 05:51:51.332897 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:51:51.333130 systemd[1]: kubelet.service: Consumed 100ms CPU time, 112.2M memory peak.
Oct 13 05:51:51.335485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:51:51.351807 systemd[1]: Reload requested from client PID 2436 ('systemctl') (unit session-9.scope)...
Oct 13 05:51:51.351879 systemd[1]: Reloading...
Oct 13 05:51:51.434562 zram_generator::config[2483]: No configuration found.
Oct 13 05:51:51.496831 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Oct 13 05:51:51.564009 systemd[1]: Reloading finished in 211 ms.
Oct 13 05:51:51.594340 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 13 05:51:51.594409 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 13 05:51:51.594738 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:51:51.596385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:51:51.893438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:51:51.900724 (kubelet)[2547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 13 05:51:51.927283 kubelet[2547]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 13 05:51:51.927283 kubelet[2547]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 13 05:51:51.947554 kubelet[2547]: I1013 05:51:51.947327 2547 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 13 05:51:52.143782 kubelet[2547]: I1013 05:51:52.143678 2547 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Oct 13 05:51:52.143782 kubelet[2547]: I1013 05:51:52.143697 2547 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 13 05:51:52.144402 kubelet[2547]: I1013 05:51:52.144385 2547 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Oct 13 05:51:52.144434 kubelet[2547]: I1013 05:51:52.144403 2547 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 13 05:51:52.144646 kubelet[2547]: I1013 05:51:52.144635 2547 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 13 05:51:52.155557 kubelet[2547]: E1013 05:51:52.155163 2547 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Oct 13 05:51:52.157656 kubelet[2547]: I1013 05:51:52.156216 2547 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 13 05:51:52.162405 kubelet[2547]: I1013 05:51:52.162387 2547 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 13 05:51:52.167276 kubelet[2547]: I1013 05:51:52.167260 2547 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Oct 13 05:51:52.170186 kubelet[2547]: I1013 05:51:52.170163 2547 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 13 05:51:52.171271 kubelet[2547]: I1013 05:51:52.170183 2547 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 13 05:51:52.171360 kubelet[2547]: I1013 05:51:52.171272 2547 topology_manager.go:138] "Creating topology manager with none policy"
Oct 13 05:51:52.171360 kubelet[2547]: I1013 05:51:52.171279 2547 container_manager_linux.go:306] "Creating device plugin manager"
Oct 13 05:51:52.171360 kubelet[2547]: I1013 05:51:52.171337 2547 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Oct 13 05:51:52.172105 kubelet[2547]: I1013 05:51:52.172088 2547 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 05:51:52.172814 kubelet[2547]: I1013 05:51:52.172800 2547 kubelet.go:475] "Attempting to sync node with API server"
Oct 13 05:51:52.172814 kubelet[2547]: I1013 05:51:52.172812 2547 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 13 05:51:52.172860 kubelet[2547]: I1013 05:51:52.172827 2547 kubelet.go:387] "Adding apiserver pod source"
Oct 13 05:51:52.172860 kubelet[2547]: I1013 05:51:52.172838 2547 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 13 05:51:52.174457 kubelet[2547]: E1013 05:51:52.174439 2547 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 13 05:51:52.175780 kubelet[2547]: E1013 05:51:52.175730 2547 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 13 05:51:52.176766 kubelet[2547]: I1013 05:51:52.176741 2547 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 13 05:51:52.179240 kubelet[2547]: I1013 05:51:52.179150 2547 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 13 05:51:52.179240 kubelet[2547]: I1013 05:51:52.179171 2547 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Oct 13 05:51:52.181373 kubelet[2547]: W1013 05:51:52.180833 2547 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 13 05:51:52.184692 kubelet[2547]: I1013 05:51:52.184681 2547 server.go:1262] "Started kubelet"
Oct 13 05:51:52.185931 kubelet[2547]: I1013 05:51:52.185921 2547 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 13 05:51:52.189864 kubelet[2547]: E1013 05:51:52.187898 2547 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.106:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186df718158e37d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:51:52.184649683 +0000 UTC m=+0.281528972,LastTimestamp:2025-10-13 05:51:52.184649683 +0000 UTC m=+0.281528972,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 13 05:51:52.191998 kubelet[2547]: I1013 05:51:52.190004 2547 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 13 05:51:52.192899 kubelet[2547]: I1013 05:51:52.192369 2547 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Oct 13 05:51:52.195422 kubelet[2547]: I1013 05:51:52.195245 2547 volume_manager.go:313] "Starting Kubelet Volume Manager"
Oct 13 05:51:52.195422 kubelet[2547]: E1013 05:51:52.195389 2547 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:51:52.196307 kubelet[2547]: I1013 05:51:52.196298 2547 server.go:310] "Adding debug handlers to kubelet server"
Oct 13 05:51:52.199738 kubelet[2547]: I1013 05:51:52.199725 2547 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 13 05:51:52.199779 kubelet[2547]: I1013 05:51:52.199758 2547 reconciler.go:29] "Reconciler: start to sync state"
Oct 13 05:51:52.199820 kubelet[2547]: I1013 05:51:52.199729 2547 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 13 05:51:52.199915 kubelet[2547]: I1013 05:51:52.199906 2547 server_v1.go:49] "podresources" method="list" useActivePods=true
Oct 13 05:51:52.200057 kubelet[2547]: I1013 05:51:52.200049 2547 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 13 05:51:52.200895 kubelet[2547]: I1013 05:51:52.200885 2547 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 13 05:51:52.201645 kubelet[2547]: E1013 05:51:52.201628 2547 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 13 05:51:52.202547 kubelet[2547]: E1013 05:51:52.202099 2547 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="200ms"
Oct 13 05:51:52.202547 kubelet[2547]: I1013 05:51:52.202313 2547 factory.go:223] Registration of the systemd container factory successfully
Oct 13 05:51:52.204700 kubelet[2547]: I1013 05:51:52.204586 2547 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 13 05:51:52.207921 kubelet[2547]: I1013 05:51:52.207910 2547 factory.go:223] Registration of the containerd container factory successfully
Oct 13 05:51:52.208526 kubelet[2547]: I1013 05:51:52.208511 2547 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Oct 13 05:51:52.208526 kubelet[2547]: I1013 05:51:52.208523 2547 status_manager.go:244] "Starting to sync pod status with apiserver"
Oct 13 05:51:52.208747 kubelet[2547]: I1013 05:51:52.208564 2547 kubelet.go:2427] "Starting kubelet main sync loop"
Oct 13 05:51:52.208747 kubelet[2547]: E1013 05:51:52.208589 2547 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 13 05:51:52.223200 kubelet[2547]: E1013 05:51:52.223179 2547 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 13 05:51:52.223305 kubelet[2547]: E1013 05:51:52.223272 2547 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 13 05:51:52.227519 kubelet[2547]: I1013 05:51:52.227504 2547 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 13 05:51:52.227519 kubelet[2547]: I1013 05:51:52.227514 2547 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 13 05:51:52.227607 kubelet[2547]: I1013 05:51:52.227523 2547 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 05:51:52.296025 kubelet[2547]: E1013 05:51:52.295999 2547 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:51:52.309290 kubelet[2547]: E1013 05:51:52.309276 2547 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 13 05:51:52.314090 kubelet[2547]: I1013 05:51:52.314024 2547 policy_none.go:49] "None policy: Start"
Oct 13 05:51:52.314090 kubelet[2547]: I1013 05:51:52.314035 2547 memory_manager.go:187] "Starting memorymanager" policy="None"
Oct 13 05:51:52.314090 kubelet[2547]: I1013 05:51:52.314043 2547 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Oct 13 05:51:52.331579 kubelet[2547]: I1013 05:51:52.331238 2547 policy_none.go:47] "Start"
Oct 13 05:51:52.334819 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 13 05:51:52.346793 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 13 05:51:52.348854 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 13 05:51:52.358961 kubelet[2547]: E1013 05:51:52.358945 2547 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 13 05:51:52.359089 kubelet[2547]: I1013 05:51:52.359077 2547 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 13 05:51:52.359114 kubelet[2547]: I1013 05:51:52.359089 2547 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 13 05:51:52.359431 kubelet[2547]: I1013 05:51:52.359420 2547 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 13 05:51:52.360442 kubelet[2547]: E1013 05:51:52.360314 2547 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 13 05:51:52.360442 kubelet[2547]: E1013 05:51:52.360344 2547 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 13 05:51:52.402787 kubelet[2547]: E1013 05:51:52.402716 2547 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="400ms"
Oct 13 05:51:52.460558 kubelet[2547]: I1013 05:51:52.460519 2547 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 13 05:51:52.460772 kubelet[2547]: E1013 05:51:52.460748 2547 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost"
Oct 13 05:51:52.518634 systemd[1]: Created slice kubepods-burstable-pode412733ae7e9e3b7df6d21449fc6893f.slice - libcontainer container kubepods-burstable-pode412733ae7e9e3b7df6d21449fc6893f.slice.
Oct 13 05:51:52.534286 kubelet[2547]: E1013 05:51:52.534272 2547 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 13 05:51:52.536049 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice.
Oct 13 05:51:52.554486 kubelet[2547]: E1013 05:51:52.554463 2547 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 13 05:51:52.556480 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice.
Oct 13 05:51:52.557474 kubelet[2547]: E1013 05:51:52.557462 2547 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 13 05:51:52.602039 kubelet[2547]: I1013 05:51:52.602015 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e412733ae7e9e3b7df6d21449fc6893f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e412733ae7e9e3b7df6d21449fc6893f\") " pod="kube-system/kube-apiserver-localhost"
Oct 13 05:51:52.602039 kubelet[2547]: I1013 05:51:52.602036 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e412733ae7e9e3b7df6d21449fc6893f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e412733ae7e9e3b7df6d21449fc6893f\") " pod="kube-system/kube-apiserver-localhost"
Oct 13 05:51:52.602152 kubelet[2547]: I1013 05:51:52.602048 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 13 05:51:52.602152 kubelet[2547]: I1013 05:51:52.602077 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 13 05:51:52.602152 kubelet[2547]: I1013 05:51:52.602086 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost"
Oct 13 05:51:52.602152 kubelet[2547]: I1013 05:51:52.602094 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e412733ae7e9e3b7df6d21449fc6893f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e412733ae7e9e3b7df6d21449fc6893f\") " pod="kube-system/kube-apiserver-localhost"
Oct 13 05:51:52.602152 kubelet[2547]: I1013 05:51:52.602108 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 13 05:51:52.602245 kubelet[2547]: I1013 05:51:52.602116 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 13 05:51:52.602245 kubelet[2547]: I1013 05:51:52.602124 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 13 05:51:52.662602 kubelet[2547]: I1013 05:51:52.662372 2547 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 13 05:51:52.662756 kubelet[2547]: E1013 05:51:52.662738 2547 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost"
Oct 13 05:51:52.804044 kubelet[2547]: E1013 05:51:52.804012 2547 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="800ms"
Oct 13 05:51:52.836698 containerd[1643]: time="2025-10-13T05:51:52.836667369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e412733ae7e9e3b7df6d21449fc6893f,Namespace:kube-system,Attempt:0,}"
Oct 13 05:51:52.855783 containerd[1643]: time="2025-10-13T05:51:52.855753243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}"
Oct 13 05:51:52.858991 containerd[1643]: time="2025-10-13T05:51:52.858609614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}"
Oct 13 05:51:53.064048 kubelet[2547]: I1013 05:51:53.064001 2547 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 13 05:51:53.064496 kubelet[2547]: E1013 05:51:53.064479 2547 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost"
Oct 13 05:51:53.259693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943325414.mount: Deactivated successfully.
Oct 13 05:51:53.261177 kubelet[2547]: E1013 05:51:53.261158 2547 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 13 05:51:53.262396 containerd[1643]: time="2025-10-13T05:51:53.262302422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 05:51:53.262872 containerd[1643]: time="2025-10-13T05:51:53.262853222Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 05:51:53.263215 containerd[1643]: time="2025-10-13T05:51:53.263199004Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Oct 13 05:51:53.263654 containerd[1643]: time="2025-10-13T05:51:53.263631646Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 05:51:53.263990 containerd[1643]: time="2025-10-13T05:51:53.263864974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Oct 13 05:51:53.264628 containerd[1643]: time="2025-10-13T05:51:53.264617326Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Oct 13 05:51:53.265026 containerd[1643]: time="2025-10-13T05:51:53.265011854Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 05:51:53.265486 containerd[1643]: time="2025-10-13T05:51:53.265474395Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 404.758871ms"
Oct 13 05:51:53.265705 containerd[1643]: time="2025-10-13T05:51:53.265679335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 05:51:53.266895 containerd[1643]: time="2025-10-13T05:51:53.266813801Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 421.529039ms" Oct 13 05:51:53.267451 containerd[1643]: time="2025-10-13T05:51:53.267435578Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 406.720413ms" Oct 13 05:51:53.346931 containerd[1643]: time="2025-10-13T05:51:53.345649616Z" level=info msg="connecting to shim f7293e3e455f98be41fe1f0f3cdb76ac8118a6a0d3f9263f3772585979556162" address="unix:///run/containerd/s/eb5aadb4e818fca49ed3808b44105bd0ad7caf439a8d4c5f89e51b0351db9715" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:51:53.347954 containerd[1643]: time="2025-10-13T05:51:53.347884415Z" level=info msg="connecting to shim b38141772ab2211a87c82dfba33b7be549cc80e877fb93d10908e5bd56465304" address="unix:///run/containerd/s/a90379f7d01ab8b64c7be96cd9522969e8c2caaaf90e73407ce3ccf5aaa7e73c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:51:53.350052 containerd[1643]: time="2025-10-13T05:51:53.350037289Z" level=info msg="connecting to shim f0d585cfbecab988a98906fc0370e7cecd517c80500b82599b890b6467912f3b" address="unix:///run/containerd/s/8053330adbb9861b1cf70bd968178cf836f749eebd133903597b09d95cf1e1e9" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:51:53.415631 systemd[1]: Started cri-containerd-b38141772ab2211a87c82dfba33b7be549cc80e877fb93d10908e5bd56465304.scope - libcontainer container b38141772ab2211a87c82dfba33b7be549cc80e877fb93d10908e5bd56465304. Oct 13 05:51:53.416815 systemd[1]: Started cri-containerd-f0d585cfbecab988a98906fc0370e7cecd517c80500b82599b890b6467912f3b.scope - libcontainer container f0d585cfbecab988a98906fc0370e7cecd517c80500b82599b890b6467912f3b. 
Oct 13 05:51:53.417878 systemd[1]: Started cri-containerd-f7293e3e455f98be41fe1f0f3cdb76ac8118a6a0d3f9263f3772585979556162.scope - libcontainer container f7293e3e455f98be41fe1f0f3cdb76ac8118a6a0d3f9263f3772585979556162. Oct 13 05:51:53.469243 containerd[1643]: time="2025-10-13T05:51:53.469222022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b38141772ab2211a87c82dfba33b7be549cc80e877fb93d10908e5bd56465304\"" Oct 13 05:51:53.479993 containerd[1643]: time="2025-10-13T05:51:53.479968722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7293e3e455f98be41fe1f0f3cdb76ac8118a6a0d3f9263f3772585979556162\"" Oct 13 05:51:53.481797 containerd[1643]: time="2025-10-13T05:51:53.481757094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e412733ae7e9e3b7df6d21449fc6893f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0d585cfbecab988a98906fc0370e7cecd517c80500b82599b890b6467912f3b\"" Oct 13 05:51:53.482353 containerd[1643]: time="2025-10-13T05:51:53.482341968Z" level=info msg="CreateContainer within sandbox \"b38141772ab2211a87c82dfba33b7be549cc80e877fb93d10908e5bd56465304\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:51:53.491341 containerd[1643]: time="2025-10-13T05:51:53.491323893Z" level=info msg="Container 39bb3e9c5aa6e7dfe0d061b0cffdb48934642cda8d59b7caba28bf05f1acc424: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:51:53.498117 containerd[1643]: time="2025-10-13T05:51:53.498100257Z" level=info msg="CreateContainer within sandbox \"f7293e3e455f98be41fe1f0f3cdb76ac8118a6a0d3f9263f3772585979556162\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 05:51:53.499994 containerd[1643]: 
time="2025-10-13T05:51:53.499944654Z" level=info msg="CreateContainer within sandbox \"f0d585cfbecab988a98906fc0370e7cecd517c80500b82599b890b6467912f3b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:51:53.501726 containerd[1643]: time="2025-10-13T05:51:53.501706132Z" level=info msg="CreateContainer within sandbox \"b38141772ab2211a87c82dfba33b7be549cc80e877fb93d10908e5bd56465304\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"39bb3e9c5aa6e7dfe0d061b0cffdb48934642cda8d59b7caba28bf05f1acc424\"" Oct 13 05:51:53.501836 containerd[1643]: time="2025-10-13T05:51:53.501821398Z" level=info msg="Container 0e0ae46c212090afbe3c22d8c00e9269855875ca06528f8e5e045cfb6642bbb3: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:51:53.503063 containerd[1643]: time="2025-10-13T05:51:53.503052108Z" level=info msg="StartContainer for \"39bb3e9c5aa6e7dfe0d061b0cffdb48934642cda8d59b7caba28bf05f1acc424\"" Oct 13 05:51:53.504050 containerd[1643]: time="2025-10-13T05:51:53.504018358Z" level=info msg="connecting to shim 39bb3e9c5aa6e7dfe0d061b0cffdb48934642cda8d59b7caba28bf05f1acc424" address="unix:///run/containerd/s/a90379f7d01ab8b64c7be96cd9522969e8c2caaaf90e73407ce3ccf5aaa7e73c" protocol=ttrpc version=3 Oct 13 05:51:53.505495 containerd[1643]: time="2025-10-13T05:51:53.505448185Z" level=info msg="CreateContainer within sandbox \"f7293e3e455f98be41fe1f0f3cdb76ac8118a6a0d3f9263f3772585979556162\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0e0ae46c212090afbe3c22d8c00e9269855875ca06528f8e5e045cfb6642bbb3\"" Oct 13 05:51:53.505634 containerd[1643]: time="2025-10-13T05:51:53.505619359Z" level=info msg="StartContainer for \"0e0ae46c212090afbe3c22d8c00e9269855875ca06528f8e5e045cfb6642bbb3\"" Oct 13 05:51:53.506491 containerd[1643]: time="2025-10-13T05:51:53.506477730Z" level=info msg="connecting to shim 0e0ae46c212090afbe3c22d8c00e9269855875ca06528f8e5e045cfb6642bbb3" 
address="unix:///run/containerd/s/eb5aadb4e818fca49ed3808b44105bd0ad7caf439a8d4c5f89e51b0351db9715" protocol=ttrpc version=3 Oct 13 05:51:53.507023 containerd[1643]: time="2025-10-13T05:51:53.506897790Z" level=info msg="Container 2bf5cb8aa6d16d6b42395da29105af47ac3e12233bc92691b95bef5a46dbae5c: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:51:53.510063 containerd[1643]: time="2025-10-13T05:51:53.510045745Z" level=info msg="CreateContainer within sandbox \"f0d585cfbecab988a98906fc0370e7cecd517c80500b82599b890b6467912f3b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2bf5cb8aa6d16d6b42395da29105af47ac3e12233bc92691b95bef5a46dbae5c\"" Oct 13 05:51:53.510246 containerd[1643]: time="2025-10-13T05:51:53.510235367Z" level=info msg="StartContainer for \"2bf5cb8aa6d16d6b42395da29105af47ac3e12233bc92691b95bef5a46dbae5c\"" Oct 13 05:51:53.510905 containerd[1643]: time="2025-10-13T05:51:53.510866968Z" level=info msg="connecting to shim 2bf5cb8aa6d16d6b42395da29105af47ac3e12233bc92691b95bef5a46dbae5c" address="unix:///run/containerd/s/8053330adbb9861b1cf70bd968178cf836f749eebd133903597b09d95cf1e1e9" protocol=ttrpc version=3 Oct 13 05:51:53.523620 systemd[1]: Started cri-containerd-39bb3e9c5aa6e7dfe0d061b0cffdb48934642cda8d59b7caba28bf05f1acc424.scope - libcontainer container 39bb3e9c5aa6e7dfe0d061b0cffdb48934642cda8d59b7caba28bf05f1acc424. Oct 13 05:51:53.526917 systemd[1]: Started cri-containerd-0e0ae46c212090afbe3c22d8c00e9269855875ca06528f8e5e045cfb6642bbb3.scope - libcontainer container 0e0ae46c212090afbe3c22d8c00e9269855875ca06528f8e5e045cfb6642bbb3. Oct 13 05:51:53.528374 systemd[1]: Started cri-containerd-2bf5cb8aa6d16d6b42395da29105af47ac3e12233bc92691b95bef5a46dbae5c.scope - libcontainer container 2bf5cb8aa6d16d6b42395da29105af47ac3e12233bc92691b95bef5a46dbae5c. 
Oct 13 05:51:53.544617 kubelet[2547]: E1013 05:51:53.544467 2547 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:51:53.578724 containerd[1643]: time="2025-10-13T05:51:53.578698862Z" level=info msg="StartContainer for \"0e0ae46c212090afbe3c22d8c00e9269855875ca06528f8e5e045cfb6642bbb3\" returns successfully" Oct 13 05:51:53.590431 containerd[1643]: time="2025-10-13T05:51:53.590393654Z" level=info msg="StartContainer for \"2bf5cb8aa6d16d6b42395da29105af47ac3e12233bc92691b95bef5a46dbae5c\" returns successfully" Oct 13 05:51:53.604110 containerd[1643]: time="2025-10-13T05:51:53.603620757Z" level=info msg="StartContainer for \"39bb3e9c5aa6e7dfe0d061b0cffdb48934642cda8d59b7caba28bf05f1acc424\" returns successfully" Oct 13 05:51:53.604674 kubelet[2547]: E1013 05:51:53.604653 2547 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="1.6s" Oct 13 05:51:53.696197 kubelet[2547]: E1013 05:51:53.696172 2547 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:51:53.734743 kubelet[2547]: E1013 05:51:53.734729 2547 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": 
dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:51:53.866235 kubelet[2547]: I1013 05:51:53.866182 2547 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:51:53.866392 kubelet[2547]: E1013 05:51:53.866372 2547 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Oct 13 05:51:54.233952 kubelet[2547]: E1013 05:51:54.233827 2547 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:51:54.236706 kubelet[2547]: E1013 05:51:54.236678 2547 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:51:54.237457 kubelet[2547]: E1013 05:51:54.237446 2547 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:51:55.241918 kubelet[2547]: E1013 05:51:55.241903 2547 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:51:55.242220 kubelet[2547]: E1013 05:51:55.241956 2547 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:51:55.469929 kubelet[2547]: I1013 05:51:55.469910 2547 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:51:55.504870 kubelet[2547]: E1013 05:51:55.504699 2547 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 13 05:51:55.604671 
kubelet[2547]: I1013 05:51:55.604649 2547 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:51:55.604671 kubelet[2547]: E1013 05:51:55.604671 2547 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 13 05:51:55.697168 kubelet[2547]: I1013 05:51:55.697141 2547 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:51:55.701088 kubelet[2547]: E1013 05:51:55.701063 2547 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 05:51:55.701088 kubelet[2547]: I1013 05:51:55.701083 2547 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:51:55.702221 kubelet[2547]: E1013 05:51:55.702205 2547 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:51:55.702221 kubelet[2547]: I1013 05:51:55.702220 2547 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:51:55.703136 kubelet[2547]: E1013 05:51:55.703124 2547 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 13 05:51:56.178265 kubelet[2547]: I1013 05:51:56.178229 2547 apiserver.go:52] "Watching apiserver" Oct 13 05:51:56.200134 kubelet[2547]: I1013 05:51:56.200091 2547 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 05:51:57.517768 systemd[1]: Reload requested from client PID 2835 ('systemctl') (unit 
session-9.scope)... Oct 13 05:51:57.517782 systemd[1]: Reloading... Oct 13 05:51:57.577575 zram_generator::config[2878]: No configuration found. Oct 13 05:51:57.656733 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 13 05:51:57.731752 systemd[1]: Reloading finished in 213 ms. Oct 13 05:51:57.757936 kubelet[2547]: I1013 05:51:57.757891 2547 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:51:57.758058 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:51:57.771295 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 05:51:57.771483 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:51:57.771513 systemd[1]: kubelet.service: Consumed 459ms CPU time, 123.3M memory peak. Oct 13 05:51:57.772759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:51:58.207257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:51:58.217788 (kubelet)[2946]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:51:58.320561 kubelet[2946]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:51:58.320561 kubelet[2946]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 13 05:51:58.320561 kubelet[2946]: I1013 05:51:58.320303 2946 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:51:58.325239 kubelet[2946]: I1013 05:51:58.325189 2946 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 05:51:58.325239 kubelet[2946]: I1013 05:51:58.325208 2946 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:51:58.330399 kubelet[2946]: I1013 05:51:58.329336 2946 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 05:51:58.330399 kubelet[2946]: I1013 05:51:58.329361 2946 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:51:58.330399 kubelet[2946]: I1013 05:51:58.329599 2946 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:51:58.330711 kubelet[2946]: I1013 05:51:58.330700 2946 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 13 05:51:58.333807 kubelet[2946]: I1013 05:51:58.333793 2946 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:51:58.336286 kubelet[2946]: I1013 05:51:58.336269 2946 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:51:58.339128 kubelet[2946]: I1013 05:51:58.339105 2946 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 13 05:51:58.342006 kubelet[2946]: I1013 05:51:58.341978 2946 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:51:58.342301 kubelet[2946]: I1013 05:51:58.342119 2946 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:51:58.342416 kubelet[2946]: I1013 05:51:58.342400 2946 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:51:58.342459 
kubelet[2946]: I1013 05:51:58.342453 2946 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 05:51:58.342527 kubelet[2946]: I1013 05:51:58.342521 2946 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 05:51:58.344260 kubelet[2946]: I1013 05:51:58.344244 2946 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:51:58.344938 kubelet[2946]: I1013 05:51:58.344921 2946 kubelet.go:475] "Attempting to sync node with API server" Oct 13 05:51:58.344938 kubelet[2946]: I1013 05:51:58.344937 2946 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:51:58.345985 kubelet[2946]: I1013 05:51:58.345970 2946 kubelet.go:387] "Adding apiserver pod source" Oct 13 05:51:58.346023 kubelet[2946]: I1013 05:51:58.345987 2946 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:51:58.348555 kubelet[2946]: I1013 05:51:58.347571 2946 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:51:58.348555 kubelet[2946]: I1013 05:51:58.348315 2946 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:51:58.348555 kubelet[2946]: I1013 05:51:58.348333 2946 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 05:51:58.352286 kubelet[2946]: I1013 05:51:58.352268 2946 server.go:1262] "Started kubelet" Oct 13 05:51:58.357242 kubelet[2946]: I1013 05:51:58.356091 2946 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:51:58.367587 kubelet[2946]: I1013 05:51:58.367565 2946 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:51:58.367737 kubelet[2946]: I1013 05:51:58.367726 2946 server_v1.go:49] 
"podresources" method="list" useActivePods=true Oct 13 05:51:58.369029 kubelet[2946]: I1013 05:51:58.369014 2946 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:51:58.369803 kubelet[2946]: I1013 05:51:58.369772 2946 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:51:58.374571 kubelet[2946]: I1013 05:51:58.373344 2946 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:51:58.374571 kubelet[2946]: I1013 05:51:58.373847 2946 server.go:310] "Adding debug handlers to kubelet server" Oct 13 05:51:58.375502 kubelet[2946]: I1013 05:51:58.375488 2946 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 05:51:58.376943 kubelet[2946]: I1013 05:51:58.376933 2946 reconciler.go:29] "Reconciler: start to sync state" Oct 13 05:51:58.377049 kubelet[2946]: I1013 05:51:58.377043 2946 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 05:51:58.378662 kubelet[2946]: I1013 05:51:58.378639 2946 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 05:51:58.379465 kubelet[2946]: I1013 05:51:58.379451 2946 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 13 05:51:58.379465 kubelet[2946]: I1013 05:51:58.379463 2946 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 05:51:58.379560 kubelet[2946]: I1013 05:51:58.379477 2946 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 05:51:58.379560 kubelet[2946]: E1013 05:51:58.379498 2946 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:51:58.380613 kubelet[2946]: E1013 05:51:58.380602 2946 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:51:58.381125 kubelet[2946]: I1013 05:51:58.381116 2946 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:51:58.381256 kubelet[2946]: I1013 05:51:58.381170 2946 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:51:58.381452 kubelet[2946]: I1013 05:51:58.381440 2946 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:51:58.416023 kubelet[2946]: I1013 05:51:58.416008 2946 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:51:58.416198 kubelet[2946]: I1013 05:51:58.416190 2946 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:51:58.416240 kubelet[2946]: I1013 05:51:58.416236 2946 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:51:58.416351 kubelet[2946]: I1013 05:51:58.416344 2946 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 05:51:58.416890 kubelet[2946]: I1013 05:51:58.416795 2946 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 05:51:58.416943 kubelet[2946]: I1013 05:51:58.416937 2946 policy_none.go:49] "None policy: Start" Oct 13 05:51:58.417027 kubelet[2946]: I1013 05:51:58.417021 2946 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 05:51:58.417145 kubelet[2946]: I1013 05:51:58.417139 2946 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 05:51:58.417330 kubelet[2946]: I1013 05:51:58.417253 2946 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 13 05:51:58.417426 kubelet[2946]: I1013 05:51:58.417414 2946 policy_none.go:47] "Start" Oct 13 05:51:58.420180 kubelet[2946]: E1013 05:51:58.420170 2946 manager.go:513] "Failed to read data from checkpoint" 
err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:51:58.420428 kubelet[2946]: I1013 05:51:58.420420 2946 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:51:58.420739 kubelet[2946]: I1013 05:51:58.420724 2946 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:51:58.421039 kubelet[2946]: I1013 05:51:58.421032 2946 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:51:58.425549 kubelet[2946]: E1013 05:51:58.425078 2946 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:51:58.480995 kubelet[2946]: I1013 05:51:58.480933 2946 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:51:58.481298 kubelet[2946]: I1013 05:51:58.481291 2946 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:51:58.482803 kubelet[2946]: I1013 05:51:58.481712 2946 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:51:58.524614 kubelet[2946]: I1013 05:51:58.524576 2946 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:51:58.529737 kubelet[2946]: I1013 05:51:58.529634 2946 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 13 05:51:58.530068 kubelet[2946]: I1013 05:51:58.530057 2946 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:51:58.577756 kubelet[2946]: I1013 05:51:58.577538 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 13 05:51:58.577756 kubelet[2946]: I1013 05:51:58.577577 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:51:58.577756 kubelet[2946]: I1013 05:51:58.577596 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:51:58.577756 kubelet[2946]: I1013 05:51:58.577612 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:51:58.577756 kubelet[2946]: I1013 05:51:58.577628 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:51:58.577993 kubelet[2946]: I1013 05:51:58.577643 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:51:58.577993 kubelet[2946]: I1013 05:51:58.577668 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e412733ae7e9e3b7df6d21449fc6893f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e412733ae7e9e3b7df6d21449fc6893f\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:51:58.577993 kubelet[2946]: I1013 05:51:58.577683 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e412733ae7e9e3b7df6d21449fc6893f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e412733ae7e9e3b7df6d21449fc6893f\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:51:58.577993 kubelet[2946]: I1013 05:51:58.577698 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e412733ae7e9e3b7df6d21449fc6893f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e412733ae7e9e3b7df6d21449fc6893f\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:51:59.353671 kubelet[2946]: I1013 05:51:59.353647 2946 apiserver.go:52] "Watching apiserver" Oct 13 05:51:59.378613 kubelet[2946]: I1013 05:51:59.378582 2946 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 05:51:59.406887 kubelet[2946]: I1013 05:51:59.406290 2946 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:51:59.406887 kubelet[2946]: I1013 05:51:59.406599 2946 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:51:59.409040 kubelet[2946]: E1013 05:51:59.409014 2946 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" 
already exists" pod="kube-system/kube-scheduler-localhost" Oct 13 05:51:59.411624 kubelet[2946]: E1013 05:51:59.411608 2946 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 05:51:59.431776 kubelet[2946]: I1013 05:51:59.431730 2946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.431717957 podStartE2EDuration="1.431717957s" podCreationTimestamp="2025-10-13 05:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:51:59.431082374 +0000 UTC m=+1.188767115" watchObservedRunningTime="2025-10-13 05:51:59.431717957 +0000 UTC m=+1.189402699" Oct 13 05:51:59.437927 kubelet[2946]: I1013 05:51:59.437888 2946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.437877733 podStartE2EDuration="1.437877733s" podCreationTimestamp="2025-10-13 05:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:51:59.437501312 +0000 UTC m=+1.195186057" watchObservedRunningTime="2025-10-13 05:51:59.437877733 +0000 UTC m=+1.195562476" Oct 13 05:51:59.442883 kubelet[2946]: I1013 05:51:59.442793 2946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.442784281 podStartE2EDuration="1.442784281s" podCreationTimestamp="2025-10-13 05:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:51:59.442662694 +0000 UTC m=+1.200347439" watchObservedRunningTime="2025-10-13 05:51:59.442784281 +0000 UTC m=+1.200469022" Oct 13 05:52:04.657146 kubelet[2946]: I1013 
05:52:04.657122 2946 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 05:52:04.657823 containerd[1643]: time="2025-10-13T05:52:04.657793221Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 05:52:04.658162 kubelet[2946]: I1013 05:52:04.657910 2946 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:52:05.346002 systemd[1]: Created slice kubepods-besteffort-pod8fa15c2e_b7a9_4ab6_96b9_c5ca2b237ecf.slice - libcontainer container kubepods-besteffort-pod8fa15c2e_b7a9_4ab6_96b9_c5ca2b237ecf.slice. Oct 13 05:52:05.422549 kubelet[2946]: I1013 05:52:05.422440 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8fa15c2e-b7a9-4ab6-96b9-c5ca2b237ecf-kube-proxy\") pod \"kube-proxy-5592c\" (UID: \"8fa15c2e-b7a9-4ab6-96b9-c5ca2b237ecf\") " pod="kube-system/kube-proxy-5592c" Oct 13 05:52:05.422549 kubelet[2946]: I1013 05:52:05.422468 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fa15c2e-b7a9-4ab6-96b9-c5ca2b237ecf-xtables-lock\") pod \"kube-proxy-5592c\" (UID: \"8fa15c2e-b7a9-4ab6-96b9-c5ca2b237ecf\") " pod="kube-system/kube-proxy-5592c" Oct 13 05:52:05.422549 kubelet[2946]: I1013 05:52:05.422479 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fa15c2e-b7a9-4ab6-96b9-c5ca2b237ecf-lib-modules\") pod \"kube-proxy-5592c\" (UID: \"8fa15c2e-b7a9-4ab6-96b9-c5ca2b237ecf\") " pod="kube-system/kube-proxy-5592c" Oct 13 05:52:05.422549 kubelet[2946]: I1013 05:52:05.422489 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbzjl\" 
(UniqueName: \"kubernetes.io/projected/8fa15c2e-b7a9-4ab6-96b9-c5ca2b237ecf-kube-api-access-hbzjl\") pod \"kube-proxy-5592c\" (UID: \"8fa15c2e-b7a9-4ab6-96b9-c5ca2b237ecf\") " pod="kube-system/kube-proxy-5592c" Oct 13 05:52:05.532875 kubelet[2946]: E1013 05:52:05.532649 2946 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 13 05:52:05.532875 kubelet[2946]: E1013 05:52:05.532675 2946 projected.go:196] Error preparing data for projected volume kube-api-access-hbzjl for pod kube-system/kube-proxy-5592c: configmap "kube-root-ca.crt" not found Oct 13 05:52:05.532875 kubelet[2946]: E1013 05:52:05.532744 2946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8fa15c2e-b7a9-4ab6-96b9-c5ca2b237ecf-kube-api-access-hbzjl podName:8fa15c2e-b7a9-4ab6-96b9-c5ca2b237ecf nodeName:}" failed. No retries permitted until 2025-10-13 05:52:06.032720698 +0000 UTC m=+7.790405437 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hbzjl" (UniqueName: "kubernetes.io/projected/8fa15c2e-b7a9-4ab6-96b9-c5ca2b237ecf-kube-api-access-hbzjl") pod "kube-proxy-5592c" (UID: "8fa15c2e-b7a9-4ab6-96b9-c5ca2b237ecf") : configmap "kube-root-ca.crt" not found Oct 13 05:52:05.876903 systemd[1]: Created slice kubepods-besteffort-pod9ba9dd1f_8542_41dc_ac8a_7d044f8480c3.slice - libcontainer container kubepods-besteffort-pod9ba9dd1f_8542_41dc_ac8a_7d044f8480c3.slice. 
Oct 13 05:52:05.926234 kubelet[2946]: I1013 05:52:05.926145 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9ba9dd1f-8542-41dc-ac8a-7d044f8480c3-var-lib-calico\") pod \"tigera-operator-db78d5bd4-rr7ch\" (UID: \"9ba9dd1f-8542-41dc-ac8a-7d044f8480c3\") " pod="tigera-operator/tigera-operator-db78d5bd4-rr7ch" Oct 13 05:52:05.926234 kubelet[2946]: I1013 05:52:05.926201 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vcxl\" (UniqueName: \"kubernetes.io/projected/9ba9dd1f-8542-41dc-ac8a-7d044f8480c3-kube-api-access-8vcxl\") pod \"tigera-operator-db78d5bd4-rr7ch\" (UID: \"9ba9dd1f-8542-41dc-ac8a-7d044f8480c3\") " pod="tigera-operator/tigera-operator-db78d5bd4-rr7ch" Oct 13 05:52:06.183247 containerd[1643]: time="2025-10-13T05:52:06.183174841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-rr7ch,Uid:9ba9dd1f-8542-41dc-ac8a-7d044f8480c3,Namespace:tigera-operator,Attempt:0,}" Oct 13 05:52:06.199441 containerd[1643]: time="2025-10-13T05:52:06.199409320Z" level=info msg="connecting to shim 21e7a1980841b0da18125f96cc7d51db179eedc2b35de0bad78ad82b5f6f9158" address="unix:///run/containerd/s/8521d8d16e8086518d6809066a126aaa95dbeef491dfe4ac9fcd27e37fe119b0" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:06.225688 systemd[1]: Started cri-containerd-21e7a1980841b0da18125f96cc7d51db179eedc2b35de0bad78ad82b5f6f9158.scope - libcontainer container 21e7a1980841b0da18125f96cc7d51db179eedc2b35de0bad78ad82b5f6f9158. 
Oct 13 05:52:06.264051 containerd[1643]: time="2025-10-13T05:52:06.263956435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5592c,Uid:8fa15c2e-b7a9-4ab6-96b9-c5ca2b237ecf,Namespace:kube-system,Attempt:0,}" Oct 13 05:52:06.267009 containerd[1643]: time="2025-10-13T05:52:06.266994314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-rr7ch,Uid:9ba9dd1f-8542-41dc-ac8a-7d044f8480c3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"21e7a1980841b0da18125f96cc7d51db179eedc2b35de0bad78ad82b5f6f9158\"" Oct 13 05:52:06.268085 containerd[1643]: time="2025-10-13T05:52:06.268067066Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 05:52:06.346376 containerd[1643]: time="2025-10-13T05:52:06.346338429Z" level=info msg="connecting to shim b4fb4b569dc47e48331274964b1f0352a61da3741656ff5e83f74ef767f024eb" address="unix:///run/containerd/s/921b1b67f39b5dfb74b3a13c53c98199e2cc812fddf8405b8c90d74281f5d0ee" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:06.366649 systemd[1]: Started cri-containerd-b4fb4b569dc47e48331274964b1f0352a61da3741656ff5e83f74ef767f024eb.scope - libcontainer container b4fb4b569dc47e48331274964b1f0352a61da3741656ff5e83f74ef767f024eb. 
Oct 13 05:52:06.386047 containerd[1643]: time="2025-10-13T05:52:06.386000372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5592c,Uid:8fa15c2e-b7a9-4ab6-96b9-c5ca2b237ecf,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4fb4b569dc47e48331274964b1f0352a61da3741656ff5e83f74ef767f024eb\"" Oct 13 05:52:06.394192 containerd[1643]: time="2025-10-13T05:52:06.393744618Z" level=info msg="CreateContainer within sandbox \"b4fb4b569dc47e48331274964b1f0352a61da3741656ff5e83f74ef767f024eb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:52:06.444978 containerd[1643]: time="2025-10-13T05:52:06.444784198Z" level=info msg="Container c46cb44a9f71afcf0774c393f58ebecde41d98aed01878b72676b554e2d0632c: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:06.469777 containerd[1643]: time="2025-10-13T05:52:06.469263520Z" level=info msg="CreateContainer within sandbox \"b4fb4b569dc47e48331274964b1f0352a61da3741656ff5e83f74ef767f024eb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c46cb44a9f71afcf0774c393f58ebecde41d98aed01878b72676b554e2d0632c\"" Oct 13 05:52:06.470577 containerd[1643]: time="2025-10-13T05:52:06.470387570Z" level=info msg="StartContainer for \"c46cb44a9f71afcf0774c393f58ebecde41d98aed01878b72676b554e2d0632c\"" Oct 13 05:52:06.473803 containerd[1643]: time="2025-10-13T05:52:06.473770911Z" level=info msg="connecting to shim c46cb44a9f71afcf0774c393f58ebecde41d98aed01878b72676b554e2d0632c" address="unix:///run/containerd/s/921b1b67f39b5dfb74b3a13c53c98199e2cc812fddf8405b8c90d74281f5d0ee" protocol=ttrpc version=3 Oct 13 05:52:06.491633 systemd[1]: Started cri-containerd-c46cb44a9f71afcf0774c393f58ebecde41d98aed01878b72676b554e2d0632c.scope - libcontainer container c46cb44a9f71afcf0774c393f58ebecde41d98aed01878b72676b554e2d0632c. 
Oct 13 05:52:06.518123 containerd[1643]: time="2025-10-13T05:52:06.518097336Z" level=info msg="StartContainer for \"c46cb44a9f71afcf0774c393f58ebecde41d98aed01878b72676b554e2d0632c\" returns successfully" Oct 13 05:52:07.430493 kubelet[2946]: I1013 05:52:07.430459 2946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5592c" podStartSLOduration=2.43044906 podStartE2EDuration="2.43044906s" podCreationTimestamp="2025-10-13 05:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:52:07.43017158 +0000 UTC m=+9.187856328" watchObservedRunningTime="2025-10-13 05:52:07.43044906 +0000 UTC m=+9.188133800" Oct 13 05:52:07.673231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount479667652.mount: Deactivated successfully. Oct 13 05:52:08.303569 containerd[1643]: time="2025-10-13T05:52:08.303474658Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:08.304225 containerd[1643]: time="2025-10-13T05:52:08.304130089Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Oct 13 05:52:08.305686 containerd[1643]: time="2025-10-13T05:52:08.305075593Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:08.306170 containerd[1643]: time="2025-10-13T05:52:08.306151981Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:08.306725 containerd[1643]: time="2025-10-13T05:52:08.306709243Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id 
\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.038620187s" Oct 13 05:52:08.306788 containerd[1643]: time="2025-10-13T05:52:08.306778051Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Oct 13 05:52:08.308816 containerd[1643]: time="2025-10-13T05:52:08.308776702Z" level=info msg="CreateContainer within sandbox \"21e7a1980841b0da18125f96cc7d51db179eedc2b35de0bad78ad82b5f6f9158\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 05:52:08.316299 containerd[1643]: time="2025-10-13T05:52:08.315719457Z" level=info msg="Container 1bda337426dd8b81b7da246749dc54fd48d3d6261db9607d6d077b9d063f0f9d: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:08.315758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2859772453.mount: Deactivated successfully. 
Oct 13 05:52:08.337580 containerd[1643]: time="2025-10-13T05:52:08.337550177Z" level=info msg="CreateContainer within sandbox \"21e7a1980841b0da18125f96cc7d51db179eedc2b35de0bad78ad82b5f6f9158\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1bda337426dd8b81b7da246749dc54fd48d3d6261db9607d6d077b9d063f0f9d\"" Oct 13 05:52:08.338063 containerd[1643]: time="2025-10-13T05:52:08.338047050Z" level=info msg="StartContainer for \"1bda337426dd8b81b7da246749dc54fd48d3d6261db9607d6d077b9d063f0f9d\"" Oct 13 05:52:08.338683 containerd[1643]: time="2025-10-13T05:52:08.338664494Z" level=info msg="connecting to shim 1bda337426dd8b81b7da246749dc54fd48d3d6261db9607d6d077b9d063f0f9d" address="unix:///run/containerd/s/8521d8d16e8086518d6809066a126aaa95dbeef491dfe4ac9fcd27e37fe119b0" protocol=ttrpc version=3 Oct 13 05:52:08.359710 systemd[1]: Started cri-containerd-1bda337426dd8b81b7da246749dc54fd48d3d6261db9607d6d077b9d063f0f9d.scope - libcontainer container 1bda337426dd8b81b7da246749dc54fd48d3d6261db9607d6d077b9d063f0f9d. Oct 13 05:52:08.381440 containerd[1643]: time="2025-10-13T05:52:08.381337595Z" level=info msg="StartContainer for \"1bda337426dd8b81b7da246749dc54fd48d3d6261db9607d6d077b9d063f0f9d\" returns successfully" Oct 13 05:52:14.283560 sudo[1958]: pam_unix(sudo:session): session closed for user root Oct 13 05:52:14.286140 sshd[1957]: Connection closed by 147.75.109.163 port 44854 Oct 13 05:52:14.286077 sshd-session[1954]: pam_unix(sshd:session): session closed for user core Oct 13 05:52:14.290190 systemd[1]: sshd@6-139.178.70.106:22-147.75.109.163:44854.service: Deactivated successfully. Oct 13 05:52:14.290283 systemd-logind[1616]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:52:14.293429 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:52:14.293707 systemd[1]: session-9.scope: Consumed 3.831s CPU time, 160.3M memory peak. Oct 13 05:52:14.296127 systemd-logind[1616]: Removed session 9. 
Oct 13 05:52:16.920543 kubelet[2946]: I1013 05:52:16.920302 2946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-db78d5bd4-rr7ch" podStartSLOduration=9.880904954 podStartE2EDuration="11.920291359s" podCreationTimestamp="2025-10-13 05:52:05 +0000 UTC" firstStartedPulling="2025-10-13 05:52:06.267853817 +0000 UTC m=+8.025538553" lastFinishedPulling="2025-10-13 05:52:08.307240222 +0000 UTC m=+10.064924958" observedRunningTime="2025-10-13 05:52:08.433042339 +0000 UTC m=+10.190727086" watchObservedRunningTime="2025-10-13 05:52:16.920291359 +0000 UTC m=+18.677976101" Oct 13 05:52:16.939066 systemd[1]: Created slice kubepods-besteffort-podf30d0143_92dc_4ad6_8ebe_35834d8422ef.slice - libcontainer container kubepods-besteffort-podf30d0143_92dc_4ad6_8ebe_35834d8422ef.slice. Oct 13 05:52:17.009256 kubelet[2946]: I1013 05:52:17.009215 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f30d0143-92dc-4ad6-8ebe-35834d8422ef-typha-certs\") pod \"calico-typha-545886cc87-dkh58\" (UID: \"f30d0143-92dc-4ad6-8ebe-35834d8422ef\") " pod="calico-system/calico-typha-545886cc87-dkh58" Oct 13 05:52:17.009256 kubelet[2946]: I1013 05:52:17.009249 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f30d0143-92dc-4ad6-8ebe-35834d8422ef-tigera-ca-bundle\") pod \"calico-typha-545886cc87-dkh58\" (UID: \"f30d0143-92dc-4ad6-8ebe-35834d8422ef\") " pod="calico-system/calico-typha-545886cc87-dkh58" Oct 13 05:52:17.009256 kubelet[2946]: I1013 05:52:17.009264 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxlc2\" (UniqueName: \"kubernetes.io/projected/f30d0143-92dc-4ad6-8ebe-35834d8422ef-kube-api-access-sxlc2\") pod \"calico-typha-545886cc87-dkh58\" (UID: 
\"f30d0143-92dc-4ad6-8ebe-35834d8422ef\") " pod="calico-system/calico-typha-545886cc87-dkh58" Oct 13 05:52:17.154583 systemd[1]: Created slice kubepods-besteffort-podf3f45e97_6237_4e04_938a_662f583161ff.slice - libcontainer container kubepods-besteffort-podf3f45e97_6237_4e04_938a_662f583161ff.slice. Oct 13 05:52:17.210728 kubelet[2946]: I1013 05:52:17.210652 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f3f45e97-6237-4e04-938a-662f583161ff-cni-bin-dir\") pod \"calico-node-rx7wn\" (UID: \"f3f45e97-6237-4e04-938a-662f583161ff\") " pod="calico-system/calico-node-rx7wn" Oct 13 05:52:17.210880 kubelet[2946]: I1013 05:52:17.210867 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f3f45e97-6237-4e04-938a-662f583161ff-var-lib-calico\") pod \"calico-node-rx7wn\" (UID: \"f3f45e97-6237-4e04-938a-662f583161ff\") " pod="calico-system/calico-node-rx7wn" Oct 13 05:52:17.210954 kubelet[2946]: I1013 05:52:17.210942 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f3f45e97-6237-4e04-938a-662f583161ff-var-run-calico\") pod \"calico-node-rx7wn\" (UID: \"f3f45e97-6237-4e04-938a-662f583161ff\") " pod="calico-system/calico-node-rx7wn" Oct 13 05:52:17.211046 kubelet[2946]: I1013 05:52:17.211036 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f3f45e97-6237-4e04-938a-662f583161ff-node-certs\") pod \"calico-node-rx7wn\" (UID: \"f3f45e97-6237-4e04-938a-662f583161ff\") " pod="calico-system/calico-node-rx7wn" Oct 13 05:52:17.211127 kubelet[2946]: I1013 05:52:17.211112 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f3f45e97-6237-4e04-938a-662f583161ff-cni-log-dir\") pod \"calico-node-rx7wn\" (UID: \"f3f45e97-6237-4e04-938a-662f583161ff\") " pod="calico-system/calico-node-rx7wn" Oct 13 05:52:17.211224 kubelet[2946]: I1013 05:52:17.211214 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3f45e97-6237-4e04-938a-662f583161ff-lib-modules\") pod \"calico-node-rx7wn\" (UID: \"f3f45e97-6237-4e04-938a-662f583161ff\") " pod="calico-system/calico-node-rx7wn" Oct 13 05:52:17.211364 kubelet[2946]: I1013 05:52:17.211299 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3f45e97-6237-4e04-938a-662f583161ff-xtables-lock\") pod \"calico-node-rx7wn\" (UID: \"f3f45e97-6237-4e04-938a-662f583161ff\") " pod="calico-system/calico-node-rx7wn" Oct 13 05:52:17.211364 kubelet[2946]: I1013 05:52:17.211315 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4s6d\" (UniqueName: \"kubernetes.io/projected/f3f45e97-6237-4e04-938a-662f583161ff-kube-api-access-f4s6d\") pod \"calico-node-rx7wn\" (UID: \"f3f45e97-6237-4e04-938a-662f583161ff\") " pod="calico-system/calico-node-rx7wn" Oct 13 05:52:17.211364 kubelet[2946]: I1013 05:52:17.211334 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f3f45e97-6237-4e04-938a-662f583161ff-flexvol-driver-host\") pod \"calico-node-rx7wn\" (UID: \"f3f45e97-6237-4e04-938a-662f583161ff\") " pod="calico-system/calico-node-rx7wn" Oct 13 05:52:17.211577 kubelet[2946]: I1013 05:52:17.211350 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f3f45e97-6237-4e04-938a-662f583161ff-tigera-ca-bundle\") pod \"calico-node-rx7wn\" (UID: \"f3f45e97-6237-4e04-938a-662f583161ff\") " pod="calico-system/calico-node-rx7wn" Oct 13 05:52:17.211577 kubelet[2946]: I1013 05:52:17.211477 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f3f45e97-6237-4e04-938a-662f583161ff-cni-net-dir\") pod \"calico-node-rx7wn\" (UID: \"f3f45e97-6237-4e04-938a-662f583161ff\") " pod="calico-system/calico-node-rx7wn" Oct 13 05:52:17.211577 kubelet[2946]: I1013 05:52:17.211494 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f3f45e97-6237-4e04-938a-662f583161ff-policysync\") pod \"calico-node-rx7wn\" (UID: \"f3f45e97-6237-4e04-938a-662f583161ff\") " pod="calico-system/calico-node-rx7wn" Oct 13 05:52:17.243240 containerd[1643]: time="2025-10-13T05:52:17.243167868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-545886cc87-dkh58,Uid:f30d0143-92dc-4ad6-8ebe-35834d8422ef,Namespace:calico-system,Attempt:0,}" Oct 13 05:52:17.255285 containerd[1643]: time="2025-10-13T05:52:17.254934227Z" level=info msg="connecting to shim c482d9d67ec43fa395180f1702de0b7edd8bfb726ad9966d014f04be8aa0dcd9" address="unix:///run/containerd/s/763856834076bd2f89c58d92e80342c1bd7fc61a0e7ada05b887e81bc1e22a49" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:17.278686 systemd[1]: Started cri-containerd-c482d9d67ec43fa395180f1702de0b7edd8bfb726ad9966d014f04be8aa0dcd9.scope - libcontainer container c482d9d67ec43fa395180f1702de0b7edd8bfb726ad9966d014f04be8aa0dcd9. 
Oct 13 05:52:17.322506 kubelet[2946]: E1013 05:52:17.322450 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.322506 kubelet[2946]: W1013 05:52:17.322464 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.322506 kubelet[2946]: E1013 05:52:17.322481 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:52:17.327192 kubelet[2946]: E1013 05:52:17.327170 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.327262 kubelet[2946]: W1013 05:52:17.327186 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.327262 kubelet[2946]: E1013 05:52:17.327214 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.334032 containerd[1643]: time="2025-10-13T05:52:17.333596470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-545886cc87-dkh58,Uid:f30d0143-92dc-4ad6-8ebe-35834d8422ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"c482d9d67ec43fa395180f1702de0b7edd8bfb726ad9966d014f04be8aa0dcd9\"" Oct 13 05:52:17.335009 containerd[1643]: time="2025-10-13T05:52:17.334478320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 05:52:17.399622 kubelet[2946]: E1013 05:52:17.399578 2946 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hqlq" podUID="5c9445f8-2a83-4f1a-ae81-7a18fbffe769" Oct 13 05:52:17.462932 containerd[1643]: time="2025-10-13T05:52:17.462702189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rx7wn,Uid:f3f45e97-6237-4e04-938a-662f583161ff,Namespace:calico-system,Attempt:0,}" Oct 13 05:52:17.492149 kubelet[2946]: E1013 05:52:17.492124 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.492255 kubelet[2946]: W1013 05:52:17.492245 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.492298 kubelet[2946]: E1013 05:52:17.492291 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.492460 kubelet[2946]: E1013 05:52:17.492429 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.492460 kubelet[2946]: W1013 05:52:17.492435 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.492460 kubelet[2946]: E1013 05:52:17.492441 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:52:17.492670 kubelet[2946]: E1013 05:52:17.492639 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.492670 kubelet[2946]: W1013 05:52:17.492645 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.492670 kubelet[2946]: E1013 05:52:17.492651 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.492883 kubelet[2946]: E1013 05:52:17.492846 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.492883 kubelet[2946]: W1013 05:52:17.492852 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.492883 kubelet[2946]: E1013 05:52:17.492858 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:52:17.493033 kubelet[2946]: E1013 05:52:17.493023 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.493090 kubelet[2946]: W1013 05:52:17.493065 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.493090 kubelet[2946]: E1013 05:52:17.493073 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.513402 kubelet[2946]: I1013 05:52:17.513375 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5c9445f8-2a83-4f1a-ae81-7a18fbffe769-varrun\") pod \"csi-node-driver-6hqlq\" (UID: \"5c9445f8-2a83-4f1a-ae81-7a18fbffe769\") " pod="calico-system/csi-node-driver-6hqlq" Oct 13 05:52:17.513562 kubelet[2946]: E1013 05:52:17.513556 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.513595 kubelet[2946]: W1013 05:52:17.513589 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.513667 kubelet[2946]: E1013 05:52:17.513626 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.513667 kubelet[2946]: I1013 05:52:17.513643 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c9445f8-2a83-4f1a-ae81-7a18fbffe769-kubelet-dir\") pod \"csi-node-driver-6hqlq\" (UID: \"5c9445f8-2a83-4f1a-ae81-7a18fbffe769\") " pod="calico-system/csi-node-driver-6hqlq" Oct 13 05:52:17.513756 kubelet[2946]: E1013 05:52:17.513743 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.513756 kubelet[2946]: W1013 05:52:17.513754 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.513826 kubelet[2946]: E1013 05:52:17.513762 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:52:17.513851 kubelet[2946]: E1013 05:52:17.513834 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.513879 kubelet[2946]: W1013 05:52:17.513851 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.513879 kubelet[2946]: E1013 05:52:17.513859 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.513943 kubelet[2946]: E1013 05:52:17.513938 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.513943 kubelet[2946]: W1013 05:52:17.513943 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.514043 kubelet[2946]: E1013 05:52:17.513948 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:52:17.514043 kubelet[2946]: I1013 05:52:17.513959 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mmmj\" (UniqueName: \"kubernetes.io/projected/5c9445f8-2a83-4f1a-ae81-7a18fbffe769-kube-api-access-5mmmj\") pod \"csi-node-driver-6hqlq\" (UID: \"5c9445f8-2a83-4f1a-ae81-7a18fbffe769\") " pod="calico-system/csi-node-driver-6hqlq" Oct 13 05:52:17.514150 kubelet[2946]: E1013 05:52:17.514108 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.514150 kubelet[2946]: W1013 05:52:17.514115 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.514150 kubelet[2946]: E1013 05:52:17.514121 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.514485 kubelet[2946]: I1013 05:52:17.514472 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5c9445f8-2a83-4f1a-ae81-7a18fbffe769-socket-dir\") pod \"csi-node-driver-6hqlq\" (UID: \"5c9445f8-2a83-4f1a-ae81-7a18fbffe769\") " pod="calico-system/csi-node-driver-6hqlq" Oct 13 05:52:17.514581 kubelet[2946]: E1013 05:52:17.514566 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.514581 kubelet[2946]: W1013 05:52:17.514572 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.514581 kubelet[2946]: E1013 05:52:17.514577 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:52:17.514743 kubelet[2946]: E1013 05:52:17.514660 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.514743 kubelet[2946]: W1013 05:52:17.514664 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.514743 kubelet[2946]: E1013 05:52:17.514669 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.514743 kubelet[2946]: E1013 05:52:17.514738 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.514743 kubelet[2946]: W1013 05:52:17.514743 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.514892 kubelet[2946]: E1013 05:52:17.514748 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:52:17.514892 kubelet[2946]: I1013 05:52:17.514758 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5c9445f8-2a83-4f1a-ae81-7a18fbffe769-registration-dir\") pod \"csi-node-driver-6hqlq\" (UID: \"5c9445f8-2a83-4f1a-ae81-7a18fbffe769\") " pod="calico-system/csi-node-driver-6hqlq" Oct 13 05:52:17.515007 kubelet[2946]: E1013 05:52:17.514947 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.515007 kubelet[2946]: W1013 05:52:17.514954 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.515007 kubelet[2946]: E1013 05:52:17.514959 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.515395 kubelet[2946]: E1013 05:52:17.515376 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.515395 kubelet[2946]: W1013 05:52:17.515382 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.515395 kubelet[2946]: E1013 05:52:17.515387 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:52:17.572294 containerd[1643]: time="2025-10-13T05:52:17.572190535Z" level=info msg="connecting to shim 251196e69d0b89ccc938485a2664143d468affacc8d2e7c296cb8d76a3e7feed" address="unix:///run/containerd/s/0f16b1b1dfcd2fa458b36c99888aead6a9e7aaef070ee1646f0e5c360546dc95" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:17.594743 systemd[1]: Started cri-containerd-251196e69d0b89ccc938485a2664143d468affacc8d2e7c296cb8d76a3e7feed.scope - libcontainer container 251196e69d0b89ccc938485a2664143d468affacc8d2e7c296cb8d76a3e7feed. Oct 13 05:52:17.615763 kubelet[2946]: E1013 05:52:17.615724 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.615763 kubelet[2946]: W1013 05:52:17.615737 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.616033 kubelet[2946]: E1013 05:52:17.615878 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.616180 kubelet[2946]: E1013 05:52:17.616160 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.616180 kubelet[2946]: W1013 05:52:17.616165 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.616180 kubelet[2946]: E1013 05:52:17.616171 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:52:17.616451 kubelet[2946]: E1013 05:52:17.616412 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.616451 kubelet[2946]: W1013 05:52:17.616418 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.616451 kubelet[2946]: E1013 05:52:17.616423 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.616626 kubelet[2946]: E1013 05:52:17.616620 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.617141 kubelet[2946]: W1013 05:52:17.616661 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.617141 kubelet[2946]: E1013 05:52:17.616669 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:52:17.617141 kubelet[2946]: E1013 05:52:17.617054 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.617141 kubelet[2946]: W1013 05:52:17.617059 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.617141 kubelet[2946]: E1013 05:52:17.617066 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.617307 kubelet[2946]: E1013 05:52:17.617257 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.617307 kubelet[2946]: W1013 05:52:17.617263 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.617307 kubelet[2946]: E1013 05:52:17.617268 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:52:17.617647 kubelet[2946]: E1013 05:52:17.617419 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.617647 kubelet[2946]: W1013 05:52:17.617425 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.617647 kubelet[2946]: E1013 05:52:17.617430 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.617789 kubelet[2946]: E1013 05:52:17.617736 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.617789 kubelet[2946]: W1013 05:52:17.617743 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.617789 kubelet[2946]: E1013 05:52:17.617748 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:52:17.617954 kubelet[2946]: E1013 05:52:17.617904 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.617954 kubelet[2946]: W1013 05:52:17.617909 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.617954 kubelet[2946]: E1013 05:52:17.617914 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.618079 kubelet[2946]: E1013 05:52:17.618072 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.618167 kubelet[2946]: W1013 05:52:17.618122 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.618167 kubelet[2946]: E1013 05:52:17.618131 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:52:17.618263 kubelet[2946]: E1013 05:52:17.618257 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.618319 kubelet[2946]: W1013 05:52:17.618292 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.618319 kubelet[2946]: E1013 05:52:17.618299 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.618419 kubelet[2946]: E1013 05:52:17.618413 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.618512 kubelet[2946]: W1013 05:52:17.618463 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.618512 kubelet[2946]: E1013 05:52:17.618471 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:52:17.618706 kubelet[2946]: E1013 05:52:17.618657 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.618706 kubelet[2946]: W1013 05:52:17.618663 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.618706 kubelet[2946]: E1013 05:52:17.618668 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.618856 kubelet[2946]: E1013 05:52:17.618850 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.618908 kubelet[2946]: W1013 05:52:17.618891 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.618908 kubelet[2946]: E1013 05:52:17.618899 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:52:17.619074 kubelet[2946]: E1013 05:52:17.619057 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.619074 kubelet[2946]: W1013 05:52:17.619063 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.619074 kubelet[2946]: E1013 05:52:17.619068 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:17.636626 kubelet[2946]: E1013 05:52:17.636612 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:17.636800 kubelet[2946]: W1013 05:52:17.636791 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:17.636914 kubelet[2946]: E1013 05:52:17.636905 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:52:17.645048 containerd[1643]: time="2025-10-13T05:52:17.644978385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rx7wn,Uid:f3f45e97-6237-4e04-938a-662f583161ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"251196e69d0b89ccc938485a2664143d468affacc8d2e7c296cb8d76a3e7feed\"" Oct 13 05:52:18.840796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1990816291.mount: Deactivated successfully. 
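The repeated driver-call failures above come from the kubelet's FlexVolume plugin probe: it executes each driver binary found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument `init` and expects a JSON status object on stdout. Because the `nodeagent~uds/uds` executable is missing from $PATH, the call produces empty output and the JSON unmarshal fails with "unexpected end of JSON input". As a sketch only (this is the documented FlexVolume call contract, not the actual `uds` driver), a driver that satisfies the `init` probe looks roughly like this:

```shell
#!/bin/sh
# flexvol_init: what a FlexVolume driver must print in response to the
# kubelet's "init" call -- a JSON status object on stdout.
flexvol_init() {
  # "attach": false tells the kubelet this driver has no attach/detach phase.
  echo '{"status": "Success", "capabilities": {"attach": false}}'
}

# Dispatch on the first argument, as a real driver binary would.
case "${1:-}" in
  init) flexvol_init ;;
esac
```

Until such an executable exists at the probed path, the kubelet logs this error triplet on every plugin-directory scan, which is why the same three lines recur throughout this log.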
Oct 13 05:52:19.385625 kubelet[2946]: E1013 05:52:19.385595 2946 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hqlq" podUID="5c9445f8-2a83-4f1a-ae81-7a18fbffe769" Oct 13 05:52:19.465985 containerd[1643]: time="2025-10-13T05:52:19.465844155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:19.466230 containerd[1643]: time="2025-10-13T05:52:19.466221955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Oct 13 05:52:19.466553 containerd[1643]: time="2025-10-13T05:52:19.466435073Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:19.467400 containerd[1643]: time="2025-10-13T05:52:19.467373554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:19.468016 containerd[1643]: time="2025-10-13T05:52:19.467744512Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.133249422s" Oct 13 05:52:19.468016 containerd[1643]: time="2025-10-13T05:52:19.467762436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Oct 13 05:52:19.469587 containerd[1643]: time="2025-10-13T05:52:19.469504763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 05:52:19.480958 containerd[1643]: time="2025-10-13T05:52:19.480931464Z" level=info msg="CreateContainer within sandbox \"c482d9d67ec43fa395180f1702de0b7edd8bfb726ad9966d014f04be8aa0dcd9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 05:52:19.486244 containerd[1643]: time="2025-10-13T05:52:19.485743018Z" level=info msg="Container c0b4847b619d16d0cd8b33558202e18d69636407e31af688421d309af121ad88: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:19.489053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1703880834.mount: Deactivated successfully. Oct 13 05:52:19.494915 containerd[1643]: time="2025-10-13T05:52:19.494882471Z" level=info msg="CreateContainer within sandbox \"c482d9d67ec43fa395180f1702de0b7edd8bfb726ad9966d014f04be8aa0dcd9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c0b4847b619d16d0cd8b33558202e18d69636407e31af688421d309af121ad88\"" Oct 13 05:52:19.495784 containerd[1643]: time="2025-10-13T05:52:19.495767029Z" level=info msg="StartContainer for \"c0b4847b619d16d0cd8b33558202e18d69636407e31af688421d309af121ad88\"" Oct 13 05:52:19.496405 containerd[1643]: time="2025-10-13T05:52:19.496388352Z" level=info msg="connecting to shim c0b4847b619d16d0cd8b33558202e18d69636407e31af688421d309af121ad88" address="unix:///run/containerd/s/763856834076bd2f89c58d92e80342c1bd7fc61a0e7ada05b887e81bc1e22a49" protocol=ttrpc version=3 Oct 13 05:52:19.516657 systemd[1]: Started cri-containerd-c0b4847b619d16d0cd8b33558202e18d69636407e31af688421d309af121ad88.scope - libcontainer container c0b4847b619d16d0cd8b33558202e18d69636407e31af688421d309af121ad88. 
Oct 13 05:52:19.556657 containerd[1643]: time="2025-10-13T05:52:19.556587027Z" level=info msg="StartContainer for \"c0b4847b619d16d0cd8b33558202e18d69636407e31af688421d309af121ad88\" returns successfully" Oct 13 05:52:20.453998 kubelet[2946]: I1013 05:52:20.452841 2946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-545886cc87-dkh58" podStartSLOduration=2.317999741 podStartE2EDuration="4.452830494s" podCreationTimestamp="2025-10-13 05:52:16 +0000 UTC" firstStartedPulling="2025-10-13 05:52:17.334245365 +0000 UTC m=+19.091930100" lastFinishedPulling="2025-10-13 05:52:19.469076117 +0000 UTC m=+21.226760853" observedRunningTime="2025-10-13 05:52:20.452233548 +0000 UTC m=+22.209918295" watchObservedRunningTime="2025-10-13 05:52:20.452830494 +0000 UTC m=+22.210515240" Oct 13 05:52:20.517233 kubelet[2946]: E1013 05:52:20.517162 2946 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:52:20.517233 kubelet[2946]: W1013 05:52:20.517183 2946 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:52:20.517233 kubelet[2946]: E1013 05:52:20.517201 2946 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:52:20.871995 containerd[1643]: time="2025-10-13T05:52:20.871965339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:20.872413 containerd[1643]: time="2025-10-13T05:52:20.872380719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Oct 13 05:52:20.872634 containerd[1643]: time="2025-10-13T05:52:20.872616999Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:20.873931 containerd[1643]: time="2025-10-13T05:52:20.873876938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.404355753s" Oct 13 05:52:20.873931 containerd[1643]: time="2025-10-13T05:52:20.873895028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Oct 13 05:52:20.874124 containerd[1643]: time="2025-10-13T05:52:20.874109030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:20.876658 containerd[1643]: time="2025-10-13T05:52:20.876635634Z" level=info msg="CreateContainer within sandbox \"251196e69d0b89ccc938485a2664143d468affacc8d2e7c296cb8d76a3e7feed\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 05:52:20.882795 containerd[1643]: time="2025-10-13T05:52:20.882757544Z" level=info msg="Container bd30e3b5ded37e33c1d55aa19f168ccbb881d5ed305a0baa55d30e42222940f8: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:20.884498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount196305317.mount: Deactivated successfully. Oct 13 05:52:20.893360 containerd[1643]: time="2025-10-13T05:52:20.893322921Z" level=info msg="CreateContainer within sandbox \"251196e69d0b89ccc938485a2664143d468affacc8d2e7c296cb8d76a3e7feed\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bd30e3b5ded37e33c1d55aa19f168ccbb881d5ed305a0baa55d30e42222940f8\"" Oct 13 05:52:20.895585 containerd[1643]: time="2025-10-13T05:52:20.894621010Z" level=info msg="StartContainer for \"bd30e3b5ded37e33c1d55aa19f168ccbb881d5ed305a0baa55d30e42222940f8\"" Oct 13 05:52:20.896053 containerd[1643]: time="2025-10-13T05:52:20.896034862Z" level=info msg="connecting to shim bd30e3b5ded37e33c1d55aa19f168ccbb881d5ed305a0baa55d30e42222940f8" address="unix:///run/containerd/s/0f16b1b1dfcd2fa458b36c99888aead6a9e7aaef070ee1646f0e5c360546dc95" protocol=ttrpc version=3 Oct 13 05:52:20.916663 systemd[1]: Started cri-containerd-bd30e3b5ded37e33c1d55aa19f168ccbb881d5ed305a0baa55d30e42222940f8.scope - libcontainer container bd30e3b5ded37e33c1d55aa19f168ccbb881d5ed305a0baa55d30e42222940f8. Oct 13 05:52:20.943575 containerd[1643]: time="2025-10-13T05:52:20.943500688Z" level=info msg="StartContainer for \"bd30e3b5ded37e33c1d55aa19f168ccbb881d5ed305a0baa55d30e42222940f8\" returns successfully" Oct 13 05:52:20.950618 systemd[1]: cri-containerd-bd30e3b5ded37e33c1d55aa19f168ccbb881d5ed305a0baa55d30e42222940f8.scope: Deactivated successfully. 
Oct 13 05:52:20.958140 containerd[1643]: time="2025-10-13T05:52:20.958015123Z" level=info msg="received exit event container_id:\"bd30e3b5ded37e33c1d55aa19f168ccbb881d5ed305a0baa55d30e42222940f8\" id:\"bd30e3b5ded37e33c1d55aa19f168ccbb881d5ed305a0baa55d30e42222940f8\" pid:3612 exited_at:{seconds:1760334740 nanos:952824054}" Oct 13 05:52:20.969198 containerd[1643]: time="2025-10-13T05:52:20.969170112Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd30e3b5ded37e33c1d55aa19f168ccbb881d5ed305a0baa55d30e42222940f8\" id:\"bd30e3b5ded37e33c1d55aa19f168ccbb881d5ed305a0baa55d30e42222940f8\" pid:3612 exited_at:{seconds:1760334740 nanos:952824054}" Oct 13 05:52:20.997665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd30e3b5ded37e33c1d55aa19f168ccbb881d5ed305a0baa55d30e42222940f8-rootfs.mount: Deactivated successfully. Oct 13 05:52:21.380948 kubelet[2946]: E1013 05:52:21.380716 2946 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hqlq" podUID="5c9445f8-2a83-4f1a-ae81-7a18fbffe769" Oct 13 05:52:21.442879 kubelet[2946]: I1013 05:52:21.442865 2946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:52:21.443868 containerd[1643]: time="2025-10-13T05:52:21.443846881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 05:52:23.380202 kubelet[2946]: E1013 05:52:23.380009 2946 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hqlq" podUID="5c9445f8-2a83-4f1a-ae81-7a18fbffe769" Oct 13 05:52:24.210562 containerd[1643]: time="2025-10-13T05:52:24.210251114Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:24.210871 containerd[1643]: time="2025-10-13T05:52:24.210743481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Oct 13 05:52:24.211231 containerd[1643]: time="2025-10-13T05:52:24.211207036Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:24.212661 containerd[1643]: time="2025-10-13T05:52:24.212632093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:24.213271 containerd[1643]: time="2025-10-13T05:52:24.213182063Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.769313948s" Oct 13 05:52:24.213271 containerd[1643]: time="2025-10-13T05:52:24.213204032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Oct 13 05:52:24.216967 containerd[1643]: time="2025-10-13T05:52:24.216924309Z" level=info msg="CreateContainer within sandbox \"251196e69d0b89ccc938485a2664143d468affacc8d2e7c296cb8d76a3e7feed\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 05:52:24.222667 containerd[1643]: time="2025-10-13T05:52:24.222643888Z" level=info msg="Container 518ffaa52002f131b610a5f78fbc7d1bf6f16e22e8177ba5cb4de6b30f9f4a57: CDI devices from CRI 
Config.CDIDevices: []" Oct 13 05:52:24.227579 containerd[1643]: time="2025-10-13T05:52:24.227553525Z" level=info msg="CreateContainer within sandbox \"251196e69d0b89ccc938485a2664143d468affacc8d2e7c296cb8d76a3e7feed\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"518ffaa52002f131b610a5f78fbc7d1bf6f16e22e8177ba5cb4de6b30f9f4a57\"" Oct 13 05:52:24.228320 containerd[1643]: time="2025-10-13T05:52:24.228300903Z" level=info msg="StartContainer for \"518ffaa52002f131b610a5f78fbc7d1bf6f16e22e8177ba5cb4de6b30f9f4a57\"" Oct 13 05:52:24.229081 containerd[1643]: time="2025-10-13T05:52:24.229063908Z" level=info msg="connecting to shim 518ffaa52002f131b610a5f78fbc7d1bf6f16e22e8177ba5cb4de6b30f9f4a57" address="unix:///run/containerd/s/0f16b1b1dfcd2fa458b36c99888aead6a9e7aaef070ee1646f0e5c360546dc95" protocol=ttrpc version=3 Oct 13 05:52:24.252670 systemd[1]: Started cri-containerd-518ffaa52002f131b610a5f78fbc7d1bf6f16e22e8177ba5cb4de6b30f9f4a57.scope - libcontainer container 518ffaa52002f131b610a5f78fbc7d1bf6f16e22e8177ba5cb4de6b30f9f4a57. Oct 13 05:52:24.281174 containerd[1643]: time="2025-10-13T05:52:24.281123650Z" level=info msg="StartContainer for \"518ffaa52002f131b610a5f78fbc7d1bf6f16e22e8177ba5cb4de6b30f9f4a57\" returns successfully" Oct 13 05:52:25.379937 kubelet[2946]: E1013 05:52:25.379905 2946 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hqlq" podUID="5c9445f8-2a83-4f1a-ae81-7a18fbffe769" Oct 13 05:52:25.919132 systemd[1]: cri-containerd-518ffaa52002f131b610a5f78fbc7d1bf6f16e22e8177ba5cb4de6b30f9f4a57.scope: Deactivated successfully. 
Oct 13 05:52:25.919630 systemd[1]: cri-containerd-518ffaa52002f131b610a5f78fbc7d1bf6f16e22e8177ba5cb4de6b30f9f4a57.scope: Consumed 311ms CPU time, 158.6M memory peak, 12K read from disk, 171.3M written to disk. Oct 13 05:52:25.920852 containerd[1643]: time="2025-10-13T05:52:25.920746080Z" level=info msg="received exit event container_id:\"518ffaa52002f131b610a5f78fbc7d1bf6f16e22e8177ba5cb4de6b30f9f4a57\" id:\"518ffaa52002f131b610a5f78fbc7d1bf6f16e22e8177ba5cb4de6b30f9f4a57\" pid:3670 exited_at:{seconds:1760334745 nanos:920364016}" Oct 13 05:52:25.922537 containerd[1643]: time="2025-10-13T05:52:25.922520222Z" level=info msg="TaskExit event in podsandbox handler container_id:\"518ffaa52002f131b610a5f78fbc7d1bf6f16e22e8177ba5cb4de6b30f9f4a57\" id:\"518ffaa52002f131b610a5f78fbc7d1bf6f16e22e8177ba5cb4de6b30f9f4a57\" pid:3670 exited_at:{seconds:1760334745 nanos:920364016}" Oct 13 05:52:25.971266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-518ffaa52002f131b610a5f78fbc7d1bf6f16e22e8177ba5cb4de6b30f9f4a57-rootfs.mount: Deactivated successfully. Oct 13 05:52:26.013009 kubelet[2946]: I1013 05:52:26.012995 2946 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 13 05:52:26.066565 systemd[1]: Created slice kubepods-burstable-pod7eaeee97_3247_46a8_9ef2_3b297a8c1e04.slice - libcontainer container kubepods-burstable-pod7eaeee97_3247_46a8_9ef2_3b297a8c1e04.slice. 
Oct 13 05:52:26.074550 kubelet[2946]: I1013 05:52:26.074375 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtxr8\" (UniqueName: \"kubernetes.io/projected/7eaeee97-3247-46a8-9ef2-3b297a8c1e04-kube-api-access-rtxr8\") pod \"coredns-66bc5c9577-nl4pb\" (UID: \"7eaeee97-3247-46a8-9ef2-3b297a8c1e04\") " pod="kube-system/coredns-66bc5c9577-nl4pb" Oct 13 05:52:26.074550 kubelet[2946]: I1013 05:52:26.074398 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7eaeee97-3247-46a8-9ef2-3b297a8c1e04-config-volume\") pod \"coredns-66bc5c9577-nl4pb\" (UID: \"7eaeee97-3247-46a8-9ef2-3b297a8c1e04\") " pod="kube-system/coredns-66bc5c9577-nl4pb" Oct 13 05:52:26.079614 systemd[1]: Created slice kubepods-besteffort-pod3393829f_fb4c_4d93_8969_62a461d00810.slice - libcontainer container kubepods-besteffort-pod3393829f_fb4c_4d93_8969_62a461d00810.slice. Oct 13 05:52:26.085410 systemd[1]: Created slice kubepods-besteffort-pod0eba1397_50d5_488f_94c0_3c1a3b8792a7.slice - libcontainer container kubepods-besteffort-pod0eba1397_50d5_488f_94c0_3c1a3b8792a7.slice. Oct 13 05:52:26.089080 systemd[1]: Created slice kubepods-burstable-pod44debfcd_ef12_4ead_bdc8_ed45ef8b190e.slice - libcontainer container kubepods-burstable-pod44debfcd_ef12_4ead_bdc8_ed45ef8b190e.slice. Oct 13 05:52:26.092539 systemd[1]: Created slice kubepods-besteffort-poda83d05c4_9848_49d0_89bd_67d758be0b93.slice - libcontainer container kubepods-besteffort-poda83d05c4_9848_49d0_89bd_67d758be0b93.slice. Oct 13 05:52:26.096777 systemd[1]: Created slice kubepods-besteffort-pode9d214b6_f41a_4546_9795_8f0393cb97df.slice - libcontainer container kubepods-besteffort-pode9d214b6_f41a_4546_9795_8f0393cb97df.slice. 
Oct 13 05:52:26.101105 systemd[1]: Created slice kubepods-besteffort-pod5e24d0aa_f0fd_4652_9b7a_0891ae94aa4f.slice - libcontainer container kubepods-besteffort-pod5e24d0aa_f0fd_4652_9b7a_0891ae94aa4f.slice. Oct 13 05:52:26.176173 kubelet[2946]: I1013 05:52:26.175645 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e9d214b6-f41a-4546-9795-8f0393cb97df-goldmane-key-pair\") pod \"goldmane-854f97d977-q72x8\" (UID: \"e9d214b6-f41a-4546-9795-8f0393cb97df\") " pod="calico-system/goldmane-854f97d977-q72x8" Oct 13 05:52:26.176173 kubelet[2946]: I1013 05:52:26.175723 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7w9h\" (UniqueName: \"kubernetes.io/projected/e9d214b6-f41a-4546-9795-8f0393cb97df-kube-api-access-n7w9h\") pod \"goldmane-854f97d977-q72x8\" (UID: \"e9d214b6-f41a-4546-9795-8f0393cb97df\") " pod="calico-system/goldmane-854f97d977-q72x8" Oct 13 05:52:26.176173 kubelet[2946]: I1013 05:52:26.175744 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a83d05c4-9848-49d0-89bd-67d758be0b93-whisker-backend-key-pair\") pod \"whisker-586b8f6495-mmcbq\" (UID: \"a83d05c4-9848-49d0-89bd-67d758be0b93\") " pod="calico-system/whisker-586b8f6495-mmcbq" Oct 13 05:52:26.176173 kubelet[2946]: I1013 05:52:26.175768 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvt9l\" (UniqueName: \"kubernetes.io/projected/3393829f-fb4c-4d93-8969-62a461d00810-kube-api-access-gvt9l\") pod \"calico-apiserver-6785666f76-kmpv4\" (UID: \"3393829f-fb4c-4d93-8969-62a461d00810\") " pod="calico-apiserver/calico-apiserver-6785666f76-kmpv4" Oct 13 05:52:26.176173 kubelet[2946]: I1013 05:52:26.175785 2946 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rck2w\" (UniqueName: \"kubernetes.io/projected/5e24d0aa-f0fd-4652-9b7a-0891ae94aa4f-kube-api-access-rck2w\") pod \"calico-apiserver-6785666f76-tdh2j\" (UID: \"5e24d0aa-f0fd-4652-9b7a-0891ae94aa4f\") " pod="calico-apiserver/calico-apiserver-6785666f76-tdh2j" Oct 13 05:52:26.176883 kubelet[2946]: I1013 05:52:26.175803 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbrvm\" (UniqueName: \"kubernetes.io/projected/44debfcd-ef12-4ead-bdc8-ed45ef8b190e-kube-api-access-cbrvm\") pod \"coredns-66bc5c9577-f2t27\" (UID: \"44debfcd-ef12-4ead-bdc8-ed45ef8b190e\") " pod="kube-system/coredns-66bc5c9577-f2t27" Oct 13 05:52:26.176883 kubelet[2946]: I1013 05:52:26.175896 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3393829f-fb4c-4d93-8969-62a461d00810-calico-apiserver-certs\") pod \"calico-apiserver-6785666f76-kmpv4\" (UID: \"3393829f-fb4c-4d93-8969-62a461d00810\") " pod="calico-apiserver/calico-apiserver-6785666f76-kmpv4" Oct 13 05:52:26.176883 kubelet[2946]: I1013 05:52:26.175918 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9d214b6-f41a-4546-9795-8f0393cb97df-goldmane-ca-bundle\") pod \"goldmane-854f97d977-q72x8\" (UID: \"e9d214b6-f41a-4546-9795-8f0393cb97df\") " pod="calico-system/goldmane-854f97d977-q72x8" Oct 13 05:52:26.176883 kubelet[2946]: I1013 05:52:26.175940 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5e24d0aa-f0fd-4652-9b7a-0891ae94aa4f-calico-apiserver-certs\") pod \"calico-apiserver-6785666f76-tdh2j\" (UID: \"5e24d0aa-f0fd-4652-9b7a-0891ae94aa4f\") " 
pod="calico-apiserver/calico-apiserver-6785666f76-tdh2j" Oct 13 05:52:26.176883 kubelet[2946]: I1013 05:52:26.175949 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0eba1397-50d5-488f-94c0-3c1a3b8792a7-tigera-ca-bundle\") pod \"calico-kube-controllers-6f7646d5d5-2mn25\" (UID: \"0eba1397-50d5-488f-94c0-3c1a3b8792a7\") " pod="calico-system/calico-kube-controllers-6f7646d5d5-2mn25" Oct 13 05:52:26.177038 kubelet[2946]: I1013 05:52:26.175964 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfv7c\" (UniqueName: \"kubernetes.io/projected/0eba1397-50d5-488f-94c0-3c1a3b8792a7-kube-api-access-xfv7c\") pod \"calico-kube-controllers-6f7646d5d5-2mn25\" (UID: \"0eba1397-50d5-488f-94c0-3c1a3b8792a7\") " pod="calico-system/calico-kube-controllers-6f7646d5d5-2mn25" Oct 13 05:52:26.177038 kubelet[2946]: I1013 05:52:26.175976 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44debfcd-ef12-4ead-bdc8-ed45ef8b190e-config-volume\") pod \"coredns-66bc5c9577-f2t27\" (UID: \"44debfcd-ef12-4ead-bdc8-ed45ef8b190e\") " pod="kube-system/coredns-66bc5c9577-f2t27" Oct 13 05:52:26.177038 kubelet[2946]: I1013 05:52:26.175987 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a83d05c4-9848-49d0-89bd-67d758be0b93-whisker-ca-bundle\") pod \"whisker-586b8f6495-mmcbq\" (UID: \"a83d05c4-9848-49d0-89bd-67d758be0b93\") " pod="calico-system/whisker-586b8f6495-mmcbq" Oct 13 05:52:26.177038 kubelet[2946]: I1013 05:52:26.175996 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9tsb\" (UniqueName: 
\"kubernetes.io/projected/a83d05c4-9848-49d0-89bd-67d758be0b93-kube-api-access-k9tsb\") pod \"whisker-586b8f6495-mmcbq\" (UID: \"a83d05c4-9848-49d0-89bd-67d758be0b93\") " pod="calico-system/whisker-586b8f6495-mmcbq" Oct 13 05:52:26.177038 kubelet[2946]: I1013 05:52:26.176015 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d214b6-f41a-4546-9795-8f0393cb97df-config\") pod \"goldmane-854f97d977-q72x8\" (UID: \"e9d214b6-f41a-4546-9795-8f0393cb97df\") " pod="calico-system/goldmane-854f97d977-q72x8" Oct 13 05:52:26.375690 containerd[1643]: time="2025-10-13T05:52:26.375658101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nl4pb,Uid:7eaeee97-3247-46a8-9ef2-3b297a8c1e04,Namespace:kube-system,Attempt:0,}" Oct 13 05:52:26.384736 containerd[1643]: time="2025-10-13T05:52:26.384687603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6785666f76-kmpv4,Uid:3393829f-fb4c-4d93-8969-62a461d00810,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:52:26.401419 containerd[1643]: time="2025-10-13T05:52:26.401390912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-q72x8,Uid:e9d214b6-f41a-4546-9795-8f0393cb97df,Namespace:calico-system,Attempt:0,}" Oct 13 05:52:26.401846 containerd[1643]: time="2025-10-13T05:52:26.401561490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f7646d5d5-2mn25,Uid:0eba1397-50d5-488f-94c0-3c1a3b8792a7,Namespace:calico-system,Attempt:0,}" Oct 13 05:52:26.401906 containerd[1643]: time="2025-10-13T05:52:26.401595037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-586b8f6495-mmcbq,Uid:a83d05c4-9848-49d0-89bd-67d758be0b93,Namespace:calico-system,Attempt:0,}" Oct 13 05:52:26.402982 containerd[1643]: time="2025-10-13T05:52:26.401598056Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-f2t27,Uid:44debfcd-ef12-4ead-bdc8-ed45ef8b190e,Namespace:kube-system,Attempt:0,}" Oct 13 05:52:26.428748 containerd[1643]: time="2025-10-13T05:52:26.428517598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6785666f76-tdh2j,Uid:5e24d0aa-f0fd-4652-9b7a-0891ae94aa4f,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:52:26.464736 containerd[1643]: time="2025-10-13T05:52:26.464485040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 05:52:26.834351 containerd[1643]: time="2025-10-13T05:52:26.834318287Z" level=error msg="Failed to destroy network for sandbox \"a43e260d9868be4702b264f519528c84eb1223a9223e9803e61817f921e39fde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.834783 containerd[1643]: time="2025-10-13T05:52:26.834742330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-q72x8,Uid:e9d214b6-f41a-4546-9795-8f0393cb97df,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a43e260d9868be4702b264f519528c84eb1223a9223e9803e61817f921e39fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.835606 containerd[1643]: time="2025-10-13T05:52:26.835554249Z" level=error msg="Failed to destroy network for sandbox \"04bb734865afa7a101a9481570444f703872b67dff172a6572638843cd2dd6ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.836855 containerd[1643]: time="2025-10-13T05:52:26.836757160Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:coredns-66bc5c9577-f2t27,Uid:44debfcd-ef12-4ead-bdc8-ed45ef8b190e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"04bb734865afa7a101a9481570444f703872b67dff172a6572638843cd2dd6ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.843312 kubelet[2946]: E1013 05:52:26.841356 2946 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a43e260d9868be4702b264f519528c84eb1223a9223e9803e61817f921e39fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.843312 kubelet[2946]: E1013 05:52:26.841476 2946 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a43e260d9868be4702b264f519528c84eb1223a9223e9803e61817f921e39fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-q72x8" Oct 13 05:52:26.843312 kubelet[2946]: E1013 05:52:26.841500 2946 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a43e260d9868be4702b264f519528c84eb1223a9223e9803e61817f921e39fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-q72x8" Oct 13 05:52:26.843898 kubelet[2946]: E1013 05:52:26.843045 2946 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-854f97d977-q72x8_calico-system(e9d214b6-f41a-4546-9795-8f0393cb97df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-854f97d977-q72x8_calico-system(e9d214b6-f41a-4546-9795-8f0393cb97df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a43e260d9868be4702b264f519528c84eb1223a9223e9803e61817f921e39fde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-854f97d977-q72x8" podUID="e9d214b6-f41a-4546-9795-8f0393cb97df" Oct 13 05:52:26.851434 containerd[1643]: time="2025-10-13T05:52:26.851397911Z" level=error msg="Failed to destroy network for sandbox \"7357839653843a2d5162c6c7140744db8a35bf1244e1bcae575b9995967d7eb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.851862 containerd[1643]: time="2025-10-13T05:52:26.851841834Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nl4pb,Uid:7eaeee97-3247-46a8-9ef2-3b297a8c1e04,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7357839653843a2d5162c6c7140744db8a35bf1244e1bcae575b9995967d7eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.852192 kubelet[2946]: E1013 05:52:26.852061 2946 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7357839653843a2d5162c6c7140744db8a35bf1244e1bcae575b9995967d7eb0\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.852192 kubelet[2946]: E1013 05:52:26.852065 2946 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04bb734865afa7a101a9481570444f703872b67dff172a6572638843cd2dd6ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.852873 kubelet[2946]: E1013 05:52:26.852857 2946 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04bb734865afa7a101a9481570444f703872b67dff172a6572638843cd2dd6ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-f2t27" Oct 13 05:52:26.852910 kubelet[2946]: E1013 05:52:26.852876 2946 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04bb734865afa7a101a9481570444f703872b67dff172a6572638843cd2dd6ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-f2t27" Oct 13 05:52:26.852933 kubelet[2946]: E1013 05:52:26.852919 2946 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-f2t27_kube-system(44debfcd-ef12-4ead-bdc8-ed45ef8b190e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-f2t27_kube-system(44debfcd-ef12-4ead-bdc8-ed45ef8b190e)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"04bb734865afa7a101a9481570444f703872b67dff172a6572638843cd2dd6ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-f2t27" podUID="44debfcd-ef12-4ead-bdc8-ed45ef8b190e" Oct 13 05:52:26.853366 kubelet[2946]: E1013 05:52:26.852096 2946 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7357839653843a2d5162c6c7140744db8a35bf1244e1bcae575b9995967d7eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nl4pb" Oct 13 05:52:26.853903 kubelet[2946]: E1013 05:52:26.853447 2946 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7357839653843a2d5162c6c7140744db8a35bf1244e1bcae575b9995967d7eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nl4pb" Oct 13 05:52:26.855090 kubelet[2946]: E1013 05:52:26.854565 2946 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nl4pb_kube-system(7eaeee97-3247-46a8-9ef2-3b297a8c1e04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nl4pb_kube-system(7eaeee97-3247-46a8-9ef2-3b297a8c1e04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7357839653843a2d5162c6c7140744db8a35bf1244e1bcae575b9995967d7eb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nl4pb" podUID="7eaeee97-3247-46a8-9ef2-3b297a8c1e04" Oct 13 05:52:26.866374 containerd[1643]: time="2025-10-13T05:52:26.866231757Z" level=error msg="Failed to destroy network for sandbox \"3633c2a566f6179ada9f030b77ae8bf88698cb543f45ced7957e12617a5c0399\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.867847 containerd[1643]: time="2025-10-13T05:52:26.867723591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-586b8f6495-mmcbq,Uid:a83d05c4-9848-49d0-89bd-67d758be0b93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3633c2a566f6179ada9f030b77ae8bf88698cb543f45ced7957e12617a5c0399\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.870347 kubelet[2946]: E1013 05:52:26.868244 2946 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3633c2a566f6179ada9f030b77ae8bf88698cb543f45ced7957e12617a5c0399\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.871366 kubelet[2946]: E1013 05:52:26.870695 2946 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3633c2a566f6179ada9f030b77ae8bf88698cb543f45ced7957e12617a5c0399\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-586b8f6495-mmcbq" Oct 13 05:52:26.871366 kubelet[2946]: E1013 05:52:26.870716 2946 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3633c2a566f6179ada9f030b77ae8bf88698cb543f45ced7957e12617a5c0399\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-586b8f6495-mmcbq" Oct 13 05:52:26.871366 kubelet[2946]: E1013 05:52:26.870757 2946 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-586b8f6495-mmcbq_calico-system(a83d05c4-9848-49d0-89bd-67d758be0b93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-586b8f6495-mmcbq_calico-system(a83d05c4-9848-49d0-89bd-67d758be0b93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3633c2a566f6179ada9f030b77ae8bf88698cb543f45ced7957e12617a5c0399\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-586b8f6495-mmcbq" podUID="a83d05c4-9848-49d0-89bd-67d758be0b93" Oct 13 05:52:26.875014 containerd[1643]: time="2025-10-13T05:52:26.874258388Z" level=error msg="Failed to destroy network for sandbox \"6bdbb083559f974ef77534031680a234c0c33dcabfd740db5611a6b5846a8b70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.875981 containerd[1643]: time="2025-10-13T05:52:26.875949082Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6f7646d5d5-2mn25,Uid:0eba1397-50d5-488f-94c0-3c1a3b8792a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bdbb083559f974ef77534031680a234c0c33dcabfd740db5611a6b5846a8b70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.877319 kubelet[2946]: E1013 05:52:26.876084 2946 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bdbb083559f974ef77534031680a234c0c33dcabfd740db5611a6b5846a8b70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.877319 kubelet[2946]: E1013 05:52:26.876116 2946 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bdbb083559f974ef77534031680a234c0c33dcabfd740db5611a6b5846a8b70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f7646d5d5-2mn25" Oct 13 05:52:26.877319 kubelet[2946]: E1013 05:52:26.876131 2946 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bdbb083559f974ef77534031680a234c0c33dcabfd740db5611a6b5846a8b70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f7646d5d5-2mn25" Oct 13 05:52:26.877443 kubelet[2946]: E1013 
05:52:26.876167 2946 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f7646d5d5-2mn25_calico-system(0eba1397-50d5-488f-94c0-3c1a3b8792a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f7646d5d5-2mn25_calico-system(0eba1397-50d5-488f-94c0-3c1a3b8792a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6bdbb083559f974ef77534031680a234c0c33dcabfd740db5611a6b5846a8b70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f7646d5d5-2mn25" podUID="0eba1397-50d5-488f-94c0-3c1a3b8792a7" Oct 13 05:52:26.879491 containerd[1643]: time="2025-10-13T05:52:26.879205996Z" level=error msg="Failed to destroy network for sandbox \"6822c7d3c7f72880dde1668bba77ab36ebdac22408fb8c931552e7c6dadd1c3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.880672 containerd[1643]: time="2025-10-13T05:52:26.880654420Z" level=error msg="Failed to destroy network for sandbox \"c33549e3b89c4007c6f382afdab8eabdb046da8e02415d8a7914c83cab98154d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.880863 containerd[1643]: time="2025-10-13T05:52:26.880847666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6785666f76-kmpv4,Uid:3393829f-fb4c-4d93-8969-62a461d00810,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6822c7d3c7f72880dde1668bba77ab36ebdac22408fb8c931552e7c6dadd1c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.881162 kubelet[2946]: E1013 05:52:26.881114 2946 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6822c7d3c7f72880dde1668bba77ab36ebdac22408fb8c931552e7c6dadd1c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.881162 kubelet[2946]: E1013 05:52:26.881158 2946 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6822c7d3c7f72880dde1668bba77ab36ebdac22408fb8c931552e7c6dadd1c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6785666f76-kmpv4" Oct 13 05:52:26.882042 kubelet[2946]: E1013 05:52:26.881171 2946 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6822c7d3c7f72880dde1668bba77ab36ebdac22408fb8c931552e7c6dadd1c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6785666f76-kmpv4" Oct 13 05:52:26.882042 kubelet[2946]: E1013 05:52:26.881207 2946 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6785666f76-kmpv4_calico-apiserver(3393829f-fb4c-4d93-8969-62a461d00810)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-6785666f76-kmpv4_calico-apiserver(3393829f-fb4c-4d93-8969-62a461d00810)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6822c7d3c7f72880dde1668bba77ab36ebdac22408fb8c931552e7c6dadd1c3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6785666f76-kmpv4" podUID="3393829f-fb4c-4d93-8969-62a461d00810" Oct 13 05:52:26.885935 containerd[1643]: time="2025-10-13T05:52:26.885876629Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6785666f76-tdh2j,Uid:5e24d0aa-f0fd-4652-9b7a-0891ae94aa4f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c33549e3b89c4007c6f382afdab8eabdb046da8e02415d8a7914c83cab98154d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.886154 kubelet[2946]: E1013 05:52:26.886131 2946 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c33549e3b89c4007c6f382afdab8eabdb046da8e02415d8a7914c83cab98154d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:26.886237 kubelet[2946]: E1013 05:52:26.886219 2946 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c33549e3b89c4007c6f382afdab8eabdb046da8e02415d8a7914c83cab98154d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6785666f76-tdh2j" Oct 13 05:52:26.886312 kubelet[2946]: E1013 05:52:26.886275 2946 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c33549e3b89c4007c6f382afdab8eabdb046da8e02415d8a7914c83cab98154d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6785666f76-tdh2j" Oct 13 05:52:26.886378 kubelet[2946]: E1013 05:52:26.886361 2946 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6785666f76-tdh2j_calico-apiserver(5e24d0aa-f0fd-4652-9b7a-0891ae94aa4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6785666f76-tdh2j_calico-apiserver(5e24d0aa-f0fd-4652-9b7a-0891ae94aa4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c33549e3b89c4007c6f382afdab8eabdb046da8e02415d8a7914c83cab98154d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6785666f76-tdh2j" podUID="5e24d0aa-f0fd-4652-9b7a-0891ae94aa4f" Oct 13 05:52:27.384035 systemd[1]: Created slice kubepods-besteffort-pod5c9445f8_2a83_4f1a_ae81_7a18fbffe769.slice - libcontainer container kubepods-besteffort-pod5c9445f8_2a83_4f1a_ae81_7a18fbffe769.slice. 
Oct 13 05:52:27.386337 containerd[1643]: time="2025-10-13T05:52:27.386154396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hqlq,Uid:5c9445f8-2a83-4f1a-ae81-7a18fbffe769,Namespace:calico-system,Attempt:0,}" Oct 13 05:52:27.421723 containerd[1643]: time="2025-10-13T05:52:27.421691834Z" level=error msg="Failed to destroy network for sandbox \"437eeda8118b03b4b142e4da6cd91e9f02b4b007c01cc8165f92b92da1a63e21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:27.423744 systemd[1]: run-netns-cni\x2d8cca2944\x2dbeb7\x2de0bd\x2d9bd3\x2d72156e2d0a86.mount: Deactivated successfully. Oct 13 05:52:27.424237 containerd[1643]: time="2025-10-13T05:52:27.424208382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hqlq,Uid:5c9445f8-2a83-4f1a-ae81-7a18fbffe769,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"437eeda8118b03b4b142e4da6cd91e9f02b4b007c01cc8165f92b92da1a63e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:27.424643 kubelet[2946]: E1013 05:52:27.424425 2946 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"437eeda8118b03b4b142e4da6cd91e9f02b4b007c01cc8165f92b92da1a63e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:52:27.424643 kubelet[2946]: E1013 05:52:27.424472 2946 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"437eeda8118b03b4b142e4da6cd91e9f02b4b007c01cc8165f92b92da1a63e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6hqlq" Oct 13 05:52:27.424643 kubelet[2946]: E1013 05:52:27.424498 2946 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"437eeda8118b03b4b142e4da6cd91e9f02b4b007c01cc8165f92b92da1a63e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6hqlq" Oct 13 05:52:27.425677 kubelet[2946]: E1013 05:52:27.425649 2946 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6hqlq_calico-system(5c9445f8-2a83-4f1a-ae81-7a18fbffe769)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6hqlq_calico-system(5c9445f8-2a83-4f1a-ae81-7a18fbffe769)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"437eeda8118b03b4b142e4da6cd91e9f02b4b007c01cc8165f92b92da1a63e21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6hqlq" podUID="5c9445f8-2a83-4f1a-ae81-7a18fbffe769" Oct 13 05:52:30.044004 kubelet[2946]: I1013 05:52:30.043977 2946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:52:30.976079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1333987446.mount: Deactivated successfully. 
Oct 13 05:52:31.114205 containerd[1643]: time="2025-10-13T05:52:31.114163244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:31.120547 containerd[1643]: time="2025-10-13T05:52:31.120496181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Oct 13 05:52:31.152552 containerd[1643]: time="2025-10-13T05:52:31.151945122Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:31.156256 containerd[1643]: time="2025-10-13T05:52:31.156234057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:31.157787 containerd[1643]: time="2025-10-13T05:52:31.157772154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 4.692005621s" Oct 13 05:52:31.157848 containerd[1643]: time="2025-10-13T05:52:31.157839636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Oct 13 05:52:31.245662 containerd[1643]: time="2025-10-13T05:52:31.245528108Z" level=info msg="CreateContainer within sandbox \"251196e69d0b89ccc938485a2664143d468affacc8d2e7c296cb8d76a3e7feed\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 05:52:31.336181 containerd[1643]: time="2025-10-13T05:52:31.333520152Z" level=info msg="Container 
f1f348e0fda6b17c4fc4a7fdd9399ac7116bc1fcb6716eb0f74c96d231c537d6: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:31.335755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2950991915.mount: Deactivated successfully. Oct 13 05:52:31.465707 containerd[1643]: time="2025-10-13T05:52:31.465641836Z" level=info msg="CreateContainer within sandbox \"251196e69d0b89ccc938485a2664143d468affacc8d2e7c296cb8d76a3e7feed\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f1f348e0fda6b17c4fc4a7fdd9399ac7116bc1fcb6716eb0f74c96d231c537d6\"" Oct 13 05:52:31.466631 containerd[1643]: time="2025-10-13T05:52:31.466612551Z" level=info msg="StartContainer for \"f1f348e0fda6b17c4fc4a7fdd9399ac7116bc1fcb6716eb0f74c96d231c537d6\"" Oct 13 05:52:31.469787 containerd[1643]: time="2025-10-13T05:52:31.469719845Z" level=info msg="connecting to shim f1f348e0fda6b17c4fc4a7fdd9399ac7116bc1fcb6716eb0f74c96d231c537d6" address="unix:///run/containerd/s/0f16b1b1dfcd2fa458b36c99888aead6a9e7aaef070ee1646f0e5c360546dc95" protocol=ttrpc version=3 Oct 13 05:52:31.631651 systemd[1]: Started cri-containerd-f1f348e0fda6b17c4fc4a7fdd9399ac7116bc1fcb6716eb0f74c96d231c537d6.scope - libcontainer container f1f348e0fda6b17c4fc4a7fdd9399ac7116bc1fcb6716eb0f74c96d231c537d6. Oct 13 05:52:31.667219 containerd[1643]: time="2025-10-13T05:52:31.667154549Z" level=info msg="StartContainer for \"f1f348e0fda6b17c4fc4a7fdd9399ac7116bc1fcb6716eb0f74c96d231c537d6\" returns successfully" Oct 13 05:52:32.187281 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 05:52:32.193658 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 13 05:52:32.498611 kubelet[2946]: I1013 05:52:32.497607 2946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rx7wn" podStartSLOduration=1.9858552999999999 podStartE2EDuration="15.497596214s" podCreationTimestamp="2025-10-13 05:52:17 +0000 UTC" firstStartedPulling="2025-10-13 05:52:17.646595415 +0000 UTC m=+19.404280150" lastFinishedPulling="2025-10-13 05:52:31.158336329 +0000 UTC m=+32.916021064" observedRunningTime="2025-10-13 05:52:32.496888475 +0000 UTC m=+34.254573216" watchObservedRunningTime="2025-10-13 05:52:32.497596214 +0000 UTC m=+34.255280949" Oct 13 05:52:32.515520 kubelet[2946]: I1013 05:52:32.515484 2946 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a83d05c4-9848-49d0-89bd-67d758be0b93-whisker-ca-bundle\") pod \"a83d05c4-9848-49d0-89bd-67d758be0b93\" (UID: \"a83d05c4-9848-49d0-89bd-67d758be0b93\") " Oct 13 05:52:32.515656 kubelet[2946]: I1013 05:52:32.515551 2946 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9tsb\" (UniqueName: \"kubernetes.io/projected/a83d05c4-9848-49d0-89bd-67d758be0b93-kube-api-access-k9tsb\") pod \"a83d05c4-9848-49d0-89bd-67d758be0b93\" (UID: \"a83d05c4-9848-49d0-89bd-67d758be0b93\") " Oct 13 05:52:32.515656 kubelet[2946]: I1013 05:52:32.515578 2946 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a83d05c4-9848-49d0-89bd-67d758be0b93-whisker-backend-key-pair\") pod \"a83d05c4-9848-49d0-89bd-67d758be0b93\" (UID: \"a83d05c4-9848-49d0-89bd-67d758be0b93\") " Oct 13 05:52:32.525760 kubelet[2946]: I1013 05:52:32.525719 2946 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a83d05c4-9848-49d0-89bd-67d758be0b93-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"a83d05c4-9848-49d0-89bd-67d758be0b93" (UID: "a83d05c4-9848-49d0-89bd-67d758be0b93"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:52:32.527861 systemd[1]: var-lib-kubelet-pods-a83d05c4\x2d9848\x2d49d0\x2d89bd\x2d67d758be0b93-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk9tsb.mount: Deactivated successfully. Oct 13 05:52:32.532485 kubelet[2946]: I1013 05:52:32.532455 2946 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83d05c4-9848-49d0-89bd-67d758be0b93-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a83d05c4-9848-49d0-89bd-67d758be0b93" (UID: "a83d05c4-9848-49d0-89bd-67d758be0b93"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:52:32.533467 systemd[1]: var-lib-kubelet-pods-a83d05c4\x2d9848\x2d49d0\x2d89bd\x2d67d758be0b93-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 05:52:32.534583 kubelet[2946]: I1013 05:52:32.534510 2946 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a83d05c4-9848-49d0-89bd-67d758be0b93-kube-api-access-k9tsb" (OuterVolumeSpecName: "kube-api-access-k9tsb") pod "a83d05c4-9848-49d0-89bd-67d758be0b93" (UID: "a83d05c4-9848-49d0-89bd-67d758be0b93"). InnerVolumeSpecName "kube-api-access-k9tsb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:52:32.616816 kubelet[2946]: I1013 05:52:32.616790 2946 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a83d05c4-9848-49d0-89bd-67d758be0b93-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 13 05:52:32.616816 kubelet[2946]: I1013 05:52:32.616809 2946 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k9tsb\" (UniqueName: \"kubernetes.io/projected/a83d05c4-9848-49d0-89bd-67d758be0b93-kube-api-access-k9tsb\") on node \"localhost\" DevicePath \"\"" Oct 13 05:52:32.616816 kubelet[2946]: I1013 05:52:32.616815 2946 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a83d05c4-9848-49d0-89bd-67d758be0b93-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 13 05:52:32.781079 systemd[1]: Removed slice kubepods-besteffort-poda83d05c4_9848_49d0_89bd_67d758be0b93.slice - libcontainer container kubepods-besteffort-poda83d05c4_9848_49d0_89bd_67d758be0b93.slice. Oct 13 05:52:32.874314 systemd[1]: Created slice kubepods-besteffort-podbb614cbe_a112_42ac_9a64_678362ad281d.slice - libcontainer container kubepods-besteffort-podbb614cbe_a112_42ac_9a64_678362ad281d.slice. 
Oct 13 05:52:32.918033 kubelet[2946]: I1013 05:52:32.917995 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flnln\" (UniqueName: \"kubernetes.io/projected/bb614cbe-a112-42ac-9a64-678362ad281d-kube-api-access-flnln\") pod \"whisker-55c944ffd6-rltqd\" (UID: \"bb614cbe-a112-42ac-9a64-678362ad281d\") " pod="calico-system/whisker-55c944ffd6-rltqd" Oct 13 05:52:32.918246 kubelet[2946]: I1013 05:52:32.918146 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb614cbe-a112-42ac-9a64-678362ad281d-whisker-ca-bundle\") pod \"whisker-55c944ffd6-rltqd\" (UID: \"bb614cbe-a112-42ac-9a64-678362ad281d\") " pod="calico-system/whisker-55c944ffd6-rltqd" Oct 13 05:52:32.918246 kubelet[2946]: I1013 05:52:32.918165 2946 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bb614cbe-a112-42ac-9a64-678362ad281d-whisker-backend-key-pair\") pod \"whisker-55c944ffd6-rltqd\" (UID: \"bb614cbe-a112-42ac-9a64-678362ad281d\") " pod="calico-system/whisker-55c944ffd6-rltqd" Oct 13 05:52:33.180896 containerd[1643]: time="2025-10-13T05:52:33.180719101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55c944ffd6-rltqd,Uid:bb614cbe-a112-42ac-9a64-678362ad281d,Namespace:calico-system,Attempt:0,}" Oct 13 05:52:33.629700 systemd-networkd[1430]: cali8cf15663f6a: Link UP Oct 13 05:52:33.630388 systemd-networkd[1430]: cali8cf15663f6a: Gained carrier Oct 13 05:52:33.646150 containerd[1643]: 2025-10-13 05:52:33.205 [INFO][4002] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:52:33.646150 containerd[1643]: 2025-10-13 05:52:33.289 [INFO][4002] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--55c944ffd6--rltqd-eth0 whisker-55c944ffd6- calico-system bb614cbe-a112-42ac-9a64-678362ad281d 868 0 2025-10-13 05:52:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55c944ffd6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-55c944ffd6-rltqd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8cf15663f6a [] [] }} ContainerID="c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" Namespace="calico-system" Pod="whisker-55c944ffd6-rltqd" WorkloadEndpoint="localhost-k8s-whisker--55c944ffd6--rltqd-" Oct 13 05:52:33.646150 containerd[1643]: 2025-10-13 05:52:33.289 [INFO][4002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" Namespace="calico-system" Pod="whisker-55c944ffd6-rltqd" WorkloadEndpoint="localhost-k8s-whisker--55c944ffd6--rltqd-eth0" Oct 13 05:52:33.646150 containerd[1643]: 2025-10-13 05:52:33.570 [INFO][4012] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" HandleID="k8s-pod-network.c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" Workload="localhost-k8s-whisker--55c944ffd6--rltqd-eth0" Oct 13 05:52:33.646481 containerd[1643]: 2025-10-13 05:52:33.573 [INFO][4012] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" HandleID="k8s-pod-network.c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" Workload="localhost-k8s-whisker--55c944ffd6--rltqd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-55c944ffd6-rltqd", "timestamp":"2025-10-13 05:52:33.570947204 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:52:33.646481 containerd[1643]: 2025-10-13 05:52:33.574 [INFO][4012] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:52:33.646481 containerd[1643]: 2025-10-13 05:52:33.574 [INFO][4012] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:52:33.646481 containerd[1643]: 2025-10-13 05:52:33.574 [INFO][4012] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:52:33.646481 containerd[1643]: 2025-10-13 05:52:33.588 [INFO][4012] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" host="localhost" Oct 13 05:52:33.646481 containerd[1643]: 2025-10-13 05:52:33.602 [INFO][4012] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:52:33.646481 containerd[1643]: 2025-10-13 05:52:33.606 [INFO][4012] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:52:33.646481 containerd[1643]: 2025-10-13 05:52:33.609 [INFO][4012] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:52:33.646481 containerd[1643]: 2025-10-13 05:52:33.610 [INFO][4012] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:52:33.646481 containerd[1643]: 2025-10-13 05:52:33.610 [INFO][4012] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" host="localhost" Oct 13 05:52:33.647021 containerd[1643]: 2025-10-13 05:52:33.611 [INFO][4012] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98 Oct 13 05:52:33.647021 containerd[1643]: 2025-10-13 05:52:33.613 [INFO][4012] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" host="localhost" Oct 13 05:52:33.647021 containerd[1643]: 2025-10-13 05:52:33.616 [INFO][4012] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" host="localhost" Oct 13 05:52:33.647021 containerd[1643]: 2025-10-13 05:52:33.616 [INFO][4012] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" host="localhost" Oct 13 05:52:33.647021 containerd[1643]: 2025-10-13 05:52:33.616 [INFO][4012] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:52:33.647021 containerd[1643]: 2025-10-13 05:52:33.616 [INFO][4012] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" HandleID="k8s-pod-network.c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" Workload="localhost-k8s-whisker--55c944ffd6--rltqd-eth0" Oct 13 05:52:33.647334 containerd[1643]: 2025-10-13 05:52:33.618 [INFO][4002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" Namespace="calico-system" Pod="whisker-55c944ffd6-rltqd" WorkloadEndpoint="localhost-k8s-whisker--55c944ffd6--rltqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55c944ffd6--rltqd-eth0", GenerateName:"whisker-55c944ffd6-", Namespace:"calico-system", SelfLink:"", UID:"bb614cbe-a112-42ac-9a64-678362ad281d", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55c944ffd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-55c944ffd6-rltqd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8cf15663f6a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:52:33.647334 containerd[1643]: 2025-10-13 05:52:33.618 [INFO][4002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" Namespace="calico-system" Pod="whisker-55c944ffd6-rltqd" WorkloadEndpoint="localhost-k8s-whisker--55c944ffd6--rltqd-eth0" Oct 13 05:52:33.647582 containerd[1643]: 2025-10-13 05:52:33.618 [INFO][4002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8cf15663f6a ContainerID="c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" Namespace="calico-system" Pod="whisker-55c944ffd6-rltqd" WorkloadEndpoint="localhost-k8s-whisker--55c944ffd6--rltqd-eth0" Oct 13 05:52:33.647582 containerd[1643]: 2025-10-13 05:52:33.632 [INFO][4002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" Namespace="calico-system" Pod="whisker-55c944ffd6-rltqd" WorkloadEndpoint="localhost-k8s-whisker--55c944ffd6--rltqd-eth0" Oct 13 05:52:33.647733 containerd[1643]: 2025-10-13 05:52:33.633 [INFO][4002] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" Namespace="calico-system" Pod="whisker-55c944ffd6-rltqd" WorkloadEndpoint="localhost-k8s-whisker--55c944ffd6--rltqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55c944ffd6--rltqd-eth0", GenerateName:"whisker-55c944ffd6-", Namespace:"calico-system", SelfLink:"", UID:"bb614cbe-a112-42ac-9a64-678362ad281d", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 32, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55c944ffd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98", Pod:"whisker-55c944ffd6-rltqd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8cf15663f6a", MAC:"12:4d:d2:21:ea:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:52:33.647894 containerd[1643]: 2025-10-13 05:52:33.643 [INFO][4002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" Namespace="calico-system" Pod="whisker-55c944ffd6-rltqd" WorkloadEndpoint="localhost-k8s-whisker--55c944ffd6--rltqd-eth0" Oct 13 05:52:33.833284 containerd[1643]: time="2025-10-13T05:52:33.833216635Z" level=info msg="connecting to shim c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98" address="unix:///run/containerd/s/f1a3d0ea77af960b098f8f47e0390b93acabff2b3f98367bcf039e5223b978c7" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:33.859787 systemd[1]: Started cri-containerd-c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98.scope - libcontainer container c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98. 
Oct 13 05:52:33.883828 systemd-resolved[1561]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:52:33.947155 containerd[1643]: time="2025-10-13T05:52:33.946973831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55c944ffd6-rltqd,Uid:bb614cbe-a112-42ac-9a64-678362ad281d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98\"" Oct 13 05:52:33.959028 containerd[1643]: time="2025-10-13T05:52:33.958943965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:52:34.239075 systemd-networkd[1430]: vxlan.calico: Link UP Oct 13 05:52:34.239080 systemd-networkd[1430]: vxlan.calico: Gained carrier Oct 13 05:52:34.385321 kubelet[2946]: I1013 05:52:34.385292 2946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a83d05c4-9848-49d0-89bd-67d758be0b93" path="/var/lib/kubelet/pods/a83d05c4-9848-49d0-89bd-67d758be0b93/volumes" Oct 13 05:52:35.179027 containerd[1643]: time="2025-10-13T05:52:35.178570183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:35.179027 containerd[1643]: time="2025-10-13T05:52:35.178943470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Oct 13 05:52:35.179027 containerd[1643]: time="2025-10-13T05:52:35.179002324Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:35.180031 containerd[1643]: time="2025-10-13T05:52:35.180019770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:35.180421 
containerd[1643]: time="2025-10-13T05:52:35.180406055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.221438283s" Oct 13 05:52:35.180448 containerd[1643]: time="2025-10-13T05:52:35.180423433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Oct 13 05:52:35.182627 containerd[1643]: time="2025-10-13T05:52:35.182609057Z" level=info msg="CreateContainer within sandbox \"c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:52:35.188216 containerd[1643]: time="2025-10-13T05:52:35.187271186Z" level=info msg="Container d8eb8038756da3bfae1fc7fae7934faa56ba87202beb6290af0b95e52ba9517d: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:35.188110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount471530599.mount: Deactivated successfully. 
Oct 13 05:52:35.198695 systemd-networkd[1430]: cali8cf15663f6a: Gained IPv6LL Oct 13 05:52:35.199721 containerd[1643]: time="2025-10-13T05:52:35.198908698Z" level=info msg="CreateContainer within sandbox \"c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d8eb8038756da3bfae1fc7fae7934faa56ba87202beb6290af0b95e52ba9517d\"" Oct 13 05:52:35.199721 containerd[1643]: time="2025-10-13T05:52:35.199454764Z" level=info msg="StartContainer for \"d8eb8038756da3bfae1fc7fae7934faa56ba87202beb6290af0b95e52ba9517d\"" Oct 13 05:52:35.201768 containerd[1643]: time="2025-10-13T05:52:35.201734932Z" level=info msg="connecting to shim d8eb8038756da3bfae1fc7fae7934faa56ba87202beb6290af0b95e52ba9517d" address="unix:///run/containerd/s/f1a3d0ea77af960b098f8f47e0390b93acabff2b3f98367bcf039e5223b978c7" protocol=ttrpc version=3 Oct 13 05:52:35.215618 systemd[1]: Started cri-containerd-d8eb8038756da3bfae1fc7fae7934faa56ba87202beb6290af0b95e52ba9517d.scope - libcontainer container d8eb8038756da3bfae1fc7fae7934faa56ba87202beb6290af0b95e52ba9517d. 
Oct 13 05:52:35.247280 containerd[1643]: time="2025-10-13T05:52:35.247211684Z" level=info msg="StartContainer for \"d8eb8038756da3bfae1fc7fae7934faa56ba87202beb6290af0b95e52ba9517d\" returns successfully" Oct 13 05:52:35.248941 containerd[1643]: time="2025-10-13T05:52:35.248917522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:52:36.222762 systemd-networkd[1430]: vxlan.calico: Gained IPv6LL Oct 13 05:52:36.878931 kubelet[2946]: I1013 05:52:36.878371 2946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:52:37.087997 containerd[1643]: time="2025-10-13T05:52:37.087966779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1f348e0fda6b17c4fc4a7fdd9399ac7116bc1fcb6716eb0f74c96d231c537d6\" id:\"30d09203d5ef01b52371817c6b2c3c3b78b783b4498028341df6fffaad3f227c\" pid:4328 exited_at:{seconds:1760334757 nanos:87648569}" Oct 13 05:52:37.244726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3858652282.mount: Deactivated successfully. 
Oct 13 05:52:37.302038 containerd[1643]: time="2025-10-13T05:52:37.302012976Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1f348e0fda6b17c4fc4a7fdd9399ac7116bc1fcb6716eb0f74c96d231c537d6\" id:\"8c22969fc5d8464db8981c31b6b3f2b69906ad4d46e9447b4c4ae092359ae3cb\" pid:4352 exited_at:{seconds:1760334757 nanos:301312109}" Oct 13 05:52:37.302686 containerd[1643]: time="2025-10-13T05:52:37.302632327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:37.303382 containerd[1643]: time="2025-10-13T05:52:37.303263279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Oct 13 05:52:37.304260 containerd[1643]: time="2025-10-13T05:52:37.303764699Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:37.306628 containerd[1643]: time="2025-10-13T05:52:37.306601041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:37.307173 containerd[1643]: time="2025-10-13T05:52:37.307154001Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.058216486s" Oct 13 05:52:37.307173 containerd[1643]: time="2025-10-13T05:52:37.307168869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference 
\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Oct 13 05:52:37.309515 containerd[1643]: time="2025-10-13T05:52:37.309081899Z" level=info msg="CreateContainer within sandbox \"c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:52:37.314250 containerd[1643]: time="2025-10-13T05:52:37.312432926Z" level=info msg="Container b07165cccbef6f4bf07ef2bc2d0e68f2a66e2ac3642f780e99ef9335c624010b: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:37.319428 containerd[1643]: time="2025-10-13T05:52:37.319403608Z" level=info msg="CreateContainer within sandbox \"c9e6dcefd20024098fbaa4ace28e3793d18adad0a889471fdf9049c6fb58bc98\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b07165cccbef6f4bf07ef2bc2d0e68f2a66e2ac3642f780e99ef9335c624010b\"" Oct 13 05:52:37.324998 containerd[1643]: time="2025-10-13T05:52:37.324878518Z" level=info msg="StartContainer for \"b07165cccbef6f4bf07ef2bc2d0e68f2a66e2ac3642f780e99ef9335c624010b\"" Oct 13 05:52:37.326516 containerd[1643]: time="2025-10-13T05:52:37.326485152Z" level=info msg="connecting to shim b07165cccbef6f4bf07ef2bc2d0e68f2a66e2ac3642f780e99ef9335c624010b" address="unix:///run/containerd/s/f1a3d0ea77af960b098f8f47e0390b93acabff2b3f98367bcf039e5223b978c7" protocol=ttrpc version=3 Oct 13 05:52:37.354700 systemd[1]: Started cri-containerd-b07165cccbef6f4bf07ef2bc2d0e68f2a66e2ac3642f780e99ef9335c624010b.scope - libcontainer container b07165cccbef6f4bf07ef2bc2d0e68f2a66e2ac3642f780e99ef9335c624010b. 
Oct 13 05:52:37.392688 containerd[1643]: time="2025-10-13T05:52:37.392640751Z" level=info msg="StartContainer for \"b07165cccbef6f4bf07ef2bc2d0e68f2a66e2ac3642f780e99ef9335c624010b\" returns successfully" Oct 13 05:52:37.516753 kubelet[2946]: I1013 05:52:37.516332 2946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-55c944ffd6-rltqd" podStartSLOduration=2.167225827 podStartE2EDuration="5.516168753s" podCreationTimestamp="2025-10-13 05:52:32 +0000 UTC" firstStartedPulling="2025-10-13 05:52:33.958662835 +0000 UTC m=+35.716347571" lastFinishedPulling="2025-10-13 05:52:37.307605759 +0000 UTC m=+39.065290497" observedRunningTime="2025-10-13 05:52:37.515335572 +0000 UTC m=+39.273020318" watchObservedRunningTime="2025-10-13 05:52:37.516168753 +0000 UTC m=+39.273853496" Oct 13 05:52:38.400546 containerd[1643]: time="2025-10-13T05:52:38.400028166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6785666f76-tdh2j,Uid:5e24d0aa-f0fd-4652-9b7a-0891ae94aa4f,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:52:38.738586 systemd-networkd[1430]: cali0e04d0f3d6f: Link UP Oct 13 05:52:38.739505 systemd-networkd[1430]: cali0e04d0f3d6f: Gained carrier Oct 13 05:52:38.754704 containerd[1643]: 2025-10-13 05:52:38.478 [INFO][4404] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6785666f76--tdh2j-eth0 calico-apiserver-6785666f76- calico-apiserver 5e24d0aa-f0fd-4652-9b7a-0891ae94aa4f 798 0 2025-10-13 05:52:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6785666f76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6785666f76-tdh2j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0e04d0f3d6f [] 
[] }} ContainerID="04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" Namespace="calico-apiserver" Pod="calico-apiserver-6785666f76-tdh2j" WorkloadEndpoint="localhost-k8s-calico--apiserver--6785666f76--tdh2j-" Oct 13 05:52:38.754704 containerd[1643]: 2025-10-13 05:52:38.478 [INFO][4404] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" Namespace="calico-apiserver" Pod="calico-apiserver-6785666f76-tdh2j" WorkloadEndpoint="localhost-k8s-calico--apiserver--6785666f76--tdh2j-eth0" Oct 13 05:52:38.754704 containerd[1643]: 2025-10-13 05:52:38.667 [INFO][4417] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" HandleID="k8s-pod-network.04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" Workload="localhost-k8s-calico--apiserver--6785666f76--tdh2j-eth0" Oct 13 05:52:38.755344 containerd[1643]: 2025-10-13 05:52:38.670 [INFO][4417] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" HandleID="k8s-pod-network.04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" Workload="localhost-k8s-calico--apiserver--6785666f76--tdh2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103850), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6785666f76-tdh2j", "timestamp":"2025-10-13 05:52:38.667486176 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:52:38.755344 containerd[1643]: 2025-10-13 05:52:38.670 [INFO][4417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Oct 13 05:52:38.755344 containerd[1643]: 2025-10-13 05:52:38.671 [INFO][4417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:52:38.755344 containerd[1643]: 2025-10-13 05:52:38.672 [INFO][4417] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:52:38.755344 containerd[1643]: 2025-10-13 05:52:38.688 [INFO][4417] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" host="localhost" Oct 13 05:52:38.755344 containerd[1643]: 2025-10-13 05:52:38.718 [INFO][4417] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:52:38.755344 containerd[1643]: 2025-10-13 05:52:38.720 [INFO][4417] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:52:38.755344 containerd[1643]: 2025-10-13 05:52:38.721 [INFO][4417] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:52:38.755344 containerd[1643]: 2025-10-13 05:52:38.722 [INFO][4417] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:52:38.755344 containerd[1643]: 2025-10-13 05:52:38.722 [INFO][4417] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" host="localhost" Oct 13 05:52:38.757795 containerd[1643]: 2025-10-13 05:52:38.723 [INFO][4417] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb Oct 13 05:52:38.757795 containerd[1643]: 2025-10-13 05:52:38.726 [INFO][4417] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" host="localhost" Oct 13 05:52:38.757795 containerd[1643]: 2025-10-13 05:52:38.730 [INFO][4417] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" host="localhost" Oct 13 05:52:38.757795 containerd[1643]: 2025-10-13 05:52:38.730 [INFO][4417] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" host="localhost" Oct 13 05:52:38.757795 containerd[1643]: 2025-10-13 05:52:38.730 [INFO][4417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:52:38.757795 containerd[1643]: 2025-10-13 05:52:38.730 [INFO][4417] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" HandleID="k8s-pod-network.04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" Workload="localhost-k8s-calico--apiserver--6785666f76--tdh2j-eth0" Oct 13 05:52:38.759621 containerd[1643]: 2025-10-13 05:52:38.732 [INFO][4404] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" Namespace="calico-apiserver" Pod="calico-apiserver-6785666f76-tdh2j" WorkloadEndpoint="localhost-k8s-calico--apiserver--6785666f76--tdh2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6785666f76--tdh2j-eth0", GenerateName:"calico-apiserver-6785666f76-", Namespace:"calico-apiserver", SelfLink:"", UID:"5e24d0aa-f0fd-4652-9b7a-0891ae94aa4f", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"6785666f76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6785666f76-tdh2j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e04d0f3d6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:52:38.759683 containerd[1643]: 2025-10-13 05:52:38.732 [INFO][4404] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" Namespace="calico-apiserver" Pod="calico-apiserver-6785666f76-tdh2j" WorkloadEndpoint="localhost-k8s-calico--apiserver--6785666f76--tdh2j-eth0" Oct 13 05:52:38.759683 containerd[1643]: 2025-10-13 05:52:38.732 [INFO][4404] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e04d0f3d6f ContainerID="04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" Namespace="calico-apiserver" Pod="calico-apiserver-6785666f76-tdh2j" WorkloadEndpoint="localhost-k8s-calico--apiserver--6785666f76--tdh2j-eth0" Oct 13 05:52:38.759683 containerd[1643]: 2025-10-13 05:52:38.740 [INFO][4404] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" Namespace="calico-apiserver" Pod="calico-apiserver-6785666f76-tdh2j" WorkloadEndpoint="localhost-k8s-calico--apiserver--6785666f76--tdh2j-eth0" Oct 13 05:52:38.759738 
containerd[1643]: 2025-10-13 05:52:38.741 [INFO][4404] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" Namespace="calico-apiserver" Pod="calico-apiserver-6785666f76-tdh2j" WorkloadEndpoint="localhost-k8s-calico--apiserver--6785666f76--tdh2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6785666f76--tdh2j-eth0", GenerateName:"calico-apiserver-6785666f76-", Namespace:"calico-apiserver", SelfLink:"", UID:"5e24d0aa-f0fd-4652-9b7a-0891ae94aa4f", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6785666f76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb", Pod:"calico-apiserver-6785666f76-tdh2j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e04d0f3d6f", MAC:"0e:39:cb:13:87:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:52:38.759783 containerd[1643]: 2025-10-13 
05:52:38.748 [INFO][4404] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" Namespace="calico-apiserver" Pod="calico-apiserver-6785666f76-tdh2j" WorkloadEndpoint="localhost-k8s-calico--apiserver--6785666f76--tdh2j-eth0" Oct 13 05:52:38.791568 containerd[1643]: time="2025-10-13T05:52:38.790930340Z" level=info msg="connecting to shim 04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb" address="unix:///run/containerd/s/6681258c451ea237c01d8f09efa69ee136378e723fe0277e0854347ce86cff8c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:38.815683 systemd[1]: Started cri-containerd-04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb.scope - libcontainer container 04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb. Oct 13 05:52:38.823918 systemd-resolved[1561]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:52:38.852415 containerd[1643]: time="2025-10-13T05:52:38.852389503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6785666f76-tdh2j,Uid:5e24d0aa-f0fd-4652-9b7a-0891ae94aa4f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb\"" Oct 13 05:52:38.856409 containerd[1643]: time="2025-10-13T05:52:38.856182907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:52:39.384340 containerd[1643]: time="2025-10-13T05:52:39.384290160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f7646d5d5-2mn25,Uid:0eba1397-50d5-488f-94c0-3c1a3b8792a7,Namespace:calico-system,Attempt:0,}" Oct 13 05:52:39.485161 systemd-networkd[1430]: calif2a8203849b: Link UP Oct 13 05:52:39.486137 systemd-networkd[1430]: calif2a8203849b: Gained carrier Oct 13 05:52:39.500703 containerd[1643]: 2025-10-13 05:52:39.422 [INFO][4483] cni-plugin/plugin.go 340: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6f7646d5d5--2mn25-eth0 calico-kube-controllers-6f7646d5d5- calico-system 0eba1397-50d5-488f-94c0-3c1a3b8792a7 790 0 2025-10-13 05:52:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f7646d5d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6f7646d5d5-2mn25 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif2a8203849b [] [] }} ContainerID="db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" Namespace="calico-system" Pod="calico-kube-controllers-6f7646d5d5-2mn25" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f7646d5d5--2mn25-" Oct 13 05:52:39.500703 containerd[1643]: 2025-10-13 05:52:39.422 [INFO][4483] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" Namespace="calico-system" Pod="calico-kube-controllers-6f7646d5d5-2mn25" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f7646d5d5--2mn25-eth0" Oct 13 05:52:39.500703 containerd[1643]: 2025-10-13 05:52:39.444 [INFO][4494] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" HandleID="k8s-pod-network.db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" Workload="localhost-k8s-calico--kube--controllers--6f7646d5d5--2mn25-eth0" Oct 13 05:52:39.501052 containerd[1643]: 2025-10-13 05:52:39.444 [INFO][4494] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" HandleID="k8s-pod-network.db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" 
Workload="localhost-k8s-calico--kube--controllers--6f7646d5d5--2mn25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6f7646d5d5-2mn25", "timestamp":"2025-10-13 05:52:39.444143898 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:52:39.501052 containerd[1643]: 2025-10-13 05:52:39.444 [INFO][4494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:52:39.501052 containerd[1643]: 2025-10-13 05:52:39.444 [INFO][4494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:52:39.501052 containerd[1643]: 2025-10-13 05:52:39.444 [INFO][4494] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:52:39.501052 containerd[1643]: 2025-10-13 05:52:39.452 [INFO][4494] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" host="localhost" Oct 13 05:52:39.501052 containerd[1643]: 2025-10-13 05:52:39.455 [INFO][4494] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:52:39.501052 containerd[1643]: 2025-10-13 05:52:39.457 [INFO][4494] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:52:39.501052 containerd[1643]: 2025-10-13 05:52:39.459 [INFO][4494] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:52:39.501052 containerd[1643]: 2025-10-13 05:52:39.460 [INFO][4494] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:52:39.501052 containerd[1643]: 2025-10-13 05:52:39.460 [INFO][4494] ipam/ipam.go 1220: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" host="localhost" Oct 13 05:52:39.501231 containerd[1643]: 2025-10-13 05:52:39.461 [INFO][4494] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f Oct 13 05:52:39.501231 containerd[1643]: 2025-10-13 05:52:39.465 [INFO][4494] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" host="localhost" Oct 13 05:52:39.501231 containerd[1643]: 2025-10-13 05:52:39.479 [INFO][4494] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" host="localhost" Oct 13 05:52:39.501231 containerd[1643]: 2025-10-13 05:52:39.479 [INFO][4494] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" host="localhost" Oct 13 05:52:39.501231 containerd[1643]: 2025-10-13 05:52:39.479 [INFO][4494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:52:39.501231 containerd[1643]: 2025-10-13 05:52:39.479 [INFO][4494] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" HandleID="k8s-pod-network.db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" Workload="localhost-k8s-calico--kube--controllers--6f7646d5d5--2mn25-eth0" Oct 13 05:52:39.501330 containerd[1643]: 2025-10-13 05:52:39.481 [INFO][4483] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" Namespace="calico-system" Pod="calico-kube-controllers-6f7646d5d5-2mn25" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f7646d5d5--2mn25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f7646d5d5--2mn25-eth0", GenerateName:"calico-kube-controllers-6f7646d5d5-", Namespace:"calico-system", SelfLink:"", UID:"0eba1397-50d5-488f-94c0-3c1a3b8792a7", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f7646d5d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6f7646d5d5-2mn25", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif2a8203849b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:52:39.501372 containerd[1643]: 2025-10-13 05:52:39.482 [INFO][4483] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" Namespace="calico-system" Pod="calico-kube-controllers-6f7646d5d5-2mn25" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f7646d5d5--2mn25-eth0" Oct 13 05:52:39.501372 containerd[1643]: 2025-10-13 05:52:39.482 [INFO][4483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2a8203849b ContainerID="db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" Namespace="calico-system" Pod="calico-kube-controllers-6f7646d5d5-2mn25" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f7646d5d5--2mn25-eth0" Oct 13 05:52:39.501372 containerd[1643]: 2025-10-13 05:52:39.487 [INFO][4483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" Namespace="calico-system" Pod="calico-kube-controllers-6f7646d5d5-2mn25" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f7646d5d5--2mn25-eth0" Oct 13 05:52:39.501423 containerd[1643]: 2025-10-13 05:52:39.488 [INFO][4483] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" Namespace="calico-system" Pod="calico-kube-controllers-6f7646d5d5-2mn25" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f7646d5d5--2mn25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f7646d5d5--2mn25-eth0", GenerateName:"calico-kube-controllers-6f7646d5d5-", Namespace:"calico-system", SelfLink:"", UID:"0eba1397-50d5-488f-94c0-3c1a3b8792a7", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f7646d5d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f", Pod:"calico-kube-controllers-6f7646d5d5-2mn25", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif2a8203849b", MAC:"6a:db:0f:35:d0:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:52:39.501468 containerd[1643]: 2025-10-13 05:52:39.498 [INFO][4483] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" Namespace="calico-system" Pod="calico-kube-controllers-6f7646d5d5-2mn25" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f7646d5d5--2mn25-eth0" Oct 13 05:52:39.529074 containerd[1643]: time="2025-10-13T05:52:39.529040855Z" level=info msg="connecting to shim 
db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f" address="unix:///run/containerd/s/cadbbdcb5cc54d45d56185416a571d86f8982accff696a5909ebb03c3943258a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:39.554935 systemd[1]: Started cri-containerd-db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f.scope - libcontainer container db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f. Oct 13 05:52:39.568449 systemd-resolved[1561]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:52:39.609292 containerd[1643]: time="2025-10-13T05:52:39.609257862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f7646d5d5-2mn25,Uid:0eba1397-50d5-488f-94c0-3c1a3b8792a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f\"" Oct 13 05:52:40.254623 systemd-networkd[1430]: cali0e04d0f3d6f: Gained IPv6LL Oct 13 05:52:40.393497 containerd[1643]: time="2025-10-13T05:52:40.393418860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-f2t27,Uid:44debfcd-ef12-4ead-bdc8-ed45ef8b190e,Namespace:kube-system,Attempt:0,}" Oct 13 05:52:40.411226 containerd[1643]: time="2025-10-13T05:52:40.411099154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6785666f76-kmpv4,Uid:3393829f-fb4c-4d93-8969-62a461d00810,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:52:40.680387 systemd-networkd[1430]: cali8b053e67ca3: Link UP Oct 13 05:52:40.681894 systemd-networkd[1430]: cali8b053e67ca3: Gained carrier Oct 13 05:52:40.701316 containerd[1643]: 2025-10-13 05:52:40.456 [INFO][4575] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6785666f76--kmpv4-eth0 calico-apiserver-6785666f76- calico-apiserver 3393829f-fb4c-4d93-8969-62a461d00810 796 0 2025-10-13 05:52:14 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6785666f76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6785666f76-kmpv4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8b053e67ca3 [] [] }} ContainerID="70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" Namespace="calico-apiserver" Pod="calico-apiserver-6785666f76-kmpv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6785666f76--kmpv4-" Oct 13 05:52:40.701316 containerd[1643]: 2025-10-13 05:52:40.456 [INFO][4575] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" Namespace="calico-apiserver" Pod="calico-apiserver-6785666f76-kmpv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6785666f76--kmpv4-eth0" Oct 13 05:52:40.701316 containerd[1643]: 2025-10-13 05:52:40.595 [INFO][4590] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" HandleID="k8s-pod-network.70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" Workload="localhost-k8s-calico--apiserver--6785666f76--kmpv4-eth0" Oct 13 05:52:40.703225 containerd[1643]: 2025-10-13 05:52:40.595 [INFO][4590] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" HandleID="k8s-pod-network.70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" Workload="localhost-k8s-calico--apiserver--6785666f76--kmpv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd0a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6785666f76-kmpv4", "timestamp":"2025-10-13 05:52:40.595002706 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:52:40.703225 containerd[1643]: 2025-10-13 05:52:40.595 [INFO][4590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:52:40.703225 containerd[1643]: 2025-10-13 05:52:40.595 [INFO][4590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:52:40.703225 containerd[1643]: 2025-10-13 05:52:40.595 [INFO][4590] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:52:40.703225 containerd[1643]: 2025-10-13 05:52:40.620 [INFO][4590] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" host="localhost" Oct 13 05:52:40.703225 containerd[1643]: 2025-10-13 05:52:40.627 [INFO][4590] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:52:40.703225 containerd[1643]: 2025-10-13 05:52:40.640 [INFO][4590] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:52:40.703225 containerd[1643]: 2025-10-13 05:52:40.641 [INFO][4590] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:52:40.703225 containerd[1643]: 2025-10-13 05:52:40.647 [INFO][4590] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:52:40.703225 containerd[1643]: 2025-10-13 05:52:40.647 [INFO][4590] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" host="localhost" Oct 13 05:52:40.713866 containerd[1643]: 2025-10-13 05:52:40.651 [INFO][4590] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f Oct 13 05:52:40.713866 containerd[1643]: 2025-10-13 05:52:40.658 [INFO][4590] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" host="localhost" Oct 13 05:52:40.713866 containerd[1643]: 2025-10-13 05:52:40.667 [INFO][4590] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" host="localhost" Oct 13 05:52:40.713866 containerd[1643]: 2025-10-13 05:52:40.667 [INFO][4590] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" host="localhost" Oct 13 05:52:40.713866 containerd[1643]: 2025-10-13 05:52:40.667 [INFO][4590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:52:40.713866 containerd[1643]: 2025-10-13 05:52:40.667 [INFO][4590] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" HandleID="k8s-pod-network.70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" Workload="localhost-k8s-calico--apiserver--6785666f76--kmpv4-eth0" Oct 13 05:52:40.725158 containerd[1643]: 2025-10-13 05:52:40.674 [INFO][4575] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" Namespace="calico-apiserver" Pod="calico-apiserver-6785666f76-kmpv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6785666f76--kmpv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6785666f76--kmpv4-eth0", GenerateName:"calico-apiserver-6785666f76-", Namespace:"calico-apiserver", SelfLink:"", UID:"3393829f-fb4c-4d93-8969-62a461d00810", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6785666f76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6785666f76-kmpv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b053e67ca3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:52:40.725324 containerd[1643]: 2025-10-13 05:52:40.676 [INFO][4575] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" Namespace="calico-apiserver" Pod="calico-apiserver-6785666f76-kmpv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6785666f76--kmpv4-eth0" Oct 13 05:52:40.725324 containerd[1643]: 2025-10-13 05:52:40.676 [INFO][4575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b053e67ca3 ContainerID="70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" Namespace="calico-apiserver" Pod="calico-apiserver-6785666f76-kmpv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6785666f76--kmpv4-eth0" Oct 13 05:52:40.725324 containerd[1643]: 2025-10-13 05:52:40.682 [INFO][4575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" Namespace="calico-apiserver" Pod="calico-apiserver-6785666f76-kmpv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6785666f76--kmpv4-eth0" Oct 13 05:52:40.725488 containerd[1643]: 2025-10-13 05:52:40.684 [INFO][4575] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" Namespace="calico-apiserver" Pod="calico-apiserver-6785666f76-kmpv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6785666f76--kmpv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6785666f76--kmpv4-eth0", 
GenerateName:"calico-apiserver-6785666f76-", Namespace:"calico-apiserver", SelfLink:"", UID:"3393829f-fb4c-4d93-8969-62a461d00810", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6785666f76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f", Pod:"calico-apiserver-6785666f76-kmpv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b053e67ca3", MAC:"06:ac:43:1a:79:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:52:40.725765 containerd[1643]: 2025-10-13 05:52:40.693 [INFO][4575] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" Namespace="calico-apiserver" Pod="calico-apiserver-6785666f76-kmpv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6785666f76--kmpv4-eth0" Oct 13 05:52:40.783486 systemd-networkd[1430]: calif7c7175e06c: Link UP Oct 13 05:52:40.783664 systemd-networkd[1430]: calif7c7175e06c: Gained carrier Oct 13 05:52:40.813033 containerd[1643]: 2025-10-13 05:52:40.453 [INFO][4561] cni-plugin/plugin.go 340: Calico CNI 
found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--f2t27-eth0 coredns-66bc5c9577- kube-system 44debfcd-ef12-4ead-bdc8-ed45ef8b190e 794 0 2025-10-13 05:52:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-f2t27 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif7c7175e06c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" Namespace="kube-system" Pod="coredns-66bc5c9577-f2t27" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--f2t27-" Oct 13 05:52:40.813033 containerd[1643]: 2025-10-13 05:52:40.453 [INFO][4561] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" Namespace="kube-system" Pod="coredns-66bc5c9577-f2t27" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--f2t27-eth0" Oct 13 05:52:40.813033 containerd[1643]: 2025-10-13 05:52:40.598 [INFO][4588] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" HandleID="k8s-pod-network.5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" Workload="localhost-k8s-coredns--66bc5c9577--f2t27-eth0" Oct 13 05:52:40.813170 containerd[1643]: 2025-10-13 05:52:40.598 [INFO][4588] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" HandleID="k8s-pod-network.5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" Workload="localhost-k8s-coredns--66bc5c9577--f2t27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd610), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-f2t27", "timestamp":"2025-10-13 05:52:40.598106752 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:52:40.813170 containerd[1643]: 2025-10-13 05:52:40.598 [INFO][4588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:52:40.813170 containerd[1643]: 2025-10-13 05:52:40.667 [INFO][4588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:52:40.813170 containerd[1643]: 2025-10-13 05:52:40.667 [INFO][4588] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:52:40.813170 containerd[1643]: 2025-10-13 05:52:40.714 [INFO][4588] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" host="localhost" Oct 13 05:52:40.813170 containerd[1643]: 2025-10-13 05:52:40.727 [INFO][4588] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:52:40.813170 containerd[1643]: 2025-10-13 05:52:40.735 [INFO][4588] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:52:40.813170 containerd[1643]: 2025-10-13 05:52:40.739 [INFO][4588] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:52:40.813170 containerd[1643]: 2025-10-13 05:52:40.742 [INFO][4588] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:52:40.813170 containerd[1643]: 2025-10-13 05:52:40.742 [INFO][4588] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" host="localhost" Oct 13 05:52:40.821132 
containerd[1643]: 2025-10-13 05:52:40.746 [INFO][4588] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732 Oct 13 05:52:40.821132 containerd[1643]: 2025-10-13 05:52:40.755 [INFO][4588] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" host="localhost" Oct 13 05:52:40.821132 containerd[1643]: 2025-10-13 05:52:40.768 [INFO][4588] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" host="localhost" Oct 13 05:52:40.821132 containerd[1643]: 2025-10-13 05:52:40.768 [INFO][4588] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" host="localhost" Oct 13 05:52:40.821132 containerd[1643]: 2025-10-13 05:52:40.768 [INFO][4588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:52:40.821132 containerd[1643]: 2025-10-13 05:52:40.768 [INFO][4588] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" HandleID="k8s-pod-network.5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" Workload="localhost-k8s-coredns--66bc5c9577--f2t27-eth0" Oct 13 05:52:40.821241 containerd[1643]: 2025-10-13 05:52:40.771 [INFO][4561] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" Namespace="kube-system" Pod="coredns-66bc5c9577-f2t27" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--f2t27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--f2t27-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"44debfcd-ef12-4ead-bdc8-ed45ef8b190e", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-f2t27", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7c7175e06c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:52:40.821241 containerd[1643]: 2025-10-13 05:52:40.771 [INFO][4561] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" Namespace="kube-system" Pod="coredns-66bc5c9577-f2t27" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--f2t27-eth0" Oct 13 05:52:40.821241 containerd[1643]: 2025-10-13 05:52:40.771 [INFO][4561] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7c7175e06c ContainerID="5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" Namespace="kube-system" Pod="coredns-66bc5c9577-f2t27" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--f2t27-eth0" Oct 13 05:52:40.821241 containerd[1643]: 2025-10-13 05:52:40.782 [INFO][4561] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" Namespace="kube-system" Pod="coredns-66bc5c9577-f2t27" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--f2t27-eth0" Oct 13 05:52:40.821241 containerd[1643]: 2025-10-13 05:52:40.791 [INFO][4561] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" Namespace="kube-system" Pod="coredns-66bc5c9577-f2t27" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--f2t27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--f2t27-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"44debfcd-ef12-4ead-bdc8-ed45ef8b190e", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732", Pod:"coredns-66bc5c9577-f2t27", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7c7175e06c", MAC:"42:c9:6f:47:63:60", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:52:40.821241 containerd[1643]: 2025-10-13 05:52:40.808 [INFO][4561] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" Namespace="kube-system" Pod="coredns-66bc5c9577-f2t27" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--f2t27-eth0" Oct 13 05:52:40.830053 containerd[1643]: time="2025-10-13T05:52:40.824921868Z" level=info msg="connecting to shim 70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f" address="unix:///run/containerd/s/285becfdc4826a23cf327f766d6a70433ca3f27e844640588145ad563156cbf8" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:40.868816 systemd[1]: Started cri-containerd-70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f.scope - libcontainer container 70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f. Oct 13 05:52:40.881517 systemd-resolved[1561]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:52:40.914717 containerd[1643]: time="2025-10-13T05:52:40.913726428Z" level=info msg="connecting to shim 5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732" address="unix:///run/containerd/s/101a2af0a459e7c2b8aa2eb55f476252c3ece8b57796eeb6dc68f07aa8a8ae51" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:40.943512 systemd[1]: Started cri-containerd-5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732.scope - libcontainer container 5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732. 
Oct 13 05:52:40.979626 systemd-resolved[1561]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:52:41.086943 containerd[1643]: time="2025-10-13T05:52:41.086917997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-f2t27,Uid:44debfcd-ef12-4ead-bdc8-ed45ef8b190e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732\"" Oct 13 05:52:41.087676 systemd-networkd[1430]: calif2a8203849b: Gained IPv6LL Oct 13 05:52:41.154416 containerd[1643]: time="2025-10-13T05:52:41.154391020Z" level=info msg="CreateContainer within sandbox \"5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:52:41.187843 containerd[1643]: time="2025-10-13T05:52:41.187773673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6785666f76-kmpv4,Uid:3393829f-fb4c-4d93-8969-62a461d00810,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f\"" Oct 13 05:52:41.422687 containerd[1643]: time="2025-10-13T05:52:41.422660774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-q72x8,Uid:e9d214b6-f41a-4546-9795-8f0393cb97df,Namespace:calico-system,Attempt:0,}" Oct 13 05:52:41.880420 containerd[1643]: time="2025-10-13T05:52:41.880394731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nl4pb,Uid:7eaeee97-3247-46a8-9ef2-3b297a8c1e04,Namespace:kube-system,Attempt:0,}" Oct 13 05:52:42.015894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2804685857.mount: Deactivated successfully. 
Oct 13 05:52:42.021215 containerd[1643]: time="2025-10-13T05:52:42.021177541Z" level=info msg="Container 488c51de26c25370552541176f50d3c5020bbd70db75e808bb1b98e8dc0b81e9: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:42.280739 containerd[1643]: time="2025-10-13T05:52:42.280671458Z" level=info msg="CreateContainer within sandbox \"5582b77a1f1e815662c81689148cf48e61a766b9d7e73cc6affabe000ed8d732\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"488c51de26c25370552541176f50d3c5020bbd70db75e808bb1b98e8dc0b81e9\"" Oct 13 05:52:42.283076 containerd[1643]: time="2025-10-13T05:52:42.282057255Z" level=info msg="StartContainer for \"488c51de26c25370552541176f50d3c5020bbd70db75e808bb1b98e8dc0b81e9\"" Oct 13 05:52:42.286424 systemd-networkd[1430]: cali52093fb6470: Link UP Oct 13 05:52:42.290071 systemd-networkd[1430]: cali52093fb6470: Gained carrier Oct 13 05:52:42.342983 containerd[1643]: time="2025-10-13T05:52:42.336623538Z" level=info msg="connecting to shim 488c51de26c25370552541176f50d3c5020bbd70db75e808bb1b98e8dc0b81e9" address="unix:///run/containerd/s/101a2af0a459e7c2b8aa2eb55f476252c3ece8b57796eeb6dc68f07aa8a8ae51" protocol=ttrpc version=3 Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.072 [INFO][4714] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--nl4pb-eth0 coredns-66bc5c9577- kube-system 7eaeee97-3247-46a8-9ef2-3b297a8c1e04 786 0 2025-10-13 05:52:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-nl4pb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali52093fb6470 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} 
ContainerID="d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" Namespace="kube-system" Pod="coredns-66bc5c9577-nl4pb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nl4pb-" Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.073 [INFO][4714] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" Namespace="kube-system" Pod="coredns-66bc5c9577-nl4pb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nl4pb-eth0" Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.100 [INFO][4741] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" HandleID="k8s-pod-network.d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" Workload="localhost-k8s-coredns--66bc5c9577--nl4pb-eth0" Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.100 [INFO][4741] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" HandleID="k8s-pod-network.d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" Workload="localhost-k8s-coredns--66bc5c9577--nl4pb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-nl4pb", "timestamp":"2025-10-13 05:52:42.100510633 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.100 [INFO][4741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.100 [INFO][4741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.100 [INFO][4741] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.115 [INFO][4741] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" host="localhost" Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.126 [INFO][4741] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.144 [INFO][4741] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.146 [INFO][4741] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.161 [INFO][4741] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.266 [INFO][4741] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" host="localhost" Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.268 [INFO][4741] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95 Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.273 [INFO][4741] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" host="localhost" Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.277 [INFO][4741] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" host="localhost" Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.278 [INFO][4741] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" host="localhost" Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.278 [INFO][4741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:52:42.370276 containerd[1643]: 2025-10-13 05:52:42.278 [INFO][4741] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" HandleID="k8s-pod-network.d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" Workload="localhost-k8s-coredns--66bc5c9577--nl4pb-eth0" Oct 13 05:52:42.352141 systemd[1]: Started cri-containerd-488c51de26c25370552541176f50d3c5020bbd70db75e808bb1b98e8dc0b81e9.scope - libcontainer container 488c51de26c25370552541176f50d3c5020bbd70db75e808bb1b98e8dc0b81e9. 
Oct 13 05:52:42.378134 containerd[1643]: 2025-10-13 05:52:42.281 [INFO][4714] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" Namespace="kube-system" Pod="coredns-66bc5c9577-nl4pb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nl4pb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--nl4pb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7eaeee97-3247-46a8-9ef2-3b297a8c1e04", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-nl4pb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52093fb6470", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:52:42.378134 containerd[1643]: 2025-10-13 05:52:42.281 [INFO][4714] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" Namespace="kube-system" Pod="coredns-66bc5c9577-nl4pb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nl4pb-eth0" Oct 13 05:52:42.378134 containerd[1643]: 2025-10-13 05:52:42.281 [INFO][4714] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52093fb6470 ContainerID="d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" Namespace="kube-system" Pod="coredns-66bc5c9577-nl4pb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nl4pb-eth0" Oct 13 05:52:42.378134 containerd[1643]: 2025-10-13 05:52:42.288 [INFO][4714] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" Namespace="kube-system" Pod="coredns-66bc5c9577-nl4pb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nl4pb-eth0" Oct 13 05:52:42.378134 containerd[1643]: 2025-10-13 05:52:42.289 [INFO][4714] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" Namespace="kube-system" Pod="coredns-66bc5c9577-nl4pb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nl4pb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--nl4pb-eth0", 
GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7eaeee97-3247-46a8-9ef2-3b297a8c1e04", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95", Pod:"coredns-66bc5c9577-nl4pb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52093fb6470", MAC:"2a:29:95:47:1e:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:52:42.378134 containerd[1643]: 
2025-10-13 05:52:42.344 [INFO][4714] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" Namespace="kube-system" Pod="coredns-66bc5c9577-nl4pb" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nl4pb-eth0" Oct 13 05:52:42.391636 containerd[1643]: time="2025-10-13T05:52:42.391277246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hqlq,Uid:5c9445f8-2a83-4f1a-ae81-7a18fbffe769,Namespace:calico-system,Attempt:0,}" Oct 13 05:52:42.394910 systemd-networkd[1430]: calib4f5e3949dd: Link UP Oct 13 05:52:42.395825 systemd-networkd[1430]: calib4f5e3949dd: Gained carrier Oct 13 05:52:42.405912 containerd[1643]: time="2025-10-13T05:52:42.405880092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.095 [INFO][4725] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--854f97d977--q72x8-eth0 goldmane-854f97d977- calico-system e9d214b6-f41a-4546-9795-8f0393cb97df 797 0 2025-10-13 05:52:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:854f97d977 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-854f97d977-q72x8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib4f5e3949dd [] [] }} ContainerID="7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" Namespace="calico-system" Pod="goldmane-854f97d977-q72x8" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--q72x8-" Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.095 [INFO][4725] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" Namespace="calico-system" Pod="goldmane-854f97d977-q72x8" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--q72x8-eth0" Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.136 [INFO][4749] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" HandleID="k8s-pod-network.7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" Workload="localhost-k8s-goldmane--854f97d977--q72x8-eth0" Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.140 [INFO][4749] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" HandleID="k8s-pod-network.7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" Workload="localhost-k8s-goldmane--854f97d977--q72x8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000259b00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-854f97d977-q72x8", "timestamp":"2025-10-13 05:52:42.136393984 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.140 [INFO][4749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.278 [INFO][4749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.278 [INFO][4749] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.290 [INFO][4749] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" host="localhost" Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.360 [INFO][4749] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.363 [INFO][4749] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.364 [INFO][4749] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.367 [INFO][4749] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.367 [INFO][4749] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" host="localhost" Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.369 [INFO][4749] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920 Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.371 [INFO][4749] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" host="localhost" Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.382 [INFO][4749] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" host="localhost" Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.382 [INFO][4749] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" host="localhost" Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.382 [INFO][4749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:52:42.421453 containerd[1643]: 2025-10-13 05:52:42.382 [INFO][4749] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" HandleID="k8s-pod-network.7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" Workload="localhost-k8s-goldmane--854f97d977--q72x8-eth0" Oct 13 05:52:42.425353 containerd[1643]: 2025-10-13 05:52:42.384 [INFO][4725] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" Namespace="calico-system" Pod="goldmane-854f97d977-q72x8" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--q72x8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--854f97d977--q72x8-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"e9d214b6-f41a-4546-9795-8f0393cb97df", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-854f97d977-q72x8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib4f5e3949dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:52:42.425353 containerd[1643]: 2025-10-13 05:52:42.391 [INFO][4725] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" Namespace="calico-system" Pod="goldmane-854f97d977-q72x8" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--q72x8-eth0" Oct 13 05:52:42.425353 containerd[1643]: 2025-10-13 05:52:42.391 [INFO][4725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4f5e3949dd ContainerID="7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" Namespace="calico-system" Pod="goldmane-854f97d977-q72x8" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--q72x8-eth0" Oct 13 05:52:42.425353 containerd[1643]: 2025-10-13 05:52:42.393 [INFO][4725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" Namespace="calico-system" Pod="goldmane-854f97d977-q72x8" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--q72x8-eth0" Oct 13 05:52:42.425353 containerd[1643]: 2025-10-13 05:52:42.396 [INFO][4725] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" Namespace="calico-system" Pod="goldmane-854f97d977-q72x8" 
WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--q72x8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--854f97d977--q72x8-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"e9d214b6-f41a-4546-9795-8f0393cb97df", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920", Pod:"goldmane-854f97d977-q72x8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib4f5e3949dd", MAC:"f2:0c:dc:3b:c5:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:52:42.425353 containerd[1643]: 2025-10-13 05:52:42.414 [INFO][4725] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" Namespace="calico-system" Pod="goldmane-854f97d977-q72x8" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--q72x8-eth0" Oct 13 05:52:42.430690 systemd-networkd[1430]: cali8b053e67ca3: Gained IPv6LL Oct 13 05:52:42.439121 
containerd[1643]: time="2025-10-13T05:52:42.439092097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Oct 13 05:52:42.495001 containerd[1643]: time="2025-10-13T05:52:42.494977125Z" level=info msg="StartContainer for \"488c51de26c25370552541176f50d3c5020bbd70db75e808bb1b98e8dc0b81e9\" returns successfully" Oct 13 05:52:42.558547 containerd[1643]: time="2025-10-13T05:52:42.556839108Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:42.570580 containerd[1643]: time="2025-10-13T05:52:42.570557914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:42.571650 containerd[1643]: time="2025-10-13T05:52:42.571630218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.715415485s" Oct 13 05:52:42.571704 containerd[1643]: time="2025-10-13T05:52:42.571650915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:52:42.574631 containerd[1643]: time="2025-10-13T05:52:42.572521450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 05:52:42.580547 containerd[1643]: time="2025-10-13T05:52:42.579189546Z" level=info msg="CreateContainer within sandbox \"04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:52:42.586546 containerd[1643]: time="2025-10-13T05:52:42.585501400Z" level=info msg="connecting to shim d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95" address="unix:///run/containerd/s/04556724b45e065a5e9bfdd2c3b443571497dd2496026a89c9eefa0f9b0ee9c5" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:42.594038 containerd[1643]: time="2025-10-13T05:52:42.594014362Z" level=info msg="Container 62d0e3ef10b0c2f6aefce237c8a18627d9e0b8186da2a64efb8d25a5b94132df: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:42.600428 containerd[1643]: time="2025-10-13T05:52:42.600405246Z" level=info msg="connecting to shim 7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920" address="unix:///run/containerd/s/69388e186b5e3b27925b5586ba88329d1f2bda64e1f1d250dd4b553f5a0d85ef" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:42.625648 systemd[1]: Started cri-containerd-d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95.scope - libcontainer container d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95. 
Oct 13 05:52:42.638566 containerd[1643]: time="2025-10-13T05:52:42.637657740Z" level=info msg="CreateContainer within sandbox \"04be058dcbdee96aa583b012502f674e673ea815ddeb61c50372468802e66ffb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"62d0e3ef10b0c2f6aefce237c8a18627d9e0b8186da2a64efb8d25a5b94132df\"" Oct 13 05:52:42.639872 containerd[1643]: time="2025-10-13T05:52:42.639361209Z" level=info msg="StartContainer for \"62d0e3ef10b0c2f6aefce237c8a18627d9e0b8186da2a64efb8d25a5b94132df\"" Oct 13 05:52:42.640563 containerd[1643]: time="2025-10-13T05:52:42.639959673Z" level=info msg="connecting to shim 62d0e3ef10b0c2f6aefce237c8a18627d9e0b8186da2a64efb8d25a5b94132df" address="unix:///run/containerd/s/6681258c451ea237c01d8f09efa69ee136378e723fe0277e0854347ce86cff8c" protocol=ttrpc version=3 Oct 13 05:52:42.676721 systemd[1]: Started cri-containerd-7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920.scope - libcontainer container 7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920. Oct 13 05:52:42.680159 systemd-resolved[1561]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:52:42.686659 systemd-networkd[1430]: calif7c7175e06c: Gained IPv6LL Oct 13 05:52:42.689930 systemd[1]: Started cri-containerd-62d0e3ef10b0c2f6aefce237c8a18627d9e0b8186da2a64efb8d25a5b94132df.scope - libcontainer container 62d0e3ef10b0c2f6aefce237c8a18627d9e0b8186da2a64efb8d25a5b94132df. 
Oct 13 05:52:42.828516 systemd-networkd[1430]: calif51655f6161: Link UP Oct 13 05:52:42.834850 systemd-networkd[1430]: calif51655f6161: Gained carrier Oct 13 05:52:42.844582 systemd-resolved[1561]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:52:42.856508 containerd[1643]: time="2025-10-13T05:52:42.855825664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nl4pb,Uid:7eaeee97-3247-46a8-9ef2-3b297a8c1e04,Namespace:kube-system,Attempt:0,} returns sandbox id \"d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95\"" Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.680 [INFO][4797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6hqlq-eth0 csi-node-driver- calico-system 5c9445f8-2a83-4f1a-ae81-7a18fbffe769 697 0 2025-10-13 05:52:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:f8549cf5c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6hqlq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif51655f6161 [] [] }} ContainerID="bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" Namespace="calico-system" Pod="csi-node-driver-6hqlq" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hqlq-" Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.680 [INFO][4797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" Namespace="calico-system" Pod="csi-node-driver-6hqlq" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hqlq-eth0" Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.739 [INFO][4897] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" HandleID="k8s-pod-network.bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" Workload="localhost-k8s-csi--node--driver--6hqlq-eth0" Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.739 [INFO][4897] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" HandleID="k8s-pod-network.bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" Workload="localhost-k8s-csi--node--driver--6hqlq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bde10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6hqlq", "timestamp":"2025-10-13 05:52:42.738545765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.739 [INFO][4897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.740 [INFO][4897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.740 [INFO][4897] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.752 [INFO][4897] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" host="localhost" Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.766 [INFO][4897] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.785 [INFO][4897] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.787 [INFO][4897] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.788 [INFO][4897] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.788 [INFO][4897] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" host="localhost" Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.791 [INFO][4897] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766 Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.803 [INFO][4897] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" host="localhost" Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.810 [INFO][4897] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" host="localhost" Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.810 [INFO][4897] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" host="localhost" Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.811 [INFO][4897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:52:42.874204 containerd[1643]: 2025-10-13 05:52:42.811 [INFO][4897] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" HandleID="k8s-pod-network.bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" Workload="localhost-k8s-csi--node--driver--6hqlq-eth0" Oct 13 05:52:42.877248 containerd[1643]: 2025-10-13 05:52:42.821 [INFO][4797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" Namespace="calico-system" Pod="csi-node-driver-6hqlq" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hqlq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6hqlq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5c9445f8-2a83-4f1a-ae81-7a18fbffe769", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6hqlq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif51655f6161", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:52:42.877248 containerd[1643]: 2025-10-13 05:52:42.821 [INFO][4797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" Namespace="calico-system" Pod="csi-node-driver-6hqlq" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hqlq-eth0" Oct 13 05:52:42.877248 containerd[1643]: 2025-10-13 05:52:42.822 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif51655f6161 ContainerID="bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" Namespace="calico-system" Pod="csi-node-driver-6hqlq" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hqlq-eth0" Oct 13 05:52:42.877248 containerd[1643]: 2025-10-13 05:52:42.839 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" Namespace="calico-system" Pod="csi-node-driver-6hqlq" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hqlq-eth0" Oct 13 05:52:42.877248 containerd[1643]: 2025-10-13 05:52:42.840 [INFO][4797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" 
Namespace="calico-system" Pod="csi-node-driver-6hqlq" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hqlq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6hqlq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5c9445f8-2a83-4f1a-ae81-7a18fbffe769", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 52, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766", Pod:"csi-node-driver-6hqlq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif51655f6161", MAC:"02:7f:63:e8:68:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:52:42.877248 containerd[1643]: 2025-10-13 05:52:42.863 [INFO][4797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" Namespace="calico-system" Pod="csi-node-driver-6hqlq" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6hqlq-eth0" Oct 13 05:52:42.878873 kubelet[2946]: I1013 05:52:42.868007 2946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-f2t27" podStartSLOduration=37.867994044 podStartE2EDuration="37.867994044s" podCreationTimestamp="2025-10-13 05:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:52:42.866917184 +0000 UTC m=+44.624601925" watchObservedRunningTime="2025-10-13 05:52:42.867994044 +0000 UTC m=+44.625678786" Oct 13 05:52:42.882572 containerd[1643]: time="2025-10-13T05:52:42.882459645Z" level=info msg="CreateContainer within sandbox \"d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:52:42.891313 containerd[1643]: time="2025-10-13T05:52:42.891276349Z" level=info msg="Container 4ab888cd88713c01aabbf5e5ba467fae34f32bcb85b69c57832ce18d368083bb: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:42.898578 containerd[1643]: time="2025-10-13T05:52:42.898340416Z" level=info msg="CreateContainer within sandbox \"d12df7fabaabb04c07ddbf45e27b384e6bf7066e0d8c858e7bf29fa4866cab95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ab888cd88713c01aabbf5e5ba467fae34f32bcb85b69c57832ce18d368083bb\"" Oct 13 05:52:42.900903 containerd[1643]: time="2025-10-13T05:52:42.899660605Z" level=info msg="StartContainer for \"4ab888cd88713c01aabbf5e5ba467fae34f32bcb85b69c57832ce18d368083bb\"" Oct 13 05:52:42.911606 containerd[1643]: time="2025-10-13T05:52:42.909968726Z" level=info msg="connecting to shim bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766" address="unix:///run/containerd/s/3544d74250c33969f98ddaa0c62bf492a5a7b111192fb8b0275ba3f1b0192596" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:52:42.925694 containerd[1643]: 
time="2025-10-13T05:52:42.925588978Z" level=info msg="connecting to shim 4ab888cd88713c01aabbf5e5ba467fae34f32bcb85b69c57832ce18d368083bb" address="unix:///run/containerd/s/04556724b45e065a5e9bfdd2c3b443571497dd2496026a89c9eefa0f9b0ee9c5" protocol=ttrpc version=3 Oct 13 05:52:42.934792 systemd[1]: Started cri-containerd-bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766.scope - libcontainer container bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766. Oct 13 05:52:42.954878 systemd[1]: Started cri-containerd-4ab888cd88713c01aabbf5e5ba467fae34f32bcb85b69c57832ce18d368083bb.scope - libcontainer container 4ab888cd88713c01aabbf5e5ba467fae34f32bcb85b69c57832ce18d368083bb. Oct 13 05:52:42.958679 containerd[1643]: time="2025-10-13T05:52:42.958014422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-q72x8,Uid:e9d214b6-f41a-4546-9795-8f0393cb97df,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920\"" Oct 13 05:52:42.970935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4081984226.mount: Deactivated successfully. 
Oct 13 05:52:42.987525 systemd-resolved[1561]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:52:43.038176 containerd[1643]: time="2025-10-13T05:52:43.038035383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hqlq,Uid:5c9445f8-2a83-4f1a-ae81-7a18fbffe769,Namespace:calico-system,Attempt:0,} returns sandbox id \"bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766\"" Oct 13 05:52:43.040593 containerd[1643]: time="2025-10-13T05:52:43.039921203Z" level=info msg="StartContainer for \"4ab888cd88713c01aabbf5e5ba467fae34f32bcb85b69c57832ce18d368083bb\" returns successfully" Oct 13 05:52:43.040593 containerd[1643]: time="2025-10-13T05:52:43.040127408Z" level=info msg="StartContainer for \"62d0e3ef10b0c2f6aefce237c8a18627d9e0b8186da2a64efb8d25a5b94132df\" returns successfully" Oct 13 05:52:43.775608 systemd-networkd[1430]: calib4f5e3949dd: Gained IPv6LL Oct 13 05:52:43.775811 systemd-networkd[1430]: cali52093fb6470: Gained IPv6LL Oct 13 05:52:43.797388 kubelet[2946]: I1013 05:52:43.797341 2946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6785666f76-tdh2j" podStartSLOduration=26.080774988 podStartE2EDuration="29.797328335s" podCreationTimestamp="2025-10-13 05:52:14 +0000 UTC" firstStartedPulling="2025-10-13 05:52:38.855753791 +0000 UTC m=+40.613438526" lastFinishedPulling="2025-10-13 05:52:42.572307138 +0000 UTC m=+44.329991873" observedRunningTime="2025-10-13 05:52:43.79569452 +0000 UTC m=+45.553379260" watchObservedRunningTime="2025-10-13 05:52:43.797328335 +0000 UTC m=+45.555013076" Oct 13 05:52:43.826239 kubelet[2946]: I1013 05:52:43.826115 2946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nl4pb" podStartSLOduration=38.826102842 podStartE2EDuration="38.826102842s" podCreationTimestamp="2025-10-13 05:52:05 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:52:43.807285734 +0000 UTC m=+45.564970481" watchObservedRunningTime="2025-10-13 05:52:43.826102842 +0000 UTC m=+45.583787596" Oct 13 05:52:44.757834 kubelet[2946]: I1013 05:52:44.757680 2946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:52:44.863655 systemd-networkd[1430]: calif51655f6161: Gained IPv6LL Oct 13 05:52:46.601364 containerd[1643]: time="2025-10-13T05:52:46.601331316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:46.602962 containerd[1643]: time="2025-10-13T05:52:46.602863709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Oct 13 05:52:46.626549 containerd[1643]: time="2025-10-13T05:52:46.626094885Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:46.627509 containerd[1643]: time="2025-10-13T05:52:46.627491458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:46.633211 containerd[1643]: time="2025-10-13T05:52:46.633142573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.054876663s" Oct 13 05:52:46.633211 containerd[1643]: time="2025-10-13T05:52:46.633176213Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Oct 13 05:52:46.706821 containerd[1643]: time="2025-10-13T05:52:46.706620206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:52:46.873387 containerd[1643]: time="2025-10-13T05:52:46.873057463Z" level=info msg="CreateContainer within sandbox \"db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 05:52:46.883868 containerd[1643]: time="2025-10-13T05:52:46.883810183Z" level=info msg="Container c3ec350062b6a1e3f01cb1ac620f7c74a64560eda5159279de704b10c945e724: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:46.893944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3507078425.mount: Deactivated successfully. Oct 13 05:52:46.923037 containerd[1643]: time="2025-10-13T05:52:46.923003446Z" level=info msg="CreateContainer within sandbox \"db9b62432faaac82392bd4490d83ee096c36c4c9721c0b3f8e2ea4de2ed3ce9f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c3ec350062b6a1e3f01cb1ac620f7c74a64560eda5159279de704b10c945e724\"" Oct 13 05:52:46.928488 containerd[1643]: time="2025-10-13T05:52:46.928387618Z" level=info msg="StartContainer for \"c3ec350062b6a1e3f01cb1ac620f7c74a64560eda5159279de704b10c945e724\"" Oct 13 05:52:46.929781 containerd[1643]: time="2025-10-13T05:52:46.929717664Z" level=info msg="connecting to shim c3ec350062b6a1e3f01cb1ac620f7c74a64560eda5159279de704b10c945e724" address="unix:///run/containerd/s/cadbbdcb5cc54d45d56185416a571d86f8982accff696a5909ebb03c3943258a" protocol=ttrpc version=3 Oct 13 05:52:46.990632 systemd[1]: Started cri-containerd-c3ec350062b6a1e3f01cb1ac620f7c74a64560eda5159279de704b10c945e724.scope - libcontainer container c3ec350062b6a1e3f01cb1ac620f7c74a64560eda5159279de704b10c945e724. 
Oct 13 05:52:47.069425 containerd[1643]: time="2025-10-13T05:52:47.069343211Z" level=info msg="StartContainer for \"c3ec350062b6a1e3f01cb1ac620f7c74a64560eda5159279de704b10c945e724\" returns successfully" Oct 13 05:52:47.167351 containerd[1643]: time="2025-10-13T05:52:47.167225413Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:47.168543 containerd[1643]: time="2025-10-13T05:52:47.168399675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:52:47.169298 containerd[1643]: time="2025-10-13T05:52:47.169286658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 462.635594ms" Oct 13 05:52:47.169361 containerd[1643]: time="2025-10-13T05:52:47.169350548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:52:47.170191 containerd[1643]: time="2025-10-13T05:52:47.170175033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 05:52:47.173388 containerd[1643]: time="2025-10-13T05:52:47.173356453Z" level=info msg="CreateContainer within sandbox \"70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:52:47.178236 containerd[1643]: time="2025-10-13T05:52:47.177819304Z" level=info msg="Container 6ed3ca830dbf0a12d6fa2a824bcef91caaac428fea5d1a91c8cce65701c0683c: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:47.194259 containerd[1643]: 
time="2025-10-13T05:52:47.194234835Z" level=info msg="CreateContainer within sandbox \"70069cb0b4e00d7549d730d8aa4a51dbd026c57ac18dd156f7e7479e64fc0c2f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6ed3ca830dbf0a12d6fa2a824bcef91caaac428fea5d1a91c8cce65701c0683c\"" Oct 13 05:52:47.194770 containerd[1643]: time="2025-10-13T05:52:47.194756686Z" level=info msg="StartContainer for \"6ed3ca830dbf0a12d6fa2a824bcef91caaac428fea5d1a91c8cce65701c0683c\"" Oct 13 05:52:47.195999 containerd[1643]: time="2025-10-13T05:52:47.195928667Z" level=info msg="connecting to shim 6ed3ca830dbf0a12d6fa2a824bcef91caaac428fea5d1a91c8cce65701c0683c" address="unix:///run/containerd/s/285becfdc4826a23cf327f766d6a70433ca3f27e844640588145ad563156cbf8" protocol=ttrpc version=3 Oct 13 05:52:47.211762 systemd[1]: Started cri-containerd-6ed3ca830dbf0a12d6fa2a824bcef91caaac428fea5d1a91c8cce65701c0683c.scope - libcontainer container 6ed3ca830dbf0a12d6fa2a824bcef91caaac428fea5d1a91c8cce65701c0683c. 
Oct 13 05:52:47.286993 containerd[1643]: time="2025-10-13T05:52:47.286968486Z" level=info msg="StartContainer for \"6ed3ca830dbf0a12d6fa2a824bcef91caaac428fea5d1a91c8cce65701c0683c\" returns successfully" Oct 13 05:52:47.911972 kubelet[2946]: I1013 05:52:47.911940 2946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6785666f76-kmpv4" podStartSLOduration=27.930700036 podStartE2EDuration="33.911929651s" podCreationTimestamp="2025-10-13 05:52:14 +0000 UTC" firstStartedPulling="2025-10-13 05:52:41.188687183 +0000 UTC m=+42.946371918" lastFinishedPulling="2025-10-13 05:52:47.169916798 +0000 UTC m=+48.927601533" observedRunningTime="2025-10-13 05:52:47.909792039 +0000 UTC m=+49.667476786" watchObservedRunningTime="2025-10-13 05:52:47.911929651 +0000 UTC m=+49.669614391" Oct 13 05:52:47.919312 kubelet[2946]: I1013 05:52:47.918370 2946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f7646d5d5-2mn25" podStartSLOduration=23.859163099 podStartE2EDuration="30.918358019s" podCreationTimestamp="2025-10-13 05:52:17 +0000 UTC" firstStartedPulling="2025-10-13 05:52:39.610448129 +0000 UTC m=+41.368132865" lastFinishedPulling="2025-10-13 05:52:46.669643047 +0000 UTC m=+48.427327785" observedRunningTime="2025-10-13 05:52:47.918152371 +0000 UTC m=+49.675837130" watchObservedRunningTime="2025-10-13 05:52:47.918358019 +0000 UTC m=+49.676042766" Oct 13 05:52:48.872755 kubelet[2946]: I1013 05:52:48.872732 2946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:52:48.906000 kubelet[2946]: I1013 05:52:48.905979 2946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:52:49.255152 containerd[1643]: time="2025-10-13T05:52:49.254901661Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3ec350062b6a1e3f01cb1ac620f7c74a64560eda5159279de704b10c945e724\" 
id:\"8a3fe71856177118d5705d0219394ed658915ee2dde96d37d0ce649dab074fb3\" pid:5159 exited_at:{seconds:1760334769 nanos:193476715}" Oct 13 05:52:51.789844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4216205214.mount: Deactivated successfully. Oct 13 05:52:52.816434 containerd[1643]: time="2025-10-13T05:52:52.816402295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:52.819641 containerd[1643]: time="2025-10-13T05:52:52.818870182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Oct 13 05:52:52.898142 containerd[1643]: time="2025-10-13T05:52:52.898098038Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:52.899722 containerd[1643]: time="2025-10-13T05:52:52.899586291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:52.908863 containerd[1643]: time="2025-10-13T05:52:52.908845590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.738581334s" Oct 13 05:52:52.908929 containerd[1643]: time="2025-10-13T05:52:52.908921059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Oct 13 05:52:52.936553 containerd[1643]: time="2025-10-13T05:52:52.936191540Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 05:52:53.058555 containerd[1643]: time="2025-10-13T05:52:53.058209577Z" level=info msg="CreateContainer within sandbox \"7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 05:52:53.088576 containerd[1643]: time="2025-10-13T05:52:53.086958589Z" level=info msg="Container 2155a251521198cc1b839cd09ae07965cc045e2c0ececec15e6003ea84079e3e: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:52:53.167294 containerd[1643]: time="2025-10-13T05:52:53.167269363Z" level=info msg="CreateContainer within sandbox \"7f053438ab63894bbd18562a275a4988a6b84bb23641ef5f0e930c48e9646920\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2155a251521198cc1b839cd09ae07965cc045e2c0ececec15e6003ea84079e3e\"" Oct 13 05:52:53.170481 containerd[1643]: time="2025-10-13T05:52:53.170446464Z" level=info msg="StartContainer for \"2155a251521198cc1b839cd09ae07965cc045e2c0ececec15e6003ea84079e3e\"" Oct 13 05:52:53.174284 containerd[1643]: time="2025-10-13T05:52:53.174263600Z" level=info msg="connecting to shim 2155a251521198cc1b839cd09ae07965cc045e2c0ececec15e6003ea84079e3e" address="unix:///run/containerd/s/69388e186b5e3b27925b5586ba88329d1f2bda64e1f1d250dd4b553f5a0d85ef" protocol=ttrpc version=3 Oct 13 05:52:53.257424 systemd[1]: Started cri-containerd-2155a251521198cc1b839cd09ae07965cc045e2c0ececec15e6003ea84079e3e.scope - libcontainer container 2155a251521198cc1b839cd09ae07965cc045e2c0ececec15e6003ea84079e3e. 
Oct 13 05:52:53.332474 containerd[1643]: time="2025-10-13T05:52:53.332449252Z" level=info msg="StartContainer for \"2155a251521198cc1b839cd09ae07965cc045e2c0ececec15e6003ea84079e3e\" returns successfully" Oct 13 05:52:53.799192 kubelet[2946]: I1013 05:52:53.799066 2946 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:52:54.249677 kubelet[2946]: I1013 05:52:54.212550 2946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-854f97d977-q72x8" podStartSLOduration=28.226523652 podStartE2EDuration="38.200598855s" podCreationTimestamp="2025-10-13 05:52:16 +0000 UTC" firstStartedPulling="2025-10-13 05:52:42.959987606 +0000 UTC m=+44.717672342" lastFinishedPulling="2025-10-13 05:52:52.934062809 +0000 UTC m=+54.691747545" observedRunningTime="2025-10-13 05:52:54.194257154 +0000 UTC m=+55.951941901" watchObservedRunningTime="2025-10-13 05:52:54.200598855 +0000 UTC m=+55.958283595" Oct 13 05:52:54.410190 containerd[1643]: time="2025-10-13T05:52:54.410160112Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2155a251521198cc1b839cd09ae07965cc045e2c0ececec15e6003ea84079e3e\" id:\"0f35a0b2c81083e6cb529e9fbe30d15a7762cf79ed1ed0739a34e208b2fcacb2\" pid:5233 exit_status:1 exited_at:{seconds:1760334774 nanos:388132273}" Oct 13 05:52:54.449173 containerd[1643]: time="2025-10-13T05:52:54.449096300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:52:54.454760 containerd[1643]: time="2025-10-13T05:52:54.454745813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Oct 13 05:52:54.455126 containerd[1643]: time="2025-10-13T05:52:54.455103741Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 
05:52:54.455931 containerd[1643]: time="2025-10-13T05:52:54.455908939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:52:54.456459 containerd[1643]: time="2025-10-13T05:52:54.456261622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.520046868s"
Oct 13 05:52:54.456459 containerd[1643]: time="2025-10-13T05:52:54.456280886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Oct 13 05:52:54.481612 containerd[1643]: time="2025-10-13T05:52:54.481583687Z" level=info msg="CreateContainer within sandbox \"bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Oct 13 05:52:54.498652 containerd[1643]: time="2025-10-13T05:52:54.498610617Z" level=info msg="Container d76548c34b4f013123aaf52129f4a13d8df6a4ccaa51664269f8cb2c23f22e49: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:52:54.503404 containerd[1643]: time="2025-10-13T05:52:54.503335353Z" level=info msg="CreateContainer within sandbox \"bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d76548c34b4f013123aaf52129f4a13d8df6a4ccaa51664269f8cb2c23f22e49\""
Oct 13 05:52:54.504100 containerd[1643]: time="2025-10-13T05:52:54.503968801Z" level=info msg="StartContainer for \"d76548c34b4f013123aaf52129f4a13d8df6a4ccaa51664269f8cb2c23f22e49\""
Oct 13 05:52:54.506300 containerd[1643]: time="2025-10-13T05:52:54.506264264Z" level=info msg="connecting to shim d76548c34b4f013123aaf52129f4a13d8df6a4ccaa51664269f8cb2c23f22e49" address="unix:///run/containerd/s/3544d74250c33969f98ddaa0c62bf492a5a7b111192fb8b0275ba3f1b0192596" protocol=ttrpc version=3
Oct 13 05:52:54.520628 systemd[1]: Started cri-containerd-d76548c34b4f013123aaf52129f4a13d8df6a4ccaa51664269f8cb2c23f22e49.scope - libcontainer container d76548c34b4f013123aaf52129f4a13d8df6a4ccaa51664269f8cb2c23f22e49.
Oct 13 05:52:54.553588 containerd[1643]: time="2025-10-13T05:52:54.553563219Z" level=info msg="StartContainer for \"d76548c34b4f013123aaf52129f4a13d8df6a4ccaa51664269f8cb2c23f22e49\" returns successfully"
Oct 13 05:52:54.562136 containerd[1643]: time="2025-10-13T05:52:54.562057010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Oct 13 05:52:55.456714 containerd[1643]: time="2025-10-13T05:52:55.456320549Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2155a251521198cc1b839cd09ae07965cc045e2c0ececec15e6003ea84079e3e\" id:\"787b12b9f5a84e2fd11f64937ddee1d1a44fda3bb9daac65f46c3371a5885df6\" pid:5297 exit_status:1 exited_at:{seconds:1760334775 nanos:456140522}"
Oct 13 05:52:56.170182 containerd[1643]: time="2025-10-13T05:52:56.170154916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:52:56.171083 containerd[1643]: time="2025-10-13T05:52:56.171067741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Oct 13 05:52:56.171393 containerd[1643]: time="2025-10-13T05:52:56.171372343Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:52:56.174788 containerd[1643]: time="2025-10-13T05:52:56.174772459Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.61254689s"
Oct 13 05:52:56.174991 containerd[1643]: time="2025-10-13T05:52:56.174861584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Oct 13 05:52:56.174991 containerd[1643]: time="2025-10-13T05:52:56.174849902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:52:56.178566 containerd[1643]: time="2025-10-13T05:52:56.178077928Z" level=info msg="CreateContainer within sandbox \"bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Oct 13 05:52:56.187721 containerd[1643]: time="2025-10-13T05:52:56.185507500Z" level=info msg="Container f15c85ffb388612d257b83a4a591719da0307c9f01fd7097e05995302f707f98: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:52:56.202922 containerd[1643]: time="2025-10-13T05:52:56.202898494Z" level=info msg="CreateContainer within sandbox \"bcdb5b6134ff56ad3b134968fe02dc4b610d011e2ecded09737adb00f0549766\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f15c85ffb388612d257b83a4a591719da0307c9f01fd7097e05995302f707f98\""
Oct 13 05:52:56.203553 containerd[1643]: time="2025-10-13T05:52:56.203362312Z" level=info msg="StartContainer for \"f15c85ffb388612d257b83a4a591719da0307c9f01fd7097e05995302f707f98\""
Oct 13 05:52:56.204978 containerd[1643]: time="2025-10-13T05:52:56.204961718Z" level=info msg="connecting to shim f15c85ffb388612d257b83a4a591719da0307c9f01fd7097e05995302f707f98" address="unix:///run/containerd/s/3544d74250c33969f98ddaa0c62bf492a5a7b111192fb8b0275ba3f1b0192596" protocol=ttrpc version=3
Oct 13 05:52:56.252628 systemd[1]: Started cri-containerd-f15c85ffb388612d257b83a4a591719da0307c9f01fd7097e05995302f707f98.scope - libcontainer container f15c85ffb388612d257b83a4a591719da0307c9f01fd7097e05995302f707f98.
Oct 13 05:52:56.299959 containerd[1643]: time="2025-10-13T05:52:56.299888059Z" level=info msg="StartContainer for \"f15c85ffb388612d257b83a4a591719da0307c9f01fd7097e05995302f707f98\" returns successfully"
Oct 13 05:52:56.341361 containerd[1643]: time="2025-10-13T05:52:56.341334801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2155a251521198cc1b839cd09ae07965cc045e2c0ececec15e6003ea84079e3e\" id:\"443c8bcbb3911f67d98703282ae9478fbdc6947f8694a5051eba62a2f3fc1f8a\" pid:5339 exit_status:1 exited_at:{seconds:1760334776 nanos:340842487}"
Oct 13 05:52:56.723119 kubelet[2946]: I1013 05:52:56.715367 2946 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Oct 13 05:52:56.727839 kubelet[2946]: I1013 05:52:56.727667 2946 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Oct 13 05:52:57.308551 kubelet[2946]: I1013 05:52:57.308022 2946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6hqlq" podStartSLOduration=27.120579863 podStartE2EDuration="40.253890328s" podCreationTimestamp="2025-10-13 05:52:17 +0000 UTC" firstStartedPulling="2025-10-13 05:52:43.043010385 +0000 UTC m=+44.800695121" lastFinishedPulling="2025-10-13 05:52:56.17632085 +0000 UTC m=+57.934005586" observedRunningTime="2025-10-13 05:52:57.253669182 +0000 UTC m=+59.011353920" watchObservedRunningTime="2025-10-13 05:52:57.253890328 +0000 UTC m=+59.011575075"
Oct 13 05:53:07.876500 containerd[1643]: time="2025-10-13T05:53:07.875010178Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1f348e0fda6b17c4fc4a7fdd9399ac7116bc1fcb6716eb0f74c96d231c537d6\" id:\"b38a47363056f5fc8b6f74c689e0bf3e4debaaf16be5d112ca600cc0ab8a11b8\" pid:5418 exited_at:{seconds:1760334787 nanos:787162299}"
Oct 13 05:53:18.709706 systemd[1]: Started sshd@7-139.178.70.106:22-147.75.109.163:60914.service - OpenSSH per-connection server daemon (147.75.109.163:60914).
Oct 13 05:53:19.144396 containerd[1643]: time="2025-10-13T05:53:19.144371319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3ec350062b6a1e3f01cb1ac620f7c74a64560eda5159279de704b10c945e724\" id:\"7c4a7b8f7def4fac70ac17575f92e4a255fb53b0a1f938b7d7a168706c99cd01\" pid:5476 exited_at:{seconds:1760334799 nanos:143932453}"
Oct 13 05:53:19.600869 sshd[5462]: Accepted publickey for core from 147.75.109.163 port 60914 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:53:19.605255 sshd-session[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:53:19.626703 systemd-logind[1616]: New session 10 of user core.
Oct 13 05:53:19.631734 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 13 05:53:20.748739 sshd[5487]: Connection closed by 147.75.109.163 port 60914
Oct 13 05:53:20.749107 sshd-session[5462]: pam_unix(sshd:session): session closed for user core
Oct 13 05:53:20.758767 systemd[1]: sshd@7-139.178.70.106:22-147.75.109.163:60914.service: Deactivated successfully.
Oct 13 05:53:20.759833 systemd[1]: session-10.scope: Deactivated successfully.
Oct 13 05:53:20.772971 systemd-logind[1616]: Session 10 logged out. Waiting for processes to exit.
Oct 13 05:53:20.774930 systemd-logind[1616]: Removed session 10.
Oct 13 05:53:25.767477 systemd[1]: Started sshd@8-139.178.70.106:22-147.75.109.163:40292.service - OpenSSH per-connection server daemon (147.75.109.163:40292).
Oct 13 05:53:26.319345 sshd[5503]: Accepted publickey for core from 147.75.109.163 port 40292 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:53:26.346105 sshd-session[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:53:26.356581 systemd-logind[1616]: New session 11 of user core.
Oct 13 05:53:26.360696 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 13 05:53:27.533658 sshd[5511]: Connection closed by 147.75.109.163 port 40292
Oct 13 05:53:27.536969 sshd-session[5503]: pam_unix(sshd:session): session closed for user core
Oct 13 05:53:27.543072 systemd[1]: sshd@8-139.178.70.106:22-147.75.109.163:40292.service: Deactivated successfully.
Oct 13 05:53:27.544650 systemd[1]: session-11.scope: Deactivated successfully.
Oct 13 05:53:27.545868 systemd-logind[1616]: Session 11 logged out. Waiting for processes to exit.
Oct 13 05:53:27.549001 systemd[1]: Started sshd@9-139.178.70.106:22-147.75.109.163:40302.service - OpenSSH per-connection server daemon (147.75.109.163:40302).
Oct 13 05:53:27.549370 systemd-logind[1616]: Removed session 11.
Oct 13 05:53:27.661548 sshd[5539]: Accepted publickey for core from 147.75.109.163 port 40302 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:53:27.663412 sshd-session[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:53:27.669563 systemd-logind[1616]: New session 12 of user core.
Oct 13 05:53:27.674759 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 13 05:53:27.753102 containerd[1643]: time="2025-10-13T05:53:27.752810143Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2155a251521198cc1b839cd09ae07965cc045e2c0ececec15e6003ea84079e3e\" id:\"143c800f3f72aadd5bf2cc4ce094e54d277a43184954e25c9f47fcdecfa0ed49\" pid:5518 exited_at:{seconds:1760334807 nanos:719596417}"
Oct 13 05:53:27.944107 sshd[5543]: Connection closed by 147.75.109.163 port 40302
Oct 13 05:53:27.945945 sshd-session[5539]: pam_unix(sshd:session): session closed for user core
Oct 13 05:53:27.955120 systemd[1]: Started sshd@10-139.178.70.106:22-147.75.109.163:40318.service - OpenSSH per-connection server daemon (147.75.109.163:40318).
Oct 13 05:53:27.973335 systemd[1]: sshd@9-139.178.70.106:22-147.75.109.163:40302.service: Deactivated successfully.
Oct 13 05:53:27.975371 systemd[1]: session-12.scope: Deactivated successfully.
Oct 13 05:53:27.978459 systemd-logind[1616]: Session 12 logged out. Waiting for processes to exit.
Oct 13 05:53:27.979285 systemd-logind[1616]: Removed session 12.
Oct 13 05:53:28.124471 sshd[5551]: Accepted publickey for core from 147.75.109.163 port 40318 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:53:28.125997 sshd-session[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:53:28.133665 systemd-logind[1616]: New session 13 of user core.
Oct 13 05:53:28.139294 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 13 05:53:28.239199 sshd[5557]: Connection closed by 147.75.109.163 port 40318
Oct 13 05:53:28.240672 sshd-session[5551]: pam_unix(sshd:session): session closed for user core
Oct 13 05:53:28.244624 systemd-logind[1616]: Session 13 logged out. Waiting for processes to exit.
Oct 13 05:53:28.244952 systemd[1]: sshd@10-139.178.70.106:22-147.75.109.163:40318.service: Deactivated successfully.
Oct 13 05:53:28.247739 systemd[1]: session-13.scope: Deactivated successfully.
Oct 13 05:53:28.249750 systemd-logind[1616]: Removed session 13.
Oct 13 05:53:33.257708 systemd[1]: Started sshd@11-139.178.70.106:22-147.75.109.163:45582.service - OpenSSH per-connection server daemon (147.75.109.163:45582).
Oct 13 05:53:33.479629 sshd[5584]: Accepted publickey for core from 147.75.109.163 port 45582 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:53:33.496494 sshd-session[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:53:33.505609 systemd-logind[1616]: New session 14 of user core.
Oct 13 05:53:33.514664 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 13 05:53:33.861467 sshd[5587]: Connection closed by 147.75.109.163 port 45582
Oct 13 05:53:33.862052 sshd-session[5584]: pam_unix(sshd:session): session closed for user core
Oct 13 05:53:33.865390 systemd-logind[1616]: Session 14 logged out. Waiting for processes to exit.
Oct 13 05:53:33.865638 systemd[1]: sshd@11-139.178.70.106:22-147.75.109.163:45582.service: Deactivated successfully.
Oct 13 05:53:33.867152 systemd[1]: session-14.scope: Deactivated successfully.
Oct 13 05:53:33.868370 systemd-logind[1616]: Removed session 14.
Oct 13 05:53:36.299603 containerd[1643]: time="2025-10-13T05:53:36.289928704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3ec350062b6a1e3f01cb1ac620f7c74a64560eda5159279de704b10c945e724\" id:\"74eddd9ea8e5478d4f71a566993b9a38143e84ff6530c7137489c8462a1e65d8\" pid:5612 exited_at:{seconds:1760334816 nanos:286966768}"
Oct 13 05:53:38.065476 containerd[1643]: time="2025-10-13T05:53:38.065445193Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1f348e0fda6b17c4fc4a7fdd9399ac7116bc1fcb6716eb0f74c96d231c537d6\" id:\"bba1845b7e376d18c332a852cfa08d340a4501bc57affacc4af05fd24c38b5b3\" pid:5633 exited_at:{seconds:1760334818 nanos:65095905}"
Oct 13 05:53:38.874685 systemd[1]: Started sshd@12-139.178.70.106:22-147.75.109.163:45596.service - OpenSSH per-connection server daemon (147.75.109.163:45596).
Oct 13 05:53:39.006017 sshd[5648]: Accepted publickey for core from 147.75.109.163 port 45596 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:53:39.013050 sshd-session[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:53:39.018164 systemd-logind[1616]: New session 15 of user core.
Oct 13 05:53:39.023707 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 13 05:53:40.091114 sshd[5651]: Connection closed by 147.75.109.163 port 45596
Oct 13 05:53:40.092436 sshd-session[5648]: pam_unix(sshd:session): session closed for user core
Oct 13 05:53:40.095128 systemd-logind[1616]: Session 15 logged out. Waiting for processes to exit.
Oct 13 05:53:40.095203 systemd[1]: sshd@12-139.178.70.106:22-147.75.109.163:45596.service: Deactivated successfully.
Oct 13 05:53:40.096860 systemd[1]: session-15.scope: Deactivated successfully.
Oct 13 05:53:40.099146 systemd-logind[1616]: Removed session 15.
Oct 13 05:53:40.653808 containerd[1643]: time="2025-10-13T05:53:40.653643210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2155a251521198cc1b839cd09ae07965cc045e2c0ececec15e6003ea84079e3e\" id:\"291fb67761adb6d8b1a08e134f4d9dce4297fa495c16d259c415ed8ad5c26759\" pid:5676 exited_at:{seconds:1760334820 nanos:649600866}"
Oct 13 05:53:45.110997 systemd[1]: Started sshd@13-139.178.70.106:22-147.75.109.163:44704.service - OpenSSH per-connection server daemon (147.75.109.163:44704).
Oct 13 05:53:45.673166 sshd[5689]: Accepted publickey for core from 147.75.109.163 port 44704 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:53:45.674332 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:53:45.677289 systemd-logind[1616]: New session 16 of user core.
Oct 13 05:53:45.685824 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 13 05:53:46.126654 sshd[5692]: Connection closed by 147.75.109.163 port 44704
Oct 13 05:53:46.127914 sshd-session[5689]: pam_unix(sshd:session): session closed for user core
Oct 13 05:53:46.135781 systemd[1]: sshd@13-139.178.70.106:22-147.75.109.163:44704.service: Deactivated successfully.
Oct 13 05:53:46.136990 systemd[1]: session-16.scope: Deactivated successfully.
Oct 13 05:53:46.137841 systemd-logind[1616]: Session 16 logged out. Waiting for processes to exit.
Oct 13 05:53:46.139820 systemd[1]: Started sshd@14-139.178.70.106:22-147.75.109.163:44720.service - OpenSSH per-connection server daemon (147.75.109.163:44720).
Oct 13 05:53:46.140604 systemd-logind[1616]: Removed session 16.
Oct 13 05:53:46.200064 sshd[5704]: Accepted publickey for core from 147.75.109.163 port 44720 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:53:46.201164 sshd-session[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:53:46.204161 systemd-logind[1616]: New session 17 of user core.
Oct 13 05:53:46.206612 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 13 05:53:46.587638 sshd[5707]: Connection closed by 147.75.109.163 port 44720
Oct 13 05:53:46.588000 sshd-session[5704]: pam_unix(sshd:session): session closed for user core
Oct 13 05:53:46.598088 systemd[1]: sshd@14-139.178.70.106:22-147.75.109.163:44720.service: Deactivated successfully.
Oct 13 05:53:46.599178 systemd[1]: session-17.scope: Deactivated successfully.
Oct 13 05:53:46.600021 systemd-logind[1616]: Session 17 logged out. Waiting for processes to exit.
Oct 13 05:53:46.602209 systemd[1]: Started sshd@15-139.178.70.106:22-147.75.109.163:44732.service - OpenSSH per-connection server daemon (147.75.109.163:44732).
Oct 13 05:53:46.603064 systemd-logind[1616]: Removed session 17.
Oct 13 05:53:46.646000 sshd[5717]: Accepted publickey for core from 147.75.109.163 port 44732 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:53:46.646885 sshd-session[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:53:46.650095 systemd-logind[1616]: New session 18 of user core.
Oct 13 05:53:46.656619 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 13 05:53:47.197896 sshd[5720]: Connection closed by 147.75.109.163 port 44732
Oct 13 05:53:47.198022 sshd-session[5717]: pam_unix(sshd:session): session closed for user core
Oct 13 05:53:47.206478 systemd[1]: sshd@15-139.178.70.106:22-147.75.109.163:44732.service: Deactivated successfully.
Oct 13 05:53:47.208350 systemd[1]: session-18.scope: Deactivated successfully.
Oct 13 05:53:47.210067 systemd-logind[1616]: Session 18 logged out. Waiting for processes to exit.
Oct 13 05:53:47.213497 systemd-logind[1616]: Removed session 18.
Oct 13 05:53:47.214373 systemd[1]: Started sshd@16-139.178.70.106:22-147.75.109.163:44746.service - OpenSSH per-connection server daemon (147.75.109.163:44746).
Oct 13 05:53:47.260650 sshd[5733]: Accepted publickey for core from 147.75.109.163 port 44746 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:53:47.262201 sshd-session[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:53:47.266050 systemd-logind[1616]: New session 19 of user core.
Oct 13 05:53:47.271611 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 13 05:53:47.606607 sshd[5738]: Connection closed by 147.75.109.163 port 44746
Oct 13 05:53:47.607617 sshd-session[5733]: pam_unix(sshd:session): session closed for user core
Oct 13 05:53:47.614068 systemd[1]: sshd@16-139.178.70.106:22-147.75.109.163:44746.service: Deactivated successfully.
Oct 13 05:53:47.615639 systemd[1]: session-19.scope: Deactivated successfully.
Oct 13 05:53:47.616807 systemd-logind[1616]: Session 19 logged out. Waiting for processes to exit.
Oct 13 05:53:47.619378 systemd[1]: Started sshd@17-139.178.70.106:22-147.75.109.163:44748.service - OpenSSH per-connection server daemon (147.75.109.163:44748).
Oct 13 05:53:47.621198 systemd-logind[1616]: Removed session 19.
Oct 13 05:53:47.665555 sshd[5749]: Accepted publickey for core from 147.75.109.163 port 44748 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:53:47.665584 sshd-session[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:53:47.668958 systemd-logind[1616]: New session 20 of user core.
Oct 13 05:53:47.675641 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 13 05:53:47.789985 sshd[5752]: Connection closed by 147.75.109.163 port 44748
Oct 13 05:53:47.790425 sshd-session[5749]: pam_unix(sshd:session): session closed for user core
Oct 13 05:53:47.794053 systemd[1]: sshd@17-139.178.70.106:22-147.75.109.163:44748.service: Deactivated successfully.
Oct 13 05:53:47.795576 systemd[1]: session-20.scope: Deactivated successfully.
Oct 13 05:53:47.798044 systemd-logind[1616]: Session 20 logged out. Waiting for processes to exit.
Oct 13 05:53:47.798878 systemd-logind[1616]: Removed session 20.
Oct 13 05:53:49.398305 containerd[1643]: time="2025-10-13T05:53:49.398271585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3ec350062b6a1e3f01cb1ac620f7c74a64560eda5159279de704b10c945e724\" id:\"bf9dd0ec0bb297b64012bc863acb94316e6ac3d1c4a1ebc7695a34d54c657cff\" pid:5776 exited_at:{seconds:1760334829 nanos:394200644}"
Oct 13 05:53:52.800752 systemd[1]: Started sshd@18-139.178.70.106:22-147.75.109.163:55594.service - OpenSSH per-connection server daemon (147.75.109.163:55594).
Oct 13 05:53:52.902400 sshd[5788]: Accepted publickey for core from 147.75.109.163 port 55594 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:53:52.906772 sshd-session[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:53:52.911814 systemd-logind[1616]: New session 21 of user core.
Oct 13 05:53:52.916686 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 13 05:53:53.513555 sshd[5791]: Connection closed by 147.75.109.163 port 55594
Oct 13 05:53:53.513504 sshd-session[5788]: pam_unix(sshd:session): session closed for user core
Oct 13 05:53:53.523145 systemd[1]: sshd@18-139.178.70.106:22-147.75.109.163:55594.service: Deactivated successfully.
Oct 13 05:53:53.525188 systemd[1]: session-21.scope: Deactivated successfully.
Oct 13 05:53:53.531601 systemd-logind[1616]: Session 21 logged out. Waiting for processes to exit.
Oct 13 05:53:53.533635 systemd-logind[1616]: Removed session 21.
Oct 13 05:53:56.918334 containerd[1643]: time="2025-10-13T05:53:56.918307610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2155a251521198cc1b839cd09ae07965cc045e2c0ececec15e6003ea84079e3e\" id:\"cc06fdd0fe42ec5cc5f2873ca10f9f83e391e4b9b61784f9c09a0636bc8c5d02\" pid:5821 exited_at:{seconds:1760334836 nanos:917612163}"
Oct 13 05:53:58.553779 systemd[1]: Started sshd@19-139.178.70.106:22-147.75.109.163:55598.service - OpenSSH per-connection server daemon (147.75.109.163:55598).
Oct 13 05:53:58.963765 sshd[5848]: Accepted publickey for core from 147.75.109.163 port 55598 ssh2: RSA SHA256:ai+pxKByarUoBL/gnU9GvjBvPxJtyFadwyl0rYxwxDk
Oct 13 05:53:58.974501 sshd-session[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:53:58.979737 systemd-logind[1616]: New session 22 of user core.
Oct 13 05:53:58.986710 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 13 05:54:00.471093 sshd[5851]: Connection closed by 147.75.109.163 port 55598
Oct 13 05:54:00.472326 sshd-session[5848]: pam_unix(sshd:session): session closed for user core
Oct 13 05:54:00.479943 systemd[1]: sshd@19-139.178.70.106:22-147.75.109.163:55598.service: Deactivated successfully.
Oct 13 05:54:00.481352 systemd[1]: session-22.scope: Deactivated successfully.
Oct 13 05:54:00.484835 systemd-logind[1616]: Session 22 logged out. Waiting for processes to exit.
Oct 13 05:54:00.486273 systemd-logind[1616]: Removed session 22.
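Console captures like the one above often arrive with many journal records wrapped onto a few very long lines. A minimal sketch of how such a stream can be split back into one record per line, keyed on the `Oct 13 HH:MM:SS.ffffff` timestamp prefix every record in this capture starts with (the `split_records` helper is hypothetical, not part of the log or any tool shown here):

```python
import re

# Every record in this capture begins with a syslog-style timestamp
# such as "Oct 13 05:53:19.600869". A zero-width lookahead lets
# re.split() cut the stream just before each timestamp without
# consuming it, so each record keeps its own prefix.
RECORD_START = re.compile(r"(?=Oct 13 \d{2}:\d{2}:\d{2}\.\d{6} )")

def split_records(stream: str) -> list[str]:
    """Split a wrapped journal stream into individual records."""
    return [rec.strip() for rec in RECORD_START.split(stream) if rec.strip()]

if __name__ == "__main__":
    wrapped = (
        "Oct 13 05:53:19.600869 sshd[5462]: Accepted publickey for core "
        "Oct 13 05:53:19.626703 systemd-logind[1616]: New session 10 of user core."
    )
    for rec in split_records(wrapped):
        print(rec)
```

The lookahead-based split relies on Python 3.7+, where `re.split()` accepts zero-width patterns; on real journals, `journalctl -o short-precise` would emit records one per line in the first place.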