Oct 31 01:08:30.518660 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Thu Oct 30 22:25:00 -00 2025 Oct 31 01:08:30.518678 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=0b271910e93d73fd8f787b704ab61b381ac88c2b5070fc1461584f1dcd7f91c9 Oct 31 01:08:30.518685 kernel: Disabled fast string operations Oct 31 01:08:30.518690 kernel: BIOS-provided physical RAM map: Oct 31 01:08:30.518694 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Oct 31 01:08:30.518698 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Oct 31 01:08:30.518705 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Oct 31 01:08:30.518710 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Oct 31 01:08:30.518714 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Oct 31 01:08:30.518719 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Oct 31 01:08:30.518723 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Oct 31 01:08:30.518728 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Oct 31 01:08:30.518732 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Oct 31 01:08:30.518737 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Oct 31 01:08:30.518744 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Oct 31 01:08:30.518749 kernel: NX (Execute Disable) protection: active Oct 31 01:08:30.518754 kernel: APIC: Static calls initialized Oct 31 01:08:30.518759 kernel: SMBIOS 2.7 present. Oct 31 01:08:30.518765 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Oct 31 01:08:30.518770 kernel: DMI: Memory slots populated: 1/128 Oct 31 01:08:30.518776 kernel: vmware: hypercall mode: 0x00 Oct 31 01:08:30.518782 kernel: Hypervisor detected: VMware Oct 31 01:08:30.518787 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Oct 31 01:08:30.518792 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Oct 31 01:08:30.518797 kernel: vmware: using clock offset of 3106167030 ns Oct 31 01:08:30.518802 kernel: tsc: Detected 3408.000 MHz processor Oct 31 01:08:30.518808 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 31 01:08:30.518814 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 31 01:08:30.518820 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Oct 31 01:08:30.518826 kernel: total RAM covered: 3072M Oct 31 01:08:30.518832 kernel: Found optimal setting for mtrr clean up Oct 31 01:08:30.518838 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Oct 31 01:08:30.518843 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Oct 31 01:08:30.518848 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 31 01:08:30.518854 kernel: Using GB pages for direct mapping Oct 31 01:08:30.518859 kernel: ACPI: Early table checksum verification disabled Oct 31 01:08:30.518865 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Oct 31 01:08:30.518871 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Oct 31 01:08:30.518877 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Oct 31 01:08:30.518883 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Oct 31 01:08:30.518890 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Oct 31 01:08:30.518896 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Oct 31 01:08:30.518902 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Oct 31 01:08:30.518908 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Oct 31 01:08:30.518914 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Oct 31 01:08:30.518920 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Oct 31 01:08:30.518926 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Oct 31 01:08:30.518931 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Oct 31 01:08:30.518938 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Oct 31 01:08:30.518944 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Oct 31 01:08:30.518949 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Oct 31 01:08:30.518955 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Oct 31 01:08:30.518961 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Oct 31 01:08:30.518966 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Oct 31 01:08:30.518972 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Oct 31 01:08:30.518977 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Oct 31 01:08:30.518984 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Oct 31 01:08:30.518989 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Oct 31 01:08:30.518995 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 31 01:08:30.519000 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Oct 31 01:08:30.519006 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Oct 31 01:08:30.519012 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] Oct 31 01:08:30.519018 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Oct 31 01:08:30.519025 kernel: Zone ranges: Oct 31 01:08:30.519031 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 31 01:08:30.519036 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Oct 31 01:08:30.519042 kernel: Normal empty Oct 31 01:08:30.519047 kernel: Device empty Oct 31 01:08:30.519053 kernel: Movable zone start for each node Oct 31 01:08:30.519058 kernel: Early memory node ranges Oct 31 01:08:30.519064 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Oct 31 01:08:30.519070 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Oct 31 01:08:30.519076 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Oct 31 01:08:30.519082 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Oct 31 01:08:30.519088 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 31 01:08:30.519093 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Oct 31 01:08:30.519099 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Oct 31 01:08:30.519105 kernel: ACPI: PM-Timer IO Port: 0x1008 Oct 31 01:08:30.519111 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Oct 31 01:08:30.519117 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Oct 31 01:08:30.519123 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Oct 31 01:08:30.519128 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Oct 31 01:08:30.519134 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Oct 31 01:08:30.519139 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Oct 31 01:08:30.519145 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge 
lint[0x1]) Oct 31 01:08:30.519151 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Oct 31 01:08:30.519156 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Oct 31 01:08:30.519162 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Oct 31 01:08:30.519168 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Oct 31 01:08:30.519173 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Oct 31 01:08:30.519179 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Oct 31 01:08:30.519184 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Oct 31 01:08:30.519190 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Oct 31 01:08:30.519195 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Oct 31 01:08:30.519201 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Oct 31 01:08:30.519207 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Oct 31 01:08:30.519230 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Oct 31 01:08:30.519235 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Oct 31 01:08:30.519241 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Oct 31 01:08:30.519246 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Oct 31 01:08:30.519252 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Oct 31 01:08:30.519257 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Oct 31 01:08:30.519263 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Oct 31 01:08:30.519270 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Oct 31 01:08:30.519276 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Oct 31 01:08:30.519281 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Oct 31 01:08:30.519287 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Oct 31 01:08:30.519292 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Oct 31 01:08:30.519298 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Oct 31 01:08:30.519304 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Oct 31 01:08:30.519309 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Oct 31 01:08:30.519314 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Oct 31 01:08:30.519321 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Oct 31 01:08:30.519327 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Oct 31 01:08:30.519332 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Oct 31 01:08:30.519337 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Oct 31 01:08:30.519343 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Oct 31 01:08:30.519349 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Oct 31 01:08:30.519358 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Oct 31 01:08:30.519364 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Oct 31 01:08:30.519370 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Oct 31 01:08:30.519377 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Oct 31 01:08:30.519382 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Oct 31 01:08:30.519388 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Oct 31 01:08:30.519393 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Oct 31 01:08:30.519399 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Oct 31 01:08:30.519405 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Oct 31 01:08:30.519411 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x31] high edge lint[0x1]) Oct 31 01:08:30.519418 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Oct 31 01:08:30.519423 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Oct 31 01:08:30.519429 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Oct 31 01:08:30.519435 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Oct 31 01:08:30.519440 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Oct 31 01:08:30.519446 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Oct 31 01:08:30.519452 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Oct 31 01:08:30.519459 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Oct 31 01:08:30.519464 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Oct 31 01:08:30.519470 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Oct 31 01:08:30.519476 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Oct 31 01:08:30.519481 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Oct 31 01:08:30.519487 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Oct 31 01:08:30.519493 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Oct 31 01:08:30.519499 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Oct 31 01:08:30.519506 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Oct 31 01:08:30.519512 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Oct 31 01:08:30.519517 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Oct 31 01:08:30.519523 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Oct 31 01:08:30.519529 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Oct 31 01:08:30.519534 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Oct 31 01:08:30.519540 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Oct 31 01:08:30.519546 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Oct 31 01:08:30.519552 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Oct 31 01:08:30.519558 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Oct 31 01:08:30.519564 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Oct 31 01:08:30.519570 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Oct 31 01:08:30.519576 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Oct 31 01:08:30.519581 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Oct 31 01:08:30.519587 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Oct 31 01:08:30.519593 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Oct 31 01:08:30.519599 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Oct 31 01:08:30.519606 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Oct 31 01:08:30.519612 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Oct 31 01:08:30.519617 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Oct 31 01:08:30.519623 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Oct 31 01:08:30.519629 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Oct 31 01:08:30.519635 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Oct 31 01:08:30.519640 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Oct 31 01:08:30.519646 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Oct 31 01:08:30.519653 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Oct 31 01:08:30.519659 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Oct 31 01:08:30.519664 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Oct 31 01:08:30.519670 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Oct 31 01:08:30.519676 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Oct 31 01:08:30.519682 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Oct 31 01:08:30.519687 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Oct 31 01:08:30.519693 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Oct 31 01:08:30.519700 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Oct 31 01:08:30.519705 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Oct 31 01:08:30.519711 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Oct 31 01:08:30.519717 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Oct 31 01:08:30.519722 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Oct 31 01:08:30.519728 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Oct 31 01:08:30.519734 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Oct 31 01:08:30.519740 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Oct 31 01:08:30.519746 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Oct 31 01:08:30.519753 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Oct 31 01:08:30.519758 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Oct 31 01:08:30.519764 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Oct 31 01:08:30.519770 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Oct 31 01:08:30.519775 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Oct 31 01:08:30.519781 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Oct 31 01:08:30.519787 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Oct 31 01:08:30.519792 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Oct 31 01:08:30.519799 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Oct 31 01:08:30.519805 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Oct 31 01:08:30.519811 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Oct 31 01:08:30.519816 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Oct 31 01:08:30.519822 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Oct 31 01:08:30.519828 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Oct 31 01:08:30.519833 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Oct 31 01:08:30.519839 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Oct 31 01:08:30.519846 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Oct 31 01:08:30.519852 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Oct 31 01:08:30.519857 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Oct 31 01:08:30.519863 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Oct 31 01:08:30.519869 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Oct 31 01:08:30.519875 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Oct 31 01:08:30.519881 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Oct 31 01:08:30.519887 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 31 01:08:30.519894 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Oct 31 01:08:30.519900 kernel: TSC deadline timer available Oct 31 01:08:30.519905 kernel: CPU topo: Max. logical packages: 128 Oct 31 01:08:30.519911 kernel: CPU topo: Max. logical dies: 128 Oct 31 01:08:30.519917 kernel: CPU topo: Max. 
dies per package: 1 Oct 31 01:08:30.519923 kernel: CPU topo: Max. threads per core: 1 Oct 31 01:08:30.519929 kernel: CPU topo: Num. cores per package: 1 Oct 31 01:08:30.519934 kernel: CPU topo: Num. threads per package: 1 Oct 31 01:08:30.519941 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Oct 31 01:08:30.519947 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Oct 31 01:08:30.519953 kernel: Booting paravirtualized kernel on VMware hypervisor Oct 31 01:08:30.519959 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 31 01:08:30.519965 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Oct 31 01:08:30.519971 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Oct 31 01:08:30.519978 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Oct 31 01:08:30.519984 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Oct 31 01:08:30.519990 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Oct 31 01:08:30.519996 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Oct 31 01:08:30.520002 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Oct 31 01:08:30.520008 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Oct 31 01:08:30.520014 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Oct 31 01:08:30.520020 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Oct 31 01:08:30.520026 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Oct 31 01:08:30.520033 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Oct 31 01:08:30.520038 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Oct 31 01:08:30.520044 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Oct 31 01:08:30.520050 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Oct 31 01:08:30.520056 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Oct 31 01:08:30.520061 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Oct 31 01:08:30.520067 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Oct 31 01:08:30.520074 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Oct 31 01:08:30.520081 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=0b271910e93d73fd8f787b704ab61b381ac88c2b5070fc1461584f1dcd7f91c9 Oct 31 01:08:30.520087 kernel: random: crng init done Oct 31 01:08:30.520093 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Oct 31 01:08:30.520099 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Oct 31 01:08:30.520105 kernel: printk: log_buf_len min size: 262144 bytes Oct 31 01:08:30.520112 kernel: printk: log_buf_len: 1048576 bytes Oct 31 01:08:30.520118 kernel: printk: early log buf free: 245688(93%) Oct 31 01:08:30.520123 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 31 01:08:30.520130 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 31 01:08:30.520136 kernel: Fallback order for Node 0: 0 Oct 31 01:08:30.520142 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Oct 31 01:08:30.520148 kernel: Policy zone: DMA32 Oct 31 01:08:30.520153 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 31 01:08:30.520161 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Oct 31 01:08:30.520167 kernel: ftrace: allocating 40092 entries in 157 pages Oct 31 01:08:30.520172 kernel: ftrace: allocated 157 pages with 5 groups Oct 31 01:08:30.520178 kernel: Dynamic Preempt: voluntary Oct 31 01:08:30.520184 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 31 01:08:30.520191 kernel: rcu: RCU event tracing is enabled. Oct 31 01:08:30.520196 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Oct 31 01:08:30.520203 kernel: Trampoline variant of Tasks RCU enabled. Oct 31 01:08:30.520218 kernel: Rude variant of Tasks RCU enabled. Oct 31 01:08:30.520225 kernel: Tracing variant of Tasks RCU enabled. Oct 31 01:08:30.520231 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 31 01:08:30.520237 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Oct 31 01:08:30.520643 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 31 01:08:30.520651 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 31 01:08:30.520657 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 31 01:08:30.520666 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Oct 31 01:08:30.520672 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Oct 31 01:08:30.520678 kernel: Console: colour VGA+ 80x25 Oct 31 01:08:30.520684 kernel: printk: legacy console [tty0] enabled Oct 31 01:08:30.520690 kernel: printk: legacy console [ttyS0] enabled Oct 31 01:08:30.520696 kernel: ACPI: Core revision 20240827 Oct 31 01:08:30.520702 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Oct 31 01:08:30.520710 kernel: APIC: Switch to symmetric I/O mode setup Oct 31 01:08:30.520716 kernel: x2apic enabled Oct 31 01:08:30.520722 kernel: APIC: Switched APIC routing to: physical x2apic Oct 31 01:08:30.520728 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 31 01:08:30.520735 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 31 01:08:30.520741 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Oct 31 01:08:30.520747 kernel: Disabled fast string operations Oct 31 01:08:30.520754 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 31 01:08:30.520761 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Oct 31 01:08:30.520767 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 31 01:08:30.520773 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Oct 31 01:08:30.520779 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Oct 31 01:08:30.520785 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Oct 31 01:08:30.520791 kernel: RETBleed: Mitigation: Enhanced IBRS Oct 31 01:08:30.520797 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 31 01:08:30.520804 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 31 01:08:30.520810 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 31 01:08:30.520816 kernel: SRBDS: Unknown: Dependent on hypervisor status Oct 31 01:08:30.520822 kernel: GDS: Unknown: Dependent on hypervisor status Oct 31 01:08:30.520828 kernel: active return thunk: its_return_thunk Oct 31 01:08:30.520834 kernel: ITS: Mitigation: Aligned branch/return thunks Oct 31 01:08:30.520840 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 31 01:08:30.520848 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 31 01:08:30.520854 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 31 01:08:30.520860 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 31 01:08:30.520866 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 31 01:08:30.520872 kernel: Freeing SMP alternatives memory: 32K Oct 31 01:08:30.520878 kernel: pid_max: default: 131072 minimum: 1024 Oct 31 01:08:30.520884 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 31 01:08:30.520891 kernel: landlock: Up and running. Oct 31 01:08:30.520897 kernel: SELinux: Initializing. Oct 31 01:08:30.520903 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 31 01:08:30.520909 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 31 01:08:30.520915 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Oct 31 01:08:30.520921 kernel: Performance Events: Skylake events, core PMU driver. Oct 31 01:08:30.520927 kernel: core: CPUID marked event: 'cpu cycles' unavailable Oct 31 01:08:30.520934 kernel: core: CPUID marked event: 'instructions' unavailable Oct 31 01:08:30.520940 kernel: core: CPUID marked event: 'bus cycles' unavailable Oct 31 01:08:30.520946 kernel: core: CPUID marked event: 'cache references' unavailable Oct 31 01:08:30.520952 kernel: core: CPUID marked event: 'cache misses' unavailable Oct 31 01:08:30.520958 kernel: core: CPUID marked event: 'branch instructions' unavailable Oct 31 01:08:30.520963 kernel: core: CPUID marked event: 'branch misses' unavailable Oct 31 01:08:30.520969 kernel: ... version: 1 Oct 31 01:08:30.520976 kernel: ... bit width: 48 Oct 31 01:08:30.520982 kernel: ... generic registers: 4 Oct 31 01:08:30.520988 kernel: ... value mask: 0000ffffffffffff Oct 31 01:08:30.520994 kernel: ... max period: 000000007fffffff Oct 31 01:08:30.521000 kernel: ... 
fixed-purpose events: 0 Oct 31 01:08:30.521006 kernel: ... event mask: 000000000000000f Oct 31 01:08:30.521012 kernel: signal: max sigframe size: 1776 Oct 31 01:08:30.521019 kernel: rcu: Hierarchical SRCU implementation. Oct 31 01:08:30.521026 kernel: rcu: Max phase no-delay instances is 400. Oct 31 01:08:30.521032 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Oct 31 01:08:30.521038 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 31 01:08:30.521044 kernel: smp: Bringing up secondary CPUs ... Oct 31 01:08:30.521050 kernel: smpboot: x86: Booting SMP configuration: Oct 31 01:08:30.521056 kernel: .... node #0, CPUs: #1 Oct 31 01:08:30.521063 kernel: Disabled fast string operations Oct 31 01:08:30.521069 kernel: smp: Brought up 1 node, 2 CPUs Oct 31 01:08:30.521075 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Oct 31 01:08:30.521081 kernel: Memory: 1946764K/2096628K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 138480K reserved, 0K cma-reserved) Oct 31 01:08:30.521088 kernel: devtmpfs: initialized Oct 31 01:08:30.521094 kernel: x86/mm: Memory block size: 128MB Oct 31 01:08:30.521100 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Oct 31 01:08:30.521107 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 31 01:08:30.521113 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Oct 31 01:08:30.521119 kernel: pinctrl core: initialized pinctrl subsystem Oct 31 01:08:30.521125 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 31 01:08:30.521131 kernel: audit: initializing netlink subsys (disabled) Oct 31 01:08:30.521137 kernel: audit: type=2000 audit(1761872908.273:1): state=initialized audit_enabled=0 res=1 Oct 31 01:08:30.521143 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 31 01:08:30.521149 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 31 01:08:30.521156 kernel: cpuidle: using governor menu Oct 31 01:08:30.521162 kernel: Simple Boot Flag at 0x36 set to 0x80 Oct 31 01:08:30.521168 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 31 01:08:30.521174 kernel: dca service started, version 1.12.1 Oct 31 01:08:30.521180 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] Oct 31 01:08:30.521193 kernel: PCI: Using configuration type 1 for base access Oct 31 01:08:30.521201 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 31 01:08:30.521661 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 31 01:08:30.521672 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 31 01:08:30.521679 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 31 01:08:30.521686 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 31 01:08:30.521693 kernel: ACPI: Added _OSI(Module Device) Oct 31 01:08:30.521700 kernel: ACPI: Added _OSI(Processor Device) Oct 31 01:08:30.521706 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 31 01:08:30.521715 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 31 01:08:30.521721 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Oct 31 01:08:30.521727 kernel: ACPI: Interpreter enabled Oct 31 01:08:30.521734 kernel: ACPI: PM: (supports S0 S1 S5) Oct 31 01:08:30.521740 kernel: ACPI: Using IOAPIC for interrupt routing Oct 31 01:08:30.521747 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 31 01:08:30.521753 kernel: PCI: Using E820 reservations for host bridge windows Oct 31 01:08:30.521762 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Oct 31 01:08:30.521769 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Oct 31 01:08:30.521875 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 31 01:08:30.521947 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Oct 31 01:08:30.522014 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Oct 31 01:08:30.522024 kernel: PCI host bridge to bus 0000:00 Oct 31 01:08:30.522093 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 31 01:08:30.522153 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Oct 31 01:08:30.523638 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 31 01:08:30.523713 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 31 01:08:30.523775 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Oct 31 01:08:30.523839 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Oct 31 01:08:30.523919 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint Oct 31 01:08:30.523993 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge Oct 31 01:08:30.524060 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 31 01:08:30.524134 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Oct 31 01:08:30.524218 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint Oct 31 01:08:30.524298 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] Oct 31 01:08:30.524364 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Oct 31 01:08:30.524431 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk Oct 31 01:08:30.524501 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Oct 31 01:08:30.524570 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk Oct 31 01:08:30.524640 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Oct 31 01:08:30.524706 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Oct 31 01:08:30.524772 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Oct 31 01:08:30.524844 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 
0x088000 conventional PCI endpoint Oct 31 01:08:30.524915 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] Oct 31 01:08:30.524984 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] Oct 31 01:08:30.525054 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint Oct 31 01:08:30.525121 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] Oct 31 01:08:30.525188 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref] Oct 31 01:08:30.525798 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] Oct 31 01:08:30.525868 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] Oct 31 01:08:30.525934 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 31 01:08:30.526008 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge Oct 31 01:08:30.526075 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Oct 31 01:08:30.526148 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 31 01:08:30.526303 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 31 01:08:30.526385 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 31 01:08:30.526457 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.526524 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 31 01:08:30.526591 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 31 01:08:30.526661 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 31 01:08:30.526727 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.526797 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.526864 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 31 01:08:30.526930 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 31 01:08:30.526995 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 31 01:08:30.527065 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 31 01:08:30.527130 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.527199 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.527288 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 31 01:08:30.527356 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 31 01:08:30.527425 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 31 01:08:30.527491 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 31 01:08:30.527557 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.527630 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.527696 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 31 01:08:30.527763 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 31 01:08:30.527831 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 31 01:08:30.527897 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.527967 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.528033 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 31 01:08:30.528098 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 31 01:08:30.528163 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 31 01:08:30.528246 
kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.528321 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.528389 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 31 01:08:30.528455 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 31 01:08:30.528521 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 31 01:08:30.528587 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.528660 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.528727 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 31 01:08:30.528792 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 31 01:08:30.528858 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 31 01:08:30.528923 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.528994 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.529063 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 31 01:08:30.529128 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 31 01:08:30.529194 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 31 01:08:30.529277 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.529350 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.529418 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 31 01:08:30.529488 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 31 01:08:30.529553 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 31 01:08:30.529618 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.529687 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.529753 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 31 01:08:30.529820 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 31 01:08:30.529887 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 31 01:08:30.529954 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 31 01:08:30.530020 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.530089 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.530156 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 31 01:08:30.530235 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 31 01:08:30.530303 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 31 01:08:30.530369 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 31 01:08:30.530434 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.530504 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.530569 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 31 01:08:30.530637 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 31 01:08:30.530703 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 31 01:08:30.530778 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.530851 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.530920 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 31 01:08:30.530988 kernel: pci 
0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 31 01:08:30.531055 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 31 01:08:30.531120 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.531192 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.531281 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 31 01:08:30.531347 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 31 01:08:30.531411 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 31 01:08:30.531481 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.531550 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.531616 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 31 01:08:30.531681 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 31 01:08:30.531746 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 31 01:08:30.531810 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.531882 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.531948 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 31 01:08:30.532012 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 31 01:08:30.532078 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 01:08:30.532146 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.533223 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.533314 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 31 01:08:30.533385 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 31 01:08:30.533456 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 31 01:08:30.533523 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 01:08:30.533589 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.533660 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.533729 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 31 01:08:30.533795 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 31 01:08:30.533860 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 31 01:08:30.533928 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 01:08:30.533994 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.534064 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.534129 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 31 01:08:30.534194 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 31 01:08:30.534280 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 31 01:08:30.534351 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 01:08:30.534416 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.534486 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.534554 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 31 01:08:30.534620 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 31 01:08:30.534687 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 
01:08:30.534752 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.534822 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.534888 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 31 01:08:30.534954 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 31 01:08:30.535019 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 01:08:30.535086 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.535157 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.536247 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 31 01:08:30.536326 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 31 01:08:30.536396 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 01:08:30.536464 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.536538 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.536606 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 31 01:08:30.536672 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 31 01:08:30.536738 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 01:08:30.536804 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.536875 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.536944 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 31 01:08:30.537010 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 31 01:08:30.537076 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 01:08:30.537141 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.537221 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.537295 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 31 01:08:30.537364 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 31 01:08:30.537429 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 31 01:08:30.537494 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 01:08:30.537559 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.537630 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.537697 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 31 01:08:30.537765 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 31 01:08:30.537830 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 31 01:08:30.537894 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 01:08:30.537959 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.538029 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.538096 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 31 01:08:30.538163 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 31 01:08:30.538842 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 01:08:30.538920 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.538994 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.539063 kernel: pci 0000:00:18.3: PCI bridge to [bus 
1e] Oct 31 01:08:30.539131 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 31 01:08:30.539201 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 31 01:08:30.539317 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.539392 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.539464 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 31 01:08:30.539529 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 31 01:08:30.539594 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 31 01:08:30.539661 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.539731 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.539796 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 31 01:08:30.539860 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 31 01:08:30.539926 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 01:08:30.539991 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.540078 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.540168 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 31 01:08:30.540344 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 31 01:08:30.540422 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 01:08:30.540507 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.540597 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 01:08:30.540675 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 31 01:08:30.540741 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 31 01:08:30.540804 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 01:08:30.540867 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.540933 kernel: pci_bus 0000:01: extended config space not accessible Oct 31 01:08:30.541018 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 31 01:08:30.541088 kernel: pci_bus 0000:02: extended config space not accessible Oct 31 01:08:30.541098 kernel: acpiphp: Slot [32] registered Oct 31 01:08:30.541105 kernel: acpiphp: Slot [33] registered Oct 31 01:08:30.541111 kernel: acpiphp: Slot [34] registered Oct 31 01:08:30.541117 kernel: acpiphp: Slot [35] registered Oct 31 01:08:30.541124 kernel: acpiphp: Slot [36] registered Oct 31 01:08:30.541130 kernel: acpiphp: Slot [37] registered Oct 31 01:08:30.541139 kernel: acpiphp: Slot [38] registered Oct 31 01:08:30.541145 kernel: acpiphp: Slot [39] registered Oct 31 01:08:30.541151 kernel: acpiphp: Slot [40] registered Oct 31 01:08:30.541158 kernel: acpiphp: Slot [41] registered Oct 31 01:08:30.541164 kernel: acpiphp: Slot [42] registered Oct 31 01:08:30.541170 kernel: acpiphp: Slot [43] registered Oct 31 01:08:30.541176 kernel: acpiphp: Slot [44] registered Oct 31 01:08:30.541183 kernel: acpiphp: Slot [45] registered Oct 31 01:08:30.541190 kernel: acpiphp: Slot [46] registered Oct 31 01:08:30.541196 kernel: acpiphp: Slot [47] registered Oct 31 01:08:30.541202 kernel: acpiphp: Slot [48] registered Oct 31 01:08:30.541218 kernel: acpiphp: Slot [49] registered Oct 31 01:08:30.541225 kernel: acpiphp: Slot [50] registered Oct 31 01:08:30.541231 kernel: acpiphp: Slot [51] registered Oct 31 
01:08:30.541237 kernel: acpiphp: Slot [52] registered Oct 31 01:08:30.541245 kernel: acpiphp: Slot [53] registered Oct 31 01:08:30.541251 kernel: acpiphp: Slot [54] registered Oct 31 01:08:30.541257 kernel: acpiphp: Slot [55] registered Oct 31 01:08:30.541264 kernel: acpiphp: Slot [56] registered Oct 31 01:08:30.541270 kernel: acpiphp: Slot [57] registered Oct 31 01:08:30.541276 kernel: acpiphp: Slot [58] registered Oct 31 01:08:30.541283 kernel: acpiphp: Slot [59] registered Oct 31 01:08:30.541290 kernel: acpiphp: Slot [60] registered Oct 31 01:08:30.541296 kernel: acpiphp: Slot [61] registered Oct 31 01:08:30.541302 kernel: acpiphp: Slot [62] registered Oct 31 01:08:30.541308 kernel: acpiphp: Slot [63] registered Oct 31 01:08:30.541394 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Oct 31 01:08:30.541461 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Oct 31 01:08:30.541526 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Oct 31 01:08:30.541602 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Oct 31 01:08:30.541669 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Oct 31 01:08:30.541756 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Oct 31 01:08:30.541831 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Oct 31 01:08:30.541900 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Oct 31 01:08:30.541968 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Oct 31 01:08:30.542039 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Oct 31 01:08:30.542106 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Oct 31 01:08:30.542172 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Oct 31 01:08:30.542260 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 31 01:08:30.542331 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 31 01:08:30.542397 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 31 01:08:30.542475 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 31 01:08:30.542544 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 31 01:08:30.542612 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 31 01:08:30.542680 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 31 01:08:30.542749 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 31 01:08:30.542821 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Oct 31 01:08:30.542891 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Oct 31 01:08:30.542957 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Oct 31 01:08:30.543022 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Oct 31 01:08:30.543089 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Oct 31 01:08:30.543154 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Oct 31 01:08:30.543233 kernel: pci 0000:0b:00.0: supports D1 D2 Oct 31 01:08:30.543307 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 31 01:08:30.543374 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 31 01:08:30.543441 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 31 01:08:30.543509 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 31 01:08:30.543577 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 31 01:08:30.543644 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 31 01:08:30.543713 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 31 01:08:30.543779 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 31 01:08:30.543846 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 31 01:08:30.543914 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 31 01:08:30.543979 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 31 01:08:30.544045 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 31 01:08:30.544116 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 31 01:08:30.547464 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 31 01:08:30.547559 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 31 01:08:30.547633 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 31 01:08:30.547702 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 31 01:08:30.547772 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 31 01:08:30.547845 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 31 01:08:30.547919 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 31 01:08:30.547987 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 31 01:08:30.548056 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 31 01:08:30.548126 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 31 01:08:30.548193 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 31 01:08:30.551329 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 31 01:08:30.551406 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 31 01:08:30.551417 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Oct 31 01:08:30.551424 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Oct 31 01:08:30.551430 kernel: ACPI: PCI: Interrupt link LNKB disabled Oct 31 01:08:30.551437 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 31 01:08:30.551444 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Oct 31 01:08:30.551453 kernel: iommu: Default domain type: Translated Oct 31 01:08:30.551460 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 31 01:08:30.551466 kernel: PCI: Using ACPI for IRQ routing Oct 31 01:08:30.551473 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 31 01:08:30.551480 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Oct 31 01:08:30.551486 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Oct 31 01:08:30.551554 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Oct 31 01:08:30.551625 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Oct 31 01:08:30.551691 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 31 01:08:30.551701 kernel: vgaarb: loaded Oct 31 01:08:30.551708 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Oct 31 01:08:30.551715 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Oct 31 01:08:30.551722 kernel: clocksource: Switched to clocksource tsc-early Oct 31 01:08:30.551729 kernel: VFS: Disk quotas dquot_6.6.0 Oct 31 01:08:30.551737 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 31 01:08:30.551744 kernel: pnp: PnP ACPI init Oct 31 01:08:30.551819 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Oct 31 01:08:30.551883 kernel: system 
00:00: [io 0x1040-0x104f] has been reserved Oct 31 01:08:30.551945 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Oct 31 01:08:30.552015 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Oct 31 01:08:30.552085 kernel: pnp 00:06: [dma 2] Oct 31 01:08:30.552151 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Oct 31 01:08:30.552225 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Oct 31 01:08:30.552289 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Oct 31 01:08:30.552298 kernel: pnp: PnP ACPI: found 8 devices Oct 31 01:08:30.552307 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 31 01:08:30.552314 kernel: NET: Registered PF_INET protocol family Oct 31 01:08:30.552321 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 31 01:08:30.552328 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 31 01:08:30.552335 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 31 01:08:30.552341 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 31 01:08:30.552348 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 31 01:08:30.552355 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 31 01:08:30.552362 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 31 01:08:30.552369 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 31 01:08:30.552376 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 31 01:08:30.552382 kernel: NET: Registered PF_XDP protocol family Oct 31 01:08:30.552449 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Oct 31 01:08:30.552518 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 31 01:08:30.552588 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 31 01:08:30.552656 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 31 01:08:30.552723 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 31 01:08:30.552790 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Oct 31 01:08:30.552857 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Oct 31 01:08:30.552925 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Oct 31 01:08:30.552996 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Oct 31 01:08:30.553063 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Oct 31 01:08:30.553130 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Oct 31 01:08:30.553197 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Oct 31 01:08:30.554570 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Oct 31 01:08:30.554648 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Oct 31 01:08:30.554726 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Oct 31 01:08:30.554798 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Oct 31 
01:08:30.558827 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Oct 31 01:08:30.558903 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Oct 31 01:08:30.558975 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Oct 31 01:08:30.559045 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Oct 31 01:08:30.559114 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Oct 31 01:08:30.559188 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Oct 31 01:08:30.559272 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Oct 31 01:08:30.559343 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Oct 31 01:08:30.559412 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Oct 31 01:08:30.559481 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.559555 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.559625 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.559692 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.559760 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.559828 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.559896 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.559963 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.560031 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.560100 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.560168 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.561679 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.561756 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.561825 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.561895 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.561982 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.562049 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.562115 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.562181 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.562918 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.562990 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.563061 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.563128 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.563192 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.563294 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.563359 kernel: pci 0000:00:17.5: 
bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.563442 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.563542 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.563611 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.563676 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.563742 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.563808 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.563874 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.565948 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.566030 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.566099 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.566167 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.567018 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.567239 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.567313 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.567386 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.567452 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.567519 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.567586 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.567656 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.567722 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.567789 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.567856 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.567923 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.567987 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.568053 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.568117 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.568181 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.569278 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.569355 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.569429 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.569499 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.569566 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.569636 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.569704 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.569772 kernel: pci 0000:00:17.4: bridge window [io size 
0x1000]: can't assign; no space Oct 31 01:08:30.569840 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.569912 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.569981 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.571982 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.572063 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.572137 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.572205 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.572310 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.572387 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.572461 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.572531 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.572600 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.572670 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.572743 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.572810 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.572878 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.572943 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.573009 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.573075 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.573142 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.573206 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.574343 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 01:08:30.574416 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Oct 31 01:08:30.574486 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 31 01:08:30.574554 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Oct 31 01:08:30.574620 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 31 01:08:30.574685 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 31 01:08:30.574753 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 31 01:08:30.574825 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Oct 31 01:08:30.574892 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 31 01:08:30.574958 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 31 01:08:30.575023 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 31 01:08:30.575088 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Oct 31 01:08:30.575156 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 31 01:08:30.576142 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 31 01:08:30.578152 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 31 01:08:30.578255 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit 
pref] Oct 31 01:08:30.578331 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 31 01:08:30.578400 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 31 01:08:30.578468 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 31 01:08:30.578535 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 31 01:08:30.578603 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 31 01:08:30.578673 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 31 01:08:30.578740 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 31 01:08:30.578807 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 31 01:08:30.578873 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 31 01:08:30.578940 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 31 01:08:30.579008 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 31 01:08:30.579078 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 31 01:08:30.579144 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 31 01:08:30.579222 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 31 01:08:30.579294 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 31 01:08:30.579361 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 31 01:08:30.579428 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 31 01:08:30.579498 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 31 01:08:30.579564 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 31 01:08:30.579635 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Oct 31 01:08:30.581367 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 31 01:08:30.581442 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 31 01:08:30.581513 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 31 01:08:30.581581 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Oct 31 01:08:30.581656 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 31 01:08:30.581723 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 31 01:08:30.581793 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 31 01:08:30.581862 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 31 01:08:30.581930 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 31 01:08:30.581997 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 31 01:08:30.582064 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 31 01:08:30.582132 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 31 01:08:30.582200 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 31 01:08:30.582289 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 31 01:08:30.582355 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 31 01:08:30.582425 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 31 01:08:30.582492 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 31 01:08:30.582560 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 31 01:08:30.582638 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 31 01:08:30.582708 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 31 01:08:30.582783 kernel: pci 
0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 31 01:08:30.582860 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 31 01:08:30.582927 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 31 01:08:30.582993 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 31 01:08:30.583064 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 31 01:08:30.583132 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 31 01:08:30.583199 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 01:08:30.583283 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 31 01:08:30.583350 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 31 01:08:30.583418 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 31 01:08:30.583487 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 01:08:30.583555 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 31 01:08:30.583621 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 31 01:08:30.583688 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 31 01:08:30.583755 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 01:08:30.583823 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 31 01:08:30.583890 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 31 01:08:30.583959 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 31 01:08:30.584025 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 01:08:30.584093 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 31 01:08:30.584158 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 31 01:08:30.584245 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 01:08:30.584317 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 31 01:08:30.584384 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 31 01:08:30.584454 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 01:08:30.584523 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 31 01:08:30.584588 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 31 01:08:30.584655 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 01:08:30.584724 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 31 01:08:30.584791 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 31 01:08:30.584859 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 01:08:30.584927 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 31 01:08:30.584994 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 31 01:08:30.585062 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 01:08:30.585130 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 31 01:08:30.585198 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 31 01:08:30.586288 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 31 01:08:30.586354 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 01:08:30.586421 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 31 01:08:30.586486 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 31 01:08:30.586551 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] 
Oct 31 01:08:30.586616 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 01:08:30.586683 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 31 01:08:30.586747 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 31 01:08:30.586816 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 01:08:30.586882 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 31 01:08:30.586947 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 31 01:08:30.587012 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 31 01:08:30.587078 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 31 01:08:30.587143 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 31 01:08:30.587233 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 31 01:08:30.587340 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 31 01:08:30.587406 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 31 01:08:30.587471 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 01:08:30.587538 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 31 01:08:30.587603 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 31 01:08:30.587671 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 01:08:30.587739 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 31 01:08:30.587803 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 31 01:08:30.587869 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 01:08:30.587933 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Oct 31 01:08:30.587995 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 31 01:08:30.588053 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 31 01:08:30.588110 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Oct 31 01:08:30.588186 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Oct 31 01:08:30.588260 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Oct 31 01:08:30.588322 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Oct 31 01:08:30.588401 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 31 01:08:30.588460 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Oct 31 01:08:30.588519 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 31 01:08:30.588578 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 31 01:08:30.588638 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Oct 31 01:08:30.588697 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Oct 31 01:08:30.588765 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Oct 31 01:08:30.588825 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Oct 31 01:08:30.588885 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Oct 31 01:08:30.588949 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Oct 31 01:08:30.589010 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Oct 31 01:08:30.589069 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Oct 31 01:08:30.589155 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Oct 31 01:08:30.589230 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Oct 
31 01:08:30.589296 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Oct 31 01:08:30.589397 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Oct 31 01:08:30.589474 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Oct 31 01:08:30.589539 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Oct 31 01:08:30.589602 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 31 01:08:30.589666 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Oct 31 01:08:30.589727 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Oct 31 01:08:30.589792 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Oct 31 01:08:30.589852 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Oct 31 01:08:30.589919 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Oct 31 01:08:30.589980 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Oct 31 01:08:30.590044 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Oct 31 01:08:30.590104 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Oct 31 01:08:30.590166 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Oct 31 01:08:30.590253 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Oct 31 01:08:30.590316 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Oct 31 01:08:30.590375 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Oct 31 01:08:30.590440 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Oct 31 01:08:30.590500 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Oct 31 01:08:30.590582 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Oct 31 01:08:30.590646 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Oct 31 01:08:30.590708 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 31 01:08:30.590772 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Oct 31 01:08:30.590834 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 31 01:08:30.590904 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Oct 31 01:08:30.590965 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Oct 31 01:08:30.591032 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Oct 31 01:08:30.591094 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Oct 31 01:08:30.591158 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Oct 31 01:08:30.591232 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 01:08:30.591304 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Oct 31 01:08:30.591366 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Oct 31 01:08:30.591427 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 01:08:30.591492 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Oct 31 01:08:30.591554 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Oct 31 01:08:30.591619 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 01:08:30.591686 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Oct 31 01:08:30.591749 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Oct 31 01:08:30.591821 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 01:08:30.591890 kernel: 
pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Oct 31 01:08:30.591952 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 01:08:30.592021 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Oct 31 01:08:30.592083 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 01:08:30.592150 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Oct 31 01:08:30.592222 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 01:08:30.592293 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Oct 31 01:08:30.592357 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 01:08:30.592423 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Oct 31 01:08:30.592485 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 01:08:30.592550 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Oct 31 01:08:30.592611 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Oct 31 01:08:30.592675 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 01:08:30.592740 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Oct 31 01:08:30.592801 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Oct 31 01:08:30.592862 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 01:08:30.592927 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Oct 31 01:08:30.592989 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 01:08:30.593059 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Oct 31 01:08:30.593120 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 31 01:08:30.593187 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Oct 31 01:08:30.593281 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Oct 31 01:08:30.593346 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Oct 31 01:08:30.593409 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 01:08:30.593493 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Oct 31 01:08:30.593562 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 01:08:30.593629 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Oct 31 01:08:30.593691 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 01:08:30.593762 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 31 01:08:30.593774 kernel: PCI: CLS 32 bytes, default 64 Oct 31 01:08:30.593781 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 31 01:08:30.593788 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 31 01:08:30.593795 kernel: clocksource: Switched to clocksource tsc Oct 31 01:08:30.593802 kernel: Initialise system trusted keyrings Oct 31 01:08:30.593809 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 31 01:08:30.593817 kernel: Key type asymmetric registered Oct 31 01:08:30.593823 kernel: Asymmetric key parser 'x509' registered Oct 31 01:08:30.593830 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 31 01:08:30.593837 kernel: io scheduler mq-deadline registered Oct 31 01:08:30.593843 kernel: io scheduler kyber registered Oct 31 01:08:30.593850 kernel: io scheduler bfq 
registered Oct 31 01:08:30.593919 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Oct 31 01:08:30.593988 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.594057 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Oct 31 01:08:30.594125 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.594193 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Oct 31 01:08:30.594281 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.594349 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Oct 31 01:08:30.594418 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.594487 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Oct 31 01:08:30.594554 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.594622 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Oct 31 01:08:30.594691 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.594770 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Oct 31 01:08:30.594841 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.594908 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Oct 31 01:08:30.594976 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.595042 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Oct 31 01:08:30.595108 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.595176 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Oct 31 01:08:30.595257 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.595325 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Oct 31 01:08:30.595392 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.595461 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Oct 31 01:08:30.595528 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.595605 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Oct 31 01:08:30.595676 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.595743 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Oct 31 01:08:30.595811 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ 
Oct 31 01:08:30.595878 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Oct 31 01:08:30.595963 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.596030 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Oct 31 01:08:30.596099 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.596169 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Oct 31 01:08:30.596267 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.596336 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Oct 31 01:08:30.596402 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.596468 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Oct 31 01:08:30.596538 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.596605 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Oct 31 01:08:30.596672 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.596740 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Oct 31 01:08:30.596807 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.596875 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Oct 31 01:08:30.596943 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.597010 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Oct 31 01:08:30.597076 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.597142 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Oct 31 01:08:30.597216 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.597290 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Oct 31 01:08:30.597360 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.597427 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Oct 31 01:08:30.597493 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.597559 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Oct 31 01:08:30.597624 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.597693 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Oct 31 01:08:30.597759 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 
01:08:30.597828 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Oct 31 01:08:30.597896 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.597963 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Oct 31 01:08:30.598029 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.598096 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Oct 31 01:08:30.598162 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.598245 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Oct 31 01:08:30.598315 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:08:30.598327 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 31 01:08:30.598334 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 31 01:08:30.598342 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 31 01:08:30.598350 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Oct 31 01:08:30.598357 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 31 01:08:30.598364 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 31 01:08:30.598432 kernel: rtc_cmos 00:01: registered as rtc0 Oct 31 01:08:30.598495 kernel: rtc_cmos 00:01: setting system clock to 2025-10-31T01:08:29 UTC (1761872909) Oct 31 01:08:30.598505 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 31 01:08:30.598564 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Oct 31 01:08:30.598576 kernel: intel_pstate: CPU model not supported Oct 31 01:08:30.598583 kernel: NET: Registered PF_INET6 protocol family Oct 31 01:08:30.598590 kernel: Segment Routing with IPv6 Oct 31 01:08:30.598597 kernel: In-situ OAM (IOAM) with IPv6 Oct 31 01:08:30.598604 kernel: NET: Registered PF_PACKET protocol family Oct 31 01:08:30.598611 kernel: Key type dns_resolver registered Oct 31 01:08:30.598618 kernel: IPI shorthand broadcast: enabled Oct 31 01:08:30.598626 kernel: sched_clock: Marking stable (1434003732, 169682910)->(1622417215, -18730573) Oct 31 01:08:30.598633 kernel: registered taskstats version 1 Oct 31 01:08:30.598639 kernel: Loading compiled-in X.509 certificates Oct 31 01:08:30.598646 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 37af3a3e1542a1adffb64ee08d2e0c081809dc67' Oct 31 01:08:30.598652 kernel: Demotion targets for Node 0: null Oct 31 01:08:30.598659 kernel: Key type .fscrypt registered Oct 31 01:08:30.598666 kernel: Key type fscrypt-provisioning registered Oct 31 01:08:30.598673 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 31 01:08:30.598680 kernel: ima: Allocated hash algorithm: sha1 Oct 31 01:08:30.598687 kernel: ima: No architecture policies found Oct 31 01:08:30.598694 kernel: clk: Disabling unused clocks Oct 31 01:08:30.598701 kernel: Freeing unused kernel image (initmem) memory: 15964K Oct 31 01:08:30.598707 kernel: Write protecting the kernel read-only data: 40960k Oct 31 01:08:30.598714 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 31 01:08:30.598722 kernel: Run /init as init process Oct 31 01:08:30.598728 kernel: with arguments: Oct 31 01:08:30.598735 kernel: /init Oct 31 01:08:30.598742 kernel: with environment: Oct 31 01:08:30.598748 kernel: HOME=/ Oct 31 01:08:30.598754 kernel: TERM=linux Oct 31 01:08:30.598761 kernel: SCSI subsystem initialized Oct 31 01:08:30.598767 kernel: VMware PVSCSI driver - version 1.0.7.0-k Oct 31 01:08:30.598775 kernel: vmw_pvscsi: using 64bit dma Oct 31 01:08:30.598782 kernel: vmw_pvscsi: max_id: 16 Oct 31 01:08:30.598789 kernel: vmw_pvscsi: setting ring_pages to 8 Oct 31 01:08:30.598796 kernel: vmw_pvscsi: enabling reqCallThreshold Oct 31 01:08:30.598803 kernel: vmw_pvscsi: driver-based request coalescing enabled Oct 31 01:08:30.598810 kernel: vmw_pvscsi: using MSI-X Oct 31 01:08:30.598887 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Oct 31 01:08:30.598960 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Oct 31 01:08:30.599043 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Oct 31 01:08:30.599116 kernel: sd 0:0:0:0: [sda] 25804800 512-byte logical blocks: (13.2 GB/12.3 GiB) Oct 31 01:08:30.599187 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 31 01:08:30.599267 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Oct 31 01:08:30.599339 kernel: sd 0:0:0:0: [sda] Cache data unavailable Oct 31 01:08:30.599409 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Oct 31 01:08:30.599420 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 31 01:08:30.599488 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 31 01:08:30.599497 kernel: libata version 3.00 loaded. Oct 31 01:08:30.599565 kernel: ata_piix 0000:00:07.1: version 2.13 Oct 31 01:08:30.599636 kernel: scsi host1: ata_piix Oct 31 01:08:30.599710 kernel: scsi host2: ata_piix Oct 31 01:08:30.599720 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Oct 31 01:08:30.599728 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Oct 31 01:08:30.599734 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Oct 31 01:08:30.599810 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Oct 31 01:08:30.599882 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Oct 31 01:08:30.599894 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 31 01:08:30.599901 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 31 01:08:30.599908 kernel: device-mapper: uevent: version 1.0.3 Oct 31 01:08:30.599915 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 31 01:08:30.599985 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 31 01:08:30.599995 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 31 01:08:30.600004 kernel: raid6: avx2x4 gen() 46004 MB/s Oct 31 01:08:30.600011 kernel: raid6: avx2x2 gen() 51604 MB/s Oct 31 01:08:30.600018 kernel: raid6: avx2x1 gen() 44074 MB/s Oct 31 01:08:30.600024 kernel: raid6: using algorithm avx2x2 gen() 51604 MB/s Oct 31 01:08:30.600031 kernel: raid6: .... xor() 32040 MB/s, rmw enabled Oct 31 01:08:30.600038 kernel: raid6: using avx2x2 recovery algorithm Oct 31 01:08:30.600044 kernel: xor: automatically using best checksumming function avx Oct 31 01:08:30.600051 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 31 01:08:30.600059 kernel: BTRFS: device fsid 6b7146e0-0df0-402b-9935-18c5cf141a3e devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (196) Oct 31 01:08:30.600066 kernel: BTRFS info (device dm-0): first mount of filesystem 6b7146e0-0df0-402b-9935-18c5cf141a3e Oct 31 01:08:30.600073 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 31 01:08:30.600079 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 31 01:08:30.600087 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 31 01:08:30.600094 kernel: BTRFS info (device dm-0): enabling free space tree Oct 31 01:08:30.600100 kernel: loop: module loaded Oct 31 01:08:30.600108 kernel: loop0: detected capacity change from 0 to 100120 Oct 31 01:08:30.600115 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 31 01:08:30.600122 systemd[1]: Successfully made /usr/ read-only. Oct 31 01:08:30.600132 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 31 01:08:30.600140 systemd[1]: Detected virtualization vmware. Oct 31 01:08:30.600148 systemd[1]: Detected architecture x86-64. Oct 31 01:08:30.600155 systemd[1]: Running in initrd. Oct 31 01:08:30.600162 systemd[1]: No hostname configured, using default hostname. Oct 31 01:08:30.600170 systemd[1]: Hostname set to . Oct 31 01:08:30.600177 systemd[1]: Initializing machine ID from random generator. Oct 31 01:08:30.600184 systemd[1]: Queued start job for default target initrd.target. Oct 31 01:08:30.600191 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 31 01:08:30.600199 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 01:08:30.600206 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 01:08:30.600306 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 31 01:08:30.600314 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 01:08:30.600322 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Oct 31 01:08:30.600332 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 31 01:08:30.600339 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 01:08:30.600347 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 01:08:30.600355 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 31 01:08:30.600362 systemd[1]: Reached target paths.target - Path Units. Oct 31 01:08:30.600369 systemd[1]: Reached target slices.target - Slice Units. Oct 31 01:08:30.600377 systemd[1]: Reached target swap.target - Swaps. Oct 31 01:08:30.600385 systemd[1]: Reached target timers.target - Timer Units. Oct 31 01:08:30.600392 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 01:08:30.600400 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 01:08:30.600407 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 31 01:08:30.600414 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 31 01:08:30.600422 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 01:08:30.600429 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 01:08:30.600437 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 01:08:30.600444 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 01:08:30.600452 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Oct 31 01:08:30.600459 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 31 01:08:30.600466 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 01:08:30.600474 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 31 01:08:30.600481 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 31 01:08:30.600489 systemd[1]: Starting systemd-fsck-usr.service... Oct 31 01:08:30.600496 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 01:08:30.600503 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 01:08:30.600510 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 01:08:30.600519 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 31 01:08:30.600527 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 01:08:30.600534 systemd[1]: Finished systemd-fsck-usr.service. Oct 31 01:08:30.600541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 31 01:08:30.600566 systemd-journald[333]: Collecting audit messages is disabled. Oct 31 01:08:30.600586 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 31 01:08:30.600594 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 01:08:30.600601 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Oct 31 01:08:30.600609 kernel: Bridge firewalling registered Oct 31 01:08:30.600617 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 01:08:30.600625 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 01:08:30.600633 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Oct 31 01:08:30.600641 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 01:08:30.600648 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 01:08:30.600655 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 01:08:30.600663 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 01:08:30.600672 systemd-journald[333]: Journal started Oct 31 01:08:30.600686 systemd-journald[333]: Runtime Journal (/run/log/journal/d1adbab1a03c439d9ba30e615bfebfc7) is 4.8M, max 38.5M, 33.7M free. Oct 31 01:08:30.565184 systemd-modules-load[334]: Inserted module 'br_netfilter' Oct 31 01:08:30.603222 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 31 01:08:30.606223 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 01:08:30.609283 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 01:08:30.614743 systemd-tmpfiles[363]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 31 01:08:30.617336 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 01:08:30.621973 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 01:08:30.623599 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 31 01:08:30.636503 dracut-cmdline[377]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 ip=139.178.70.100::139.178.70.97:28::ens192:off:1.1.1.1:1.0.0.1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=0b271910e93d73fd8f787b704ab61b381ac88c2b5070fc1461584f1dcd7f91c9 Oct 31 01:08:30.638703 systemd-resolved[352]: Positive Trust Anchors: Oct 31 01:08:30.638711 systemd-resolved[352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 01:08:30.638713 systemd-resolved[352]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 31 01:08:30.638735 systemd-resolved[352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 01:08:30.653884 systemd-resolved[352]: Defaulting to hostname 'linux'. Oct 31 01:08:30.654812 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Oct 31 01:08:30.655055 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 01:08:30.704227 kernel: Loading iSCSI transport class v2.0-870. Oct 31 01:08:30.716231 kernel: iscsi: registered transport (tcp) Oct 31 01:08:30.742228 kernel: iscsi: registered transport (qla4xxx) Oct 31 01:08:30.742273 kernel: QLogic iSCSI HBA Driver Oct 31 01:08:30.757326 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 31 01:08:30.771768 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 01:08:30.773077 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 31 01:08:30.795731 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 31 01:08:30.796849 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 31 01:08:30.798272 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 31 01:08:30.818500 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 31 01:08:30.820060 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 01:08:30.839663 systemd-udevd[615]: Using default interface naming scheme 'v257'. Oct 31 01:08:30.846373 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 01:08:30.849643 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 31 01:08:30.860973 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 01:08:30.862905 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 01:08:30.867911 dracut-pre-trigger[697]: rd.md=0: removing MD RAID activation Oct 31 01:08:30.884035 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 31 01:08:30.886289 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 01:08:30.895020 systemd-networkd[720]: lo: Link UP Oct 31 01:08:30.895207 systemd-networkd[720]: lo: Gained carrier Oct 31 01:08:30.895513 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 01:08:30.895688 systemd[1]: Reached target network.target - Network. Oct 31 01:08:30.967031 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 01:08:30.968735 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 31 01:08:31.068478 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Oct 31 01:08:31.071108 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 31 01:08:31.081226 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Oct 31 01:08:31.083416 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Oct 31 01:08:31.089614 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Oct 31 01:08:31.089740 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Oct 31 01:08:31.125935 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Oct 31 01:08:31.137532 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Oct 31 01:08:31.154234 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Oct 31 01:08:31.158287 (udev-worker)[759]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. 
Oct 31 01:08:31.161980 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 01:08:31.162087 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 01:08:31.162712 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 01:08:31.163250 kernel: cryptd: max_cpu_qlen set to 1000 Oct 31 01:08:31.162959 systemd-networkd[720]: eth0: Interface name change detected, renamed to ens192. Oct 31 01:08:31.165414 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 01:08:31.185238 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Oct 31 01:08:31.193237 kernel: AES CTR mode by8 optimization enabled Oct 31 01:08:31.197704 systemd-networkd[720]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Oct 31 01:08:31.207553 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 31 01:08:31.207709 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 31 01:08:31.214809 systemd-networkd[720]: ens192: Link UP Oct 31 01:08:31.217318 systemd-networkd[720]: ens192: Gained carrier Oct 31 01:08:31.229244 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 01:08:31.237630 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 31 01:08:31.238117 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 01:08:31.238408 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 01:08:31.238498 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 01:08:31.239425 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 31 01:08:31.250552 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 31 01:08:32.172903 disk-uuid[791]: Warning: The kernel is still using the old partition table. Oct 31 01:08:32.172903 disk-uuid[791]: The new table will be used at the next reboot or after you Oct 31 01:08:32.172903 disk-uuid[791]: run partprobe(8) or kpartx(8) Oct 31 01:08:32.172903 disk-uuid[791]: The operation has completed successfully. Oct 31 01:08:32.176063 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 01:08:32.176125 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 31 01:08:32.176937 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 31 01:08:32.198136 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (884) Oct 31 01:08:32.198173 kernel: BTRFS info (device sda6): first mount of filesystem f2323c14-05e7-4d2c-a4ad-0e69533657d6 Oct 31 01:08:32.198182 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 01:08:32.202769 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 31 01:08:32.202794 kernel: BTRFS info (device sda6): enabling free space tree Oct 31 01:08:32.207224 kernel: BTRFS info (device sda6): last unmount of filesystem f2323c14-05e7-4d2c-a4ad-0e69533657d6 Oct 31 01:08:32.208410 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 31 01:08:32.209248 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
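The file /etc/systemd/network/10-dracut-cmdline-99.network that systemd-networkd applies to ens192 here is generated in the initrd from that ip= argument. Its exact contents are not captured in this log; a minimal sketch consistent with those parameters would be roughly:

    [Match]
    Name=ens192

    [Network]
    Address=139.178.70.100/28
    Gateway=139.178.70.97
    DNS=1.1.1.1
    DNS=1.0.0.1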
Oct 31 01:08:32.356082 ignition[903]: Ignition 2.22.0 Oct 31 01:08:32.356089 ignition[903]: Stage: fetch-offline Oct 31 01:08:32.356110 ignition[903]: no configs at "/usr/lib/ignition/base.d" Oct 31 01:08:32.356116 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 01:08:32.356167 ignition[903]: parsed url from cmdline: "" Oct 31 01:08:32.358457 systemd-networkd[720]: ens192: Gained IPv6LL Oct 31 01:08:32.356169 ignition[903]: no config URL provided Oct 31 01:08:32.356172 ignition[903]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 01:08:32.356177 ignition[903]: no config at "/usr/lib/ignition/user.ign" Oct 31 01:08:32.356930 ignition[903]: config successfully fetched Oct 31 01:08:32.356949 ignition[903]: parsing config with SHA512: 585170059c8c4c73f70ab90a0fa9dd50de81da319ce2d223fb11618eff8f761c3ba71f054770e5b3f3c86b0618fafdbeed184c6c90a794e55135d2d4dc821420 Oct 31 01:08:32.360396 unknown[903]: fetched base config from "system" Oct 31 01:08:32.360703 unknown[903]: fetched user config from "vmware" Oct 31 01:08:32.360996 ignition[903]: fetch-offline: fetch-offline passed Oct 31 01:08:32.361037 ignition[903]: Ignition finished successfully Oct 31 01:08:32.362084 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 01:08:32.362443 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 31 01:08:32.363082 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 31 01:08:32.378574 ignition[909]: Ignition 2.22.0 Oct 31 01:08:32.378586 ignition[909]: Stage: kargs Oct 31 01:08:32.378679 ignition[909]: no configs at "/usr/lib/ignition/base.d" Oct 31 01:08:32.378684 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 01:08:32.379126 ignition[909]: kargs: kargs passed Oct 31 01:08:32.379154 ignition[909]: Ignition finished successfully Oct 31 01:08:32.380420 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 31 01:08:32.381395 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 31 01:08:32.395111 ignition[915]: Ignition 2.22.0 Oct 31 01:08:32.395118 ignition[915]: Stage: disks Oct 31 01:08:32.395197 ignition[915]: no configs at "/usr/lib/ignition/base.d" Oct 31 01:08:32.395202 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 01:08:32.395813 ignition[915]: disks: disks passed Oct 31 01:08:32.395842 ignition[915]: Ignition finished successfully Oct 31 01:08:32.396708 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 31 01:08:32.397192 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 31 01:08:32.397481 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 31 01:08:32.397722 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 01:08:32.397926 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 01:08:32.398135 systemd[1]: Reached target basic.target - Basic System. Oct 31 01:08:32.398876 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 31 01:08:32.429963 systemd-fsck[923]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Oct 31 01:08:32.430750 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 31 01:08:32.431744 systemd[1]: Mounting sysroot.mount - /sysroot... 
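Ignition's fetch-offline stage above finds no config on the kernel command line or under /usr/lib/ignition and then fetches the user config from VMware guestinfo properties (the later "deleting config from guestinfo" entry in this log points to the same source). Supplying such a config happens before first boot; a hypothetical example with govc, where the VM name and config path are placeholders, might look roughly like:

    govc vm.change -vm my-flatcar-vm \
      -e "guestinfo.ignition.config.data=$(base64 -w0 config.ign)" \
      -e "guestinfo.ignition.config.data.encoding=base64"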
Oct 31 01:08:32.513663 kernel: EXT4-fs (sda9): mounted filesystem a07d2dcb-9505-4a27-b1c0-7adbc2e26272 r/w with ordered data mode. Quota mode: none. Oct 31 01:08:32.513142 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 31 01:08:32.513492 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 31 01:08:32.514831 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 01:08:32.517250 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 31 01:08:32.517669 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 31 01:08:32.517852 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 31 01:08:32.517867 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 31 01:08:32.525267 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 31 01:08:32.526020 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 31 01:08:32.531224 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (931) Oct 31 01:08:32.534265 kernel: BTRFS info (device sda6): first mount of filesystem f2323c14-05e7-4d2c-a4ad-0e69533657d6 Oct 31 01:08:32.534284 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 01:08:32.539219 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 31 01:08:32.539242 kernel: BTRFS info (device sda6): enabling free space tree Oct 31 01:08:32.540169 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 31 01:08:32.564810 initrd-setup-root[955]: cut: /sysroot/etc/passwd: No such file or directory Oct 31 01:08:32.567740 initrd-setup-root[962]: cut: /sysroot/etc/group: No such file or directory Oct 31 01:08:32.570354 initrd-setup-root[969]: cut: /sysroot/etc/shadow: No such file or directory Oct 31 01:08:32.572683 initrd-setup-root[976]: cut: /sysroot/etc/gshadow: No such file or directory Oct 31 01:08:32.631667 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 31 01:08:32.632652 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 31 01:08:32.633283 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 31 01:08:32.646429 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 31 01:08:32.648241 kernel: BTRFS info (device sda6): last unmount of filesystem f2323c14-05e7-4d2c-a4ad-0e69533657d6 Oct 31 01:08:32.665063 ignition[1043]: INFO : Ignition 2.22.0 Oct 31 01:08:32.665388 ignition[1043]: INFO : Stage: mount Oct 31 01:08:32.665749 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 01:08:32.665882 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 01:08:32.666698 ignition[1043]: INFO : mount: mount passed Oct 31 01:08:32.666847 ignition[1043]: INFO : Ignition finished successfully Oct 31 01:08:32.667729 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 31 01:08:32.668429 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 31 01:08:32.679300 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 01:08:32.718563 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Oct 31 01:08:32.732225 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1055) Oct 31 01:08:32.734552 kernel: BTRFS info (device sda6): first mount of filesystem f2323c14-05e7-4d2c-a4ad-0e69533657d6 Oct 31 01:08:32.734566 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 01:08:32.738278 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 31 01:08:32.738301 kernel: BTRFS info (device sda6): enabling free space tree Oct 31 01:08:32.739645 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 31 01:08:32.755834 ignition[1073]: INFO : Ignition 2.22.0 Oct 31 01:08:32.755834 ignition[1073]: INFO : Stage: files Oct 31 01:08:32.756186 ignition[1073]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 01:08:32.756186 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 01:08:32.756552 ignition[1073]: DEBUG : files: compiled without relabeling support, skipping Oct 31 01:08:32.757283 ignition[1073]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 31 01:08:32.757283 ignition[1073]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 31 01:08:32.759815 ignition[1073]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 31 01:08:32.759959 ignition[1073]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 31 01:08:32.760091 ignition[1073]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 31 01:08:32.760050 unknown[1073]: wrote ssh authorized keys file for user: core Oct 31 01:08:32.761885 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 31 01:08:32.762168 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 31 01:08:32.803250 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 31 01:08:32.862822 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 31 01:08:32.863080 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 31 01:08:32.863080 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 31 01:08:32.863080 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 31 01:08:32.863080 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 31 01:08:32.863080 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 01:08:32.863853 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 01:08:32.863853 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 01:08:32.863853 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 01:08:32.891896 ignition[1073]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 01:08:32.892141 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 01:08:32.892141 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 31 01:08:32.901367 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 31 01:08:32.901367 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 31 01:08:32.901845 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Oct 31 01:08:33.380198 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 31 01:08:33.683481 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 31 01:08:33.683481 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Oct 31 01:08:33.684241 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Oct 31 01:08:33.684241 ignition[1073]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 31 01:08:33.684690 ignition[1073]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 01:08:33.685038 ignition[1073]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 01:08:33.685038 ignition[1073]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 31 01:08:33.685038 ignition[1073]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 31 01:08:33.685781 ignition[1073]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 31 01:08:33.685781 ignition[1073]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 31 01:08:33.685781 ignition[1073]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 31 01:08:33.685781 ignition[1073]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 31 01:08:33.707147 ignition[1073]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 31 01:08:33.709183 ignition[1073]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 31 01:08:33.709367 ignition[1073]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 31 01:08:33.709367 ignition[1073]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 31 01:08:33.709367 ignition[1073]: INFO 
: files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 31 01:08:33.709367 ignition[1073]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 31 01:08:33.710705 ignition[1073]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 31 01:08:33.710705 ignition[1073]: INFO : files: files passed Oct 31 01:08:33.710705 ignition[1073]: INFO : Ignition finished successfully Oct 31 01:08:33.710308 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 31 01:08:33.712301 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 31 01:08:33.712946 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 31 01:08:33.724506 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 31 01:08:33.724573 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 31 01:08:33.729150 initrd-setup-root-after-ignition[1107]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 01:08:33.729150 initrd-setup-root-after-ignition[1107]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 31 01:08:33.729830 initrd-setup-root-after-ignition[1111]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 01:08:33.730966 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 31 01:08:33.731184 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 31 01:08:33.731750 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 31 01:08:33.755511 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 31 01:08:33.755743 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 31 01:08:33.756121 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 31 01:08:33.756352 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 31 01:08:33.756744 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 31 01:08:33.757369 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 31 01:08:33.772666 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 31 01:08:33.773533 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 31 01:08:33.786918 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 31 01:08:33.787006 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 31 01:08:33.787200 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 01:08:33.787487 systemd[1]: Stopped target timers.target - Timer Units. Oct 31 01:08:33.787679 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 31 01:08:33.787742 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 31 01:08:33.788085 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 31 01:08:33.788265 systemd[1]: Stopped target basic.target - Basic System. Oct 31 01:08:33.788449 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 31 01:08:33.788639 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
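The files stage just logged (an SSH key for core, the helm tarball under /opt, the kubernetes sysext image and its /etc/extensions link, update.conf, 00-vmware.network, prepare-helm.service enabled and the preset for coreos-metadata.service disabled) is the kind of outcome a Butane config along the following lines would produce. This snippet is a reconstruction for illustration only, not the config actually used, and the SSH key is a placeholder:

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...placeholder
    storage:
      files:
        - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw
          contents:
            source: https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
        - name: coreos-metadata.service
          enabled: false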
Oct 31 01:08:33.788839 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 31 01:08:33.789044 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 31 01:08:33.789256 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 31 01:08:33.789447 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 01:08:33.789655 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 31 01:08:33.789862 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 31 01:08:33.790043 systemd[1]: Stopped target swap.target - Swaps. Oct 31 01:08:33.790252 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 31 01:08:33.790362 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 31 01:08:33.790750 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 31 01:08:33.790935 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 01:08:33.791117 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 31 01:08:33.791161 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 01:08:33.791357 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 31 01:08:33.791416 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 31 01:08:33.791762 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 31 01:08:33.791826 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 01:08:33.792066 systemd[1]: Stopped target paths.target - Path Units. Oct 31 01:08:33.792246 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 31 01:08:33.792292 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 01:08:33.792465 systemd[1]: Stopped target slices.target - Slice Units. Oct 31 01:08:33.792656 systemd[1]: Stopped target sockets.target - Socket Units. Oct 31 01:08:33.792849 systemd[1]: iscsid.socket: Deactivated successfully. Oct 31 01:08:33.792895 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 01:08:33.793089 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 31 01:08:33.793132 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 01:08:33.793324 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 31 01:08:33.793388 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 31 01:08:33.793637 systemd[1]: ignition-files.service: Deactivated successfully. Oct 31 01:08:33.793698 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 31 01:08:33.795307 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 31 01:08:33.795435 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 31 01:08:33.795505 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 01:08:33.796079 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 31 01:08:33.796189 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 31 01:08:33.796274 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 01:08:33.797516 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Oct 31 01:08:33.797582 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 01:08:33.797748 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 31 01:08:33.797808 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 31 01:08:33.800388 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 31 01:08:33.805723 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 31 01:08:33.816414 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 31 01:08:33.818892 ignition[1133]: INFO : Ignition 2.22.0 Oct 31 01:08:33.818892 ignition[1133]: INFO : Stage: umount Oct 31 01:08:33.818892 ignition[1133]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 01:08:33.818892 ignition[1133]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 01:08:33.818892 ignition[1133]: INFO : umount: umount passed Oct 31 01:08:33.818892 ignition[1133]: INFO : Ignition finished successfully Oct 31 01:08:33.819493 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 31 01:08:33.819565 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 31 01:08:33.821591 systemd[1]: Stopped target network.target - Network. Oct 31 01:08:33.821807 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 31 01:08:33.821840 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 31 01:08:33.821955 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 31 01:08:33.821979 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 31 01:08:33.822083 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 31 01:08:33.822106 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 31 01:08:33.822333 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 31 01:08:33.822356 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 31 01:08:33.822757 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 31 01:08:33.822925 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 31 01:08:33.827291 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 31 01:08:33.827366 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 31 01:08:33.833001 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 31 01:08:33.833075 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 31 01:08:33.834063 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 31 01:08:33.834206 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 31 01:08:33.834258 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 31 01:08:33.835316 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 31 01:08:33.835432 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 31 01:08:33.835460 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 01:08:33.835644 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Oct 31 01:08:33.835668 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Oct 31 01:08:33.835824 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 31 01:08:33.835847 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Oct 31 01:08:33.836279 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 31 01:08:33.836304 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 31 01:08:33.837454 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 01:08:33.844956 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 31 01:08:33.845150 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 01:08:33.845375 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 31 01:08:33.845398 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 31 01:08:33.845505 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 31 01:08:33.845522 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 01:08:33.845617 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 31 01:08:33.845643 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 31 01:08:33.845782 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 31 01:08:33.845806 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 31 01:08:33.845932 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 31 01:08:33.845954 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 01:08:33.847791 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 31 01:08:33.848019 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 31 01:08:33.848151 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 01:08:33.848444 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 31 01:08:33.848567 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 01:08:33.848692 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 01:08:33.848717 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 01:08:33.854580 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 31 01:08:33.854803 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 31 01:08:33.880304 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 31 01:08:33.880559 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 31 01:08:33.880942 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 31 01:08:33.880976 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 31 01:08:33.881498 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 31 01:08:33.881693 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 31 01:08:33.882135 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 31 01:08:33.882793 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 31 01:08:33.894171 systemd[1]: Switching root. Oct 31 01:08:33.932385 systemd-journald[333]: Journal stopped Oct 31 01:08:35.482946 systemd-journald[333]: Received SIGTERM from PID 1 (systemd). 
Oct 31 01:08:35.482969 kernel: SELinux: policy capability network_peer_controls=1 Oct 31 01:08:35.482979 kernel: SELinux: policy capability open_perms=1 Oct 31 01:08:35.482985 kernel: SELinux: policy capability extended_socket_class=1 Oct 31 01:08:35.482991 kernel: SELinux: policy capability always_check_network=0 Oct 31 01:08:35.482997 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 31 01:08:35.483004 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 31 01:08:35.483011 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 31 01:08:35.483017 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 31 01:08:35.483023 kernel: SELinux: policy capability userspace_initial_context=0 Oct 31 01:08:35.483030 systemd[1]: Successfully loaded SELinux policy in 80.557ms. Oct 31 01:08:35.483038 kernel: audit: type=1403 audit(1761872914.761:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 31 01:08:35.483045 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.067ms. Oct 31 01:08:35.483052 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 31 01:08:35.483060 systemd[1]: Detected virtualization vmware. Oct 31 01:08:35.483068 systemd[1]: Detected architecture x86-64. Oct 31 01:08:35.483075 systemd[1]: Detected first boot. Oct 31 01:08:35.483082 systemd[1]: Initializing machine ID from random generator. Oct 31 01:08:35.483089 zram_generator::config[1177]: No configuration found. Oct 31 01:08:35.483191 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Oct 31 01:08:35.483205 kernel: Guest personality initialized and is active Oct 31 01:08:35.483221 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 31 01:08:35.483228 kernel: Initialized host personality Oct 31 01:08:35.483235 kernel: NET: Registered PF_VSOCK protocol family Oct 31 01:08:35.483242 systemd[1]: Populated /etc with preset unit settings. Oct 31 01:08:35.483251 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 31 01:08:35.483260 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Oct 31 01:08:35.483268 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 31 01:08:35.483275 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 31 01:08:35.483282 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 31 01:08:35.483289 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 31 01:08:35.483297 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 31 01:08:35.483306 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 31 01:08:35.483313 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 31 01:08:35.483321 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 31 01:08:35.483328 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Oct 31 01:08:35.483336 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 31 01:08:35.483343 systemd[1]: Created slice user.slice - User and Session Slice. Oct 31 01:08:35.483352 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 01:08:35.483360 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 01:08:35.483369 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 31 01:08:35.483376 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 31 01:08:35.483385 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 31 01:08:35.483393 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 01:08:35.483401 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 31 01:08:35.483409 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 01:08:35.483417 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 01:08:35.483424 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 31 01:08:35.483431 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 31 01:08:35.483439 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 31 01:08:35.483446 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 31 01:08:35.483455 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 01:08:35.483462 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 01:08:35.483470 systemd[1]: Reached target slices.target - Slice Units. Oct 31 01:08:35.483477 systemd[1]: Reached target swap.target - Swaps. Oct 31 01:08:35.483484 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 31 01:08:35.483492 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 31 01:08:35.483501 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 31 01:08:35.483508 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 01:08:35.483516 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 01:08:35.483523 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 01:08:35.483532 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 31 01:08:35.483540 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 31 01:08:35.483547 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 31 01:08:35.483555 systemd[1]: Mounting media.mount - External Media Directory... Oct 31 01:08:35.483562 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:08:35.483570 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 31 01:08:35.483577 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 31 01:08:35.483586 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Oct 31 01:08:35.483594 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 31 01:08:35.483602 systemd[1]: Reached target machines.target - Containers. Oct 31 01:08:35.483610 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 31 01:08:35.483617 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Oct 31 01:08:35.483625 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 01:08:35.483633 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 31 01:08:35.483641 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 01:08:35.483649 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 31 01:08:35.483657 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 01:08:35.483664 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 31 01:08:35.483672 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 01:08:35.483680 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 31 01:08:35.483688 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 31 01:08:35.483696 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 31 01:08:35.483703 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 31 01:08:35.483711 systemd[1]: Stopped systemd-fsck-usr.service. Oct 31 01:08:35.483719 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 01:08:35.483727 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 01:08:35.483734 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 01:08:35.483743 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 31 01:08:35.483751 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 31 01:08:35.483758 kernel: fuse: init (API version 7.41) Oct 31 01:08:35.483765 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 31 01:08:35.483773 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 01:08:35.483780 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:08:35.483788 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 31 01:08:35.483796 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 31 01:08:35.483804 systemd[1]: Mounted media.mount - External Media Directory. Oct 31 01:08:35.483811 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 31 01:08:35.483819 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 31 01:08:35.483826 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 31 01:08:35.483834 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Oct 31 01:08:35.483842 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 31 01:08:35.483850 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 31 01:08:35.483857 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 01:08:35.483864 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 01:08:35.483872 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 01:08:35.483879 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 01:08:35.483887 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 31 01:08:35.483895 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 31 01:08:35.483903 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 01:08:35.483910 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 31 01:08:35.483918 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 31 01:08:35.483926 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 31 01:08:35.483934 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 31 01:08:35.483942 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 31 01:08:35.483951 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 01:08:35.483958 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 01:08:35.483966 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 31 01:08:35.483976 kernel: ACPI: bus type drm_connector registered Oct 31 01:08:35.483985 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 31 01:08:35.483993 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 31 01:08:35.484001 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 31 01:08:35.484009 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 31 01:08:35.484018 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 31 01:08:35.484038 systemd-journald[1263]: Collecting audit messages is disabled. Oct 31 01:08:35.484058 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 31 01:08:35.484066 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 31 01:08:35.484074 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 01:08:35.484081 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 31 01:08:35.484089 systemd-journald[1263]: Journal started Oct 31 01:08:35.484105 systemd-journald[1263]: Runtime Journal (/run/log/journal/7ee7583560644837a3c8fbd6e50ace04) is 4.8M, max 38.5M, 33.7M free. Oct 31 01:08:35.485518 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 01:08:35.276733 systemd[1]: Queued start job for default target multi-user.target. Oct 31 01:08:35.289615 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Oct 31 01:08:35.289953 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 31 01:08:35.488315 jq[1247]: true Oct 31 01:08:35.488855 jq[1279]: true Oct 31 01:08:35.494191 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 31 01:08:35.494296 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 01:08:35.496237 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 31 01:08:35.504437 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 01:08:35.509223 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 31 01:08:35.511234 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 01:08:35.515868 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 31 01:08:35.520864 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 31 01:08:35.524341 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 31 01:08:35.526316 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 31 01:08:35.527264 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 31 01:08:35.531002 ignition[1284]: Ignition 2.22.0 Oct 31 01:08:35.537572 ignition[1284]: deleting config from guestinfo properties Oct 31 01:08:35.542373 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 31 01:08:35.548325 systemd-journald[1263]: Time spent on flushing to /var/log/journal/7ee7583560644837a3c8fbd6e50ace04 is 35.831ms for 1750 entries. Oct 31 01:08:35.548325 systemd-journald[1263]: System Journal (/var/log/journal/7ee7583560644837a3c8fbd6e50ace04) is 8M, max 588.1M, 580.1M free. Oct 31 01:08:35.587575 systemd-journald[1263]: Received client request to flush runtime journal. Oct 31 01:08:35.587613 kernel: loop1: detected capacity change from 0 to 2960 Oct 31 01:08:35.553566 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 01:08:35.552679 ignition[1284]: Successfully deleted config Oct 31 01:08:35.571448 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Oct 31 01:08:35.589334 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 31 01:08:35.597229 kernel: loop2: detected capacity change from 0 to 128048 Oct 31 01:08:35.599253 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 31 01:08:35.608459 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 31 01:08:35.610992 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 31 01:08:35.612452 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 01:08:35.625331 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 31 01:08:35.632226 kernel: loop3: detected capacity change from 0 to 219144 Oct 31 01:08:35.642044 systemd-tmpfiles[1346]: ACLs are not supported, ignoring. Oct 31 01:08:35.642058 systemd-tmpfiles[1346]: ACLs are not supported, ignoring. Oct 31 01:08:35.644787 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 01:08:35.652243 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 01:08:35.655558 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Oct 31 01:08:35.662224 kernel: loop4: detected capacity change from 0 to 110984 Oct 31 01:08:35.685224 kernel: loop5: detected capacity change from 0 to 2960 Oct 31 01:08:35.704914 systemd-resolved[1345]: Positive Trust Anchors: Oct 31 01:08:35.704925 systemd-resolved[1345]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 01:08:35.704927 systemd-resolved[1345]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 31 01:08:35.704949 systemd-resolved[1345]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 01:08:35.716871 systemd-resolved[1345]: Defaulting to hostname 'linux'. Oct 31 01:08:35.717746 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 01:08:35.717974 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 01:08:35.733221 kernel: loop6: detected capacity change from 0 to 128048 Oct 31 01:08:35.740971 kernel: loop7: detected capacity change from 0 to 219144 Oct 31 01:08:35.830232 kernel: loop1: detected capacity change from 0 to 110984 Oct 31 01:08:35.855099 (sd-merge)[1359]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-vmware.raw'. Oct 31 01:08:35.857339 (sd-merge)[1359]: Merged extensions into '/usr'. Oct 31 01:08:35.861185 systemd[1]: Reload requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)... Oct 31 01:08:35.861196 systemd[1]: Reloading... Oct 31 01:08:35.897224 zram_generator::config[1383]: No configuration found. Oct 31 01:08:36.000294 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 31 01:08:36.047911 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 31 01:08:36.048123 systemd[1]: Reloading finished in 186 ms. Oct 31 01:08:36.070923 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 31 01:08:36.071266 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 31 01:08:36.083285 systemd[1]: Starting ensure-sysext.service... Oct 31 01:08:36.084189 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 01:08:36.086363 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 01:08:36.093685 systemd[1]: Reload requested from client PID 1443 ('systemctl') (unit ensure-sysext.service)... Oct 31 01:08:36.093695 systemd[1]: Reloading... Oct 31 01:08:36.104505 systemd-tmpfiles[1444]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 31 01:08:36.104525 systemd-tmpfiles[1444]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 31 01:08:36.104686 systemd-tmpfiles[1444]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Oct 31 01:08:36.104847 systemd-tmpfiles[1444]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 31 01:08:36.105673 systemd-tmpfiles[1444]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 31 01:08:36.105877 systemd-tmpfiles[1444]: ACLs are not supported, ignoring. Oct 31 01:08:36.105948 systemd-tmpfiles[1444]: ACLs are not supported, ignoring. Oct 31 01:08:36.108907 systemd-tmpfiles[1444]: Detected autofs mount point /boot during canonicalization of boot. Oct 31 01:08:36.108990 systemd-tmpfiles[1444]: Skipping /boot Oct 31 01:08:36.113599 systemd-tmpfiles[1444]: Detected autofs mount point /boot during canonicalization of boot. Oct 31 01:08:36.113661 systemd-tmpfiles[1444]: Skipping /boot Oct 31 01:08:36.119096 systemd-udevd[1445]: Using default interface naming scheme 'v257'. Oct 31 01:08:36.139266 zram_generator::config[1469]: No configuration found. Oct 31 01:08:36.221264 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 31 01:08:36.271054 systemd[1]: Reloading finished in 177 ms. Oct 31 01:08:36.304484 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 01:08:36.309300 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 31 01:08:36.310442 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 31 01:08:36.311528 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 31 01:08:36.320731 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 31 01:08:36.322419 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 31 01:08:36.323828 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:08:36.325826 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 01:08:36.327558 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 01:08:36.333723 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 01:08:36.333900 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 01:08:36.333973 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 01:08:36.334047 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:08:36.336091 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:08:36.336191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 01:08:36.337057 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Oct 31 01:08:36.337122 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:08:36.339133 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:08:36.344358 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 31 01:08:36.344765 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 01:08:36.344843 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 01:08:36.344951 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:08:36.348896 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 01:08:36.352465 systemd[1]: Finished ensure-sysext.service. Oct 31 01:08:36.352891 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 31 01:08:36.363542 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 01:08:36.373745 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 31 01:08:36.374090 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 01:08:36.374300 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 01:08:36.374551 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 01:08:36.374660 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 01:08:36.375193 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 01:08:36.381829 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 31 01:08:36.382458 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 01:08:36.382574 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 31 01:08:36.383511 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 31 01:08:36.388861 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 31 01:08:36.389258 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 31 01:08:36.460096 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 31 01:08:36.460568 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 01:08:36.460808 augenrules[1587]: No rules Oct 31 01:08:36.462848 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 01:08:36.463188 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 31 01:08:36.472813 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
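The (sd-merge) entries above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-vmware extension images onto /usr. Once the system is up, that merged state can be inspected or redone with the stock tool, for example:

    systemd-sysext status     # show which extension images are currently merged
    systemd-sysext refresh    # unmerge and re-merge after images are added or removed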
Oct 31 01:08:36.480098 systemd-networkd[1554]: lo: Link UP Oct 31 01:08:36.480272 systemd-networkd[1554]: lo: Gained carrier Oct 31 01:08:36.483452 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 01:08:36.483807 systemd[1]: Reached target network.target - Network. Oct 31 01:08:36.485353 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 31 01:08:36.486644 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 31 01:08:36.488328 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 31 01:08:36.488464 systemd[1]: Reached target time-set.target - System Time Set. Oct 31 01:08:36.504656 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 31 01:08:36.512264 systemd-networkd[1554]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Oct 31 01:08:36.515239 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 31 01:08:36.515418 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 31 01:08:36.516988 systemd-networkd[1554]: ens192: Link UP Oct 31 01:08:36.519280 kernel: mousedev: PS/2 mouse device common for all mice Oct 31 01:08:36.519403 systemd-networkd[1554]: ens192: Gained carrier Oct 31 01:08:36.523019 systemd-timesyncd[1558]: Network configuration changed, trying to establish connection. Oct 31 01:08:36.537228 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 31 01:08:36.553228 kernel: ACPI: button: Power Button [PWRF] Oct 31 01:10:14.998372 systemd-timesyncd[1558]: Contacted time server 137.190.2.4:123 (0.flatcar.pool.ntp.org). Oct 31 01:10:14.998423 systemd-resolved[1345]: Clock change detected. Flushing caches. Oct 31 01:10:14.998669 systemd-timesyncd[1558]: Initial clock synchronization to Fri 2025-10-31 01:10:14.998318 UTC. Oct 31 01:10:15.024226 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Oct 31 01:10:15.026804 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 31 01:10:15.054023 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 31 01:10:15.062741 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Oct 31 01:10:15.151065 (udev-worker)[1541]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Oct 31 01:10:15.174585 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 01:10:15.237035 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 01:10:15.312325 ldconfig[1532]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 31 01:10:15.314382 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 31 01:10:15.315623 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 31 01:10:15.325171 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 31 01:10:15.325751 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 01:10:15.325995 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 31 01:10:15.326205 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
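The 00-vmware.network file that systemd-networkd applies to ens192 above is not printed in the log; a minimal sketch of what a DHCP-based .network unit for that interface typically looks like (illustrative only, not necessarily the file Flatcar ships):

    # /etc/systemd/network/00-vmware.network (illustrative)
    [Match]
    Name=ens192

    [Network]
    DHCP=yes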
Oct 31 01:10:15.326369 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 31 01:10:15.326608 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 31 01:10:15.326849 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 31 01:10:15.327022 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 31 01:10:15.327185 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 31 01:10:15.327239 systemd[1]: Reached target paths.target - Path Units. Oct 31 01:10:15.327382 systemd[1]: Reached target timers.target - Timer Units. Oct 31 01:10:15.328218 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 31 01:10:15.329344 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 31 01:10:15.330828 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 31 01:10:15.331035 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 31 01:10:15.331160 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 31 01:10:15.332479 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 31 01:10:15.332770 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 31 01:10:15.333304 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 31 01:10:15.333851 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 01:10:15.333961 systemd[1]: Reached target basic.target - Basic System. Oct 31 01:10:15.334099 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 31 01:10:15.334115 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 31 01:10:15.334826 systemd[1]: Starting containerd.service - containerd container runtime... Oct 31 01:10:15.337804 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 31 01:10:15.342804 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 31 01:10:15.343702 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 31 01:10:15.345621 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 31 01:10:15.345794 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 31 01:10:15.346870 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 31 01:10:15.348767 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 31 01:10:15.352847 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 31 01:10:15.356303 jq[1643]: false Oct 31 01:10:15.356538 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 31 01:10:15.359820 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 31 01:10:15.361406 google_oslogin_nss_cache[1645]: oslogin_cache_refresh[1645]: Refreshing passwd entry cache Oct 31 01:10:15.362517 oslogin_cache_refresh[1645]: Refreshing passwd entry cache Oct 31 01:10:15.364793 systemd[1]: Starting systemd-logind.service - User Login Management... 
Oct 31 01:10:15.364933 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 31 01:10:15.368319 extend-filesystems[1644]: Found /dev/sda6 Oct 31 01:10:15.369835 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 31 01:10:15.373456 google_oslogin_nss_cache[1645]: oslogin_cache_refresh[1645]: Failure getting users, quitting Oct 31 01:10:15.372849 systemd[1]: Starting update-engine.service - Update Engine... Oct 31 01:10:15.373544 extend-filesystems[1644]: Found /dev/sda9 Oct 31 01:10:15.371709 oslogin_cache_refresh[1645]: Failure getting users, quitting Oct 31 01:10:15.374000 google_oslogin_nss_cache[1645]: oslogin_cache_refresh[1645]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 31 01:10:15.374000 google_oslogin_nss_cache[1645]: oslogin_cache_refresh[1645]: Refreshing group entry cache Oct 31 01:10:15.373688 oslogin_cache_refresh[1645]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 31 01:10:15.373714 oslogin_cache_refresh[1645]: Refreshing group entry cache Oct 31 01:10:15.374185 extend-filesystems[1644]: Checking size of /dev/sda9 Oct 31 01:10:15.377820 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 31 01:10:15.380786 extend-filesystems[1644]: Resized partition /dev/sda9 Oct 31 01:10:15.382066 google_oslogin_nss_cache[1645]: oslogin_cache_refresh[1645]: Failure getting groups, quitting Oct 31 01:10:15.382066 google_oslogin_nss_cache[1645]: oslogin_cache_refresh[1645]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 31 01:10:15.381598 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Oct 31 01:10:15.381585 oslogin_cache_refresh[1645]: Failure getting groups, quitting Oct 31 01:10:15.381593 oslogin_cache_refresh[1645]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 31 01:10:15.384203 extend-filesystems[1669]: resize2fs 1.47.3 (8-Jul-2025) Oct 31 01:10:15.387772 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 1635323 blocks Oct 31 01:10:15.387804 kernel: EXT4-fs (sda9): resized filesystem to 1635323 Oct 31 01:10:15.388553 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 31 01:10:15.388924 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 31 01:10:15.389064 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 31 01:10:15.389223 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 31 01:10:15.389355 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 31 01:10:15.390632 extend-filesystems[1669]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 31 01:10:15.390632 extend-filesystems[1669]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 31 01:10:15.390632 extend-filesystems[1669]: The filesystem on /dev/sda9 is now 1635323 (4k) blocks long. Oct 31 01:10:15.391044 extend-filesystems[1644]: Resized filesystem in /dev/sda9 Oct 31 01:10:15.390958 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 31 01:10:15.391080 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 31 01:10:15.392287 systemd[1]: motdgen.service: Deactivated successfully. 
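The extend-filesystems output above records an online ext4 grow of /dev/sda9 from 1617920 to 1635323 blocks. Assuming the e2fsprogs tooling already named in the log (resize2fs 1.47.3), the equivalent manual step would be roughly:

    # grow a mounted ext4 filesystem to fill its (already enlarged) partition -- illustrative
    resize2fs /dev/sda9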
Oct 31 01:10:15.392417 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 31 01:10:15.393983 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 31 01:10:15.399297 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 31 01:10:15.401112 jq[1662]: true Oct 31 01:10:15.414311 (ntainerd)[1680]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 31 01:10:15.420658 update_engine[1657]: I20251031 01:10:15.420614 1657 main.cc:92] Flatcar Update Engine starting Oct 31 01:10:15.421072 jq[1686]: true Oct 31 01:10:15.429081 tar[1674]: linux-amd64/LICENSE Oct 31 01:10:15.429919 tar[1674]: linux-amd64/helm Oct 31 01:10:15.435312 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Oct 31 01:10:15.439403 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Oct 31 01:10:15.488497 dbus-daemon[1641]: [system] SELinux support is enabled Oct 31 01:10:15.492508 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 31 01:10:15.494102 unknown[1700]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Oct 31 01:10:15.494174 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 31 01:10:15.494190 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 31 01:10:15.495773 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 31 01:10:15.495785 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 31 01:10:15.496447 unknown[1700]: Core dump limit set to -1 Oct 31 01:10:15.510809 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Oct 31 01:10:15.511892 systemd[1]: Started update-engine.service - Update Engine. Oct 31 01:10:15.512748 update_engine[1657]: I20251031 01:10:15.512260 1657 update_check_scheduler.cc:74] Next update check in 9m28s Oct 31 01:10:15.524999 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 31 01:10:15.530349 systemd-logind[1650]: Watching system buttons on /dev/input/event2 (Power Button) Oct 31 01:10:15.530363 systemd-logind[1650]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 31 01:10:15.530507 systemd-logind[1650]: New seat seat0. Oct 31 01:10:15.531010 systemd[1]: Started systemd-logind.service - User Login Management. Oct 31 01:10:15.565200 bash[1716]: Updated "/home/core/.ssh/authorized_keys" Oct 31 01:10:15.571236 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 31 01:10:15.572001 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 31 01:10:15.614910 sshd_keygen[1675]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 31 01:10:15.677101 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 31 01:10:15.690855 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 31 01:10:15.709939 systemd[1]: issuegen.service: Deactivated successfully. Oct 31 01:10:15.710320 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Oct 31 01:10:15.712387 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 31 01:10:15.736859 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 31 01:10:15.738512 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 31 01:10:15.739761 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 31 01:10:15.740900 systemd[1]: Reached target getty.target - Login Prompts. Oct 31 01:10:15.744754 locksmithd[1718]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 31 01:10:15.761701 containerd[1680]: time="2025-10-31T01:10:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 31 01:10:15.763880 containerd[1680]: time="2025-10-31T01:10:15.762057062Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 31 01:10:15.770900 containerd[1680]: time="2025-10-31T01:10:15.770870535Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.67µs" Oct 31 01:10:15.770900 containerd[1680]: time="2025-10-31T01:10:15.770893935Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 31 01:10:15.770982 containerd[1680]: time="2025-10-31T01:10:15.770906199Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 31 01:10:15.771011 containerd[1680]: time="2025-10-31T01:10:15.770998799Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 31 01:10:15.771031 containerd[1680]: time="2025-10-31T01:10:15.771010827Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 31 01:10:15.771045 containerd[1680]: time="2025-10-31T01:10:15.771029956Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 31 01:10:15.771080 containerd[1680]: time="2025-10-31T01:10:15.771066670Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 31 01:10:15.771104 containerd[1680]: time="2025-10-31T01:10:15.771078474Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 31 01:10:15.771220 containerd[1680]: time="2025-10-31T01:10:15.771207871Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 31 01:10:15.771220 containerd[1680]: time="2025-10-31T01:10:15.771218572Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 31 01:10:15.771253 containerd[1680]: time="2025-10-31T01:10:15.771229648Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 31 01:10:15.771253 containerd[1680]: time="2025-10-31T01:10:15.771235652Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 31 01:10:15.771304 containerd[1680]: 
time="2025-10-31T01:10:15.771292201Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 31 01:10:15.771425 containerd[1680]: time="2025-10-31T01:10:15.771414050Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 31 01:10:15.771445 containerd[1680]: time="2025-10-31T01:10:15.771436782Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 31 01:10:15.771445 containerd[1680]: time="2025-10-31T01:10:15.771443495Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 31 01:10:15.771485 containerd[1680]: time="2025-10-31T01:10:15.771461146Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 31 01:10:15.771868 containerd[1680]: time="2025-10-31T01:10:15.771852497Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 31 01:10:15.771910 containerd[1680]: time="2025-10-31T01:10:15.771899580Z" level=info msg="metadata content store policy set" policy=shared Oct 31 01:10:15.773741 containerd[1680]: time="2025-10-31T01:10:15.773677233Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 31 01:10:15.773741 containerd[1680]: time="2025-10-31T01:10:15.773704808Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 31 01:10:15.773741 containerd[1680]: time="2025-10-31T01:10:15.773717645Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 31 01:10:15.773808 containerd[1680]: time="2025-10-31T01:10:15.773749615Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 31 01:10:15.773808 containerd[1680]: time="2025-10-31T01:10:15.773763683Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 31 01:10:15.773808 containerd[1680]: time="2025-10-31T01:10:15.773773759Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 31 01:10:15.773808 containerd[1680]: time="2025-10-31T01:10:15.773784071Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 31 01:10:15.773808 containerd[1680]: time="2025-10-31T01:10:15.773793996Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 31 01:10:15.773808 containerd[1680]: time="2025-10-31T01:10:15.773801176Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 31 01:10:15.773886 containerd[1680]: time="2025-10-31T01:10:15.773809741Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 31 01:10:15.773886 containerd[1680]: time="2025-10-31T01:10:15.773817218Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 31 01:10:15.773886 containerd[1680]: time="2025-10-31T01:10:15.773826364Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 31 01:10:15.773924 containerd[1680]: time="2025-10-31T01:10:15.773892327Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 31 01:10:15.773924 containerd[1680]: time="2025-10-31T01:10:15.773905478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 31 01:10:15.773924 containerd[1680]: time="2025-10-31T01:10:15.773917747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 31 01:10:15.773964 containerd[1680]: time="2025-10-31T01:10:15.773927291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 31 01:10:15.773964 containerd[1680]: time="2025-10-31T01:10:15.773935919Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 31 01:10:15.773964 containerd[1680]: time="2025-10-31T01:10:15.773942899Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 31 01:10:15.773964 containerd[1680]: time="2025-10-31T01:10:15.773955064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 31 01:10:15.774019 containerd[1680]: time="2025-10-31T01:10:15.773964277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 31 01:10:15.774019 containerd[1680]: time="2025-10-31T01:10:15.773973626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 31 01:10:15.774019 containerd[1680]: time="2025-10-31T01:10:15.773980164Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 31 01:10:15.774019 containerd[1680]: time="2025-10-31T01:10:15.773988180Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 31 01:10:15.774070 containerd[1680]: time="2025-10-31T01:10:15.774032396Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 31 01:10:15.774070 containerd[1680]: time="2025-10-31T01:10:15.774041927Z" level=info msg="Start snapshots syncer" Oct 31 01:10:15.774070 containerd[1680]: time="2025-10-31T01:10:15.774056573Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 31 01:10:15.774405 containerd[1680]: time="2025-10-31T01:10:15.774253767Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 31 01:10:15.774405 containerd[1680]: time="2025-10-31T01:10:15.774288629Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 31 01:10:15.774493 containerd[1680]: time="2025-10-31T01:10:15.774337895Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 31 01:10:15.774493 containerd[1680]: time="2025-10-31T01:10:15.774393558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 31 01:10:15.774493 containerd[1680]: time="2025-10-31T01:10:15.774409758Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 31 01:10:15.774493 containerd[1680]: time="2025-10-31T01:10:15.774423379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 31 01:10:15.774493 containerd[1680]: time="2025-10-31T01:10:15.774433407Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 31 01:10:15.774493 containerd[1680]: time="2025-10-31T01:10:15.774445397Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 31 01:10:15.774493 containerd[1680]: time="2025-10-31T01:10:15.774453638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 31 01:10:15.774493 containerd[1680]: time="2025-10-31T01:10:15.774462646Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 31 01:10:15.774493 containerd[1680]: time="2025-10-31T01:10:15.774478849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 31 01:10:15.774493 containerd[1680]: 
time="2025-10-31T01:10:15.774488618Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 31 01:10:15.774628 containerd[1680]: time="2025-10-31T01:10:15.774497614Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 31 01:10:15.774628 containerd[1680]: time="2025-10-31T01:10:15.774522754Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 31 01:10:15.774628 containerd[1680]: time="2025-10-31T01:10:15.774534539Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 31 01:10:15.774628 containerd[1680]: time="2025-10-31T01:10:15.774542845Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 31 01:10:15.774628 containerd[1680]: time="2025-10-31T01:10:15.774550936Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 31 01:10:15.774628 containerd[1680]: time="2025-10-31T01:10:15.774555901Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 31 01:10:15.774628 containerd[1680]: time="2025-10-31T01:10:15.774563920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 31 01:10:15.774628 containerd[1680]: time="2025-10-31T01:10:15.774572805Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 31 01:10:15.774628 containerd[1680]: time="2025-10-31T01:10:15.774582670Z" level=info msg="runtime interface created" Oct 31 01:10:15.774628 containerd[1680]: time="2025-10-31T01:10:15.774588111Z" level=info msg="created NRI interface" Oct 31 01:10:15.774628 containerd[1680]: time="2025-10-31T01:10:15.774593283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 31 01:10:15.774628 containerd[1680]: time="2025-10-31T01:10:15.774601513Z" level=info msg="Connect containerd service" Oct 31 01:10:15.774628 containerd[1680]: time="2025-10-31T01:10:15.774620676Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 31 01:10:15.775374 containerd[1680]: time="2025-10-31T01:10:15.775089694Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 01:10:15.875146 tar[1674]: linux-amd64/README.md Oct 31 01:10:15.886523 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 31 01:10:15.888940 containerd[1680]: time="2025-10-31T01:10:15.888224840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 31 01:10:15.888940 containerd[1680]: time="2025-10-31T01:10:15.888265686Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 31 01:10:15.888940 containerd[1680]: time="2025-10-31T01:10:15.888281622Z" level=info msg="Start subscribing containerd event" Oct 31 01:10:15.888940 containerd[1680]: time="2025-10-31T01:10:15.888297409Z" level=info msg="Start recovering state" Oct 31 01:10:15.888940 containerd[1680]: time="2025-10-31T01:10:15.888351386Z" level=info msg="Start event monitor" Oct 31 01:10:15.888940 containerd[1680]: time="2025-10-31T01:10:15.888359435Z" level=info msg="Start cni network conf syncer for default" Oct 31 01:10:15.888940 containerd[1680]: time="2025-10-31T01:10:15.888363271Z" level=info msg="Start streaming server" Oct 31 01:10:15.888940 containerd[1680]: time="2025-10-31T01:10:15.888367792Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 31 01:10:15.888940 containerd[1680]: time="2025-10-31T01:10:15.888371952Z" level=info msg="runtime interface starting up..." Oct 31 01:10:15.888940 containerd[1680]: time="2025-10-31T01:10:15.888374696Z" level=info msg="starting plugins..." Oct 31 01:10:15.888940 containerd[1680]: time="2025-10-31T01:10:15.888381201Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 31 01:10:15.888940 containerd[1680]: time="2025-10-31T01:10:15.888439996Z" level=info msg="containerd successfully booted in 0.126954s" Oct 31 01:10:15.888801 systemd[1]: Started containerd.service - containerd container runtime. Oct 31 01:10:16.847834 systemd-networkd[1554]: ens192: Gained IPv6LL Oct 31 01:10:16.849358 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 31 01:10:16.849998 systemd[1]: Reached target network-online.target - Network is Online. Oct 31 01:10:16.851071 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Oct 31 01:10:16.852326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:10:16.854255 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 31 01:10:16.871903 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 31 01:10:16.894435 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 31 01:10:16.894670 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Oct 31 01:10:16.895152 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 31 01:10:18.065654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:10:18.066007 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 31 01:10:18.066564 systemd[1]: Startup finished in 2.187s (kernel) + 4.420s (initrd) + 4.974s (userspace) = 11.582s. Oct 31 01:10:18.070925 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 01:10:18.093270 login[1771]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 31 01:10:18.098086 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 31 01:10:18.098813 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 31 01:10:18.105380 systemd-logind[1650]: New session 1 of user core. Oct 31 01:10:18.113337 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 31 01:10:18.116804 systemd[1]: Starting user@500.service - User Manager for UID 500... 
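The earlier containerd error about /etc/cni/net.d is expected at this stage: no CNI network config has been installed yet, so the CRI plugin defers pod networking until one appears. Purely for illustration (a cluster network add-on normally installs the real config later), a minimal bridge conflist of the kind containerd accepts could look like the following; every value is an assumption, none of it is taken from the log:

    {
      "cniVersion": "1.0.0",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }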
Oct 31 01:10:18.127275 (systemd)[1852]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:10:18.128815 systemd-logind[1650]: New session c1 of user core. Oct 31 01:10:18.234623 systemd[1852]: Queued start job for default target default.target. Oct 31 01:10:18.239903 systemd[1852]: Created slice app.slice - User Application Slice. Oct 31 01:10:18.239923 systemd[1852]: Reached target paths.target - Paths. Oct 31 01:10:18.240016 systemd[1852]: Reached target timers.target - Timers. Oct 31 01:10:18.242773 systemd[1852]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 31 01:10:18.248161 systemd[1852]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 31 01:10:18.248207 systemd[1852]: Reached target sockets.target - Sockets. Oct 31 01:10:18.248233 systemd[1852]: Reached target basic.target - Basic System. Oct 31 01:10:18.248255 systemd[1852]: Reached target default.target - Main User Target. Oct 31 01:10:18.248271 systemd[1852]: Startup finished in 115ms. Oct 31 01:10:18.248393 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 31 01:10:18.250005 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 31 01:10:18.418893 login[1773]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 31 01:10:18.421641 systemd-logind[1650]: New session 2 of user core. Oct 31 01:10:18.428818 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 31 01:10:19.104304 kubelet[1847]: E1031 01:10:19.104267 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:10:19.105879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:10:19.105976 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 01:10:19.106275 systemd[1]: kubelet.service: Consumed 586ms CPU time, 258.4M memory peak. Oct 31 01:10:29.356470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 31 01:10:29.357807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:10:29.699385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:10:29.702444 (kubelet)[1897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 01:10:29.741468 kubelet[1897]: E1031 01:10:29.741424 1897 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:10:29.744320 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:10:29.744489 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 01:10:29.744894 systemd[1]: kubelet.service: Consumed 109ms CPU time, 108.8M memory peak. Oct 31 01:10:39.994760 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 31 01:10:39.996830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 31 01:10:40.204606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:10:40.207268 (kubelet)[1912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 01:10:40.240375 kubelet[1912]: E1031 01:10:40.240339 1912 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:10:40.241765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:10:40.241849 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 01:10:40.242304 systemd[1]: kubelet.service: Consumed 90ms CPU time, 112.3M memory peak. Oct 31 01:10:45.610554 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 31 01:10:45.612526 systemd[1]: Started sshd@0-139.178.70.100:22-139.178.89.65:58248.service - OpenSSH per-connection server daemon (139.178.89.65:58248). Oct 31 01:10:45.672303 sshd[1919]: Accepted publickey for core from 139.178.89.65 port 58248 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:10:45.672966 sshd-session[1919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:10:45.675515 systemd-logind[1650]: New session 3 of user core. Oct 31 01:10:45.689805 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 31 01:10:45.745369 systemd[1]: Started sshd@1-139.178.70.100:22-139.178.89.65:58258.service - OpenSSH per-connection server daemon (139.178.89.65:58258). Oct 31 01:10:45.790083 sshd[1925]: Accepted publickey for core from 139.178.89.65 port 58258 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:10:45.790821 sshd-session[1925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:10:45.793312 systemd-logind[1650]: New session 4 of user core. Oct 31 01:10:45.804103 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 31 01:10:45.854008 sshd[1928]: Connection closed by 139.178.89.65 port 58258 Oct 31 01:10:45.854810 sshd-session[1925]: pam_unix(sshd:session): session closed for user core Oct 31 01:10:45.860384 systemd[1]: sshd@1-139.178.70.100:22-139.178.89.65:58258.service: Deactivated successfully. Oct 31 01:10:45.861480 systemd[1]: session-4.scope: Deactivated successfully. Oct 31 01:10:45.862416 systemd-logind[1650]: Session 4 logged out. Waiting for processes to exit. Oct 31 01:10:45.863593 systemd[1]: Started sshd@2-139.178.70.100:22-139.178.89.65:58262.service - OpenSSH per-connection server daemon (139.178.89.65:58262). Oct 31 01:10:45.865015 systemd-logind[1650]: Removed session 4. Oct 31 01:10:45.911412 sshd[1934]: Accepted publickey for core from 139.178.89.65 port 58262 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:10:45.912217 sshd-session[1934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:10:45.915488 systemd-logind[1650]: New session 5 of user core. Oct 31 01:10:45.923826 systemd[1]: Started session-5.scope - Session 5 of User core. 
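The kubelet failures above, and the later restart attempts, all trace back to the same cause recorded in the error: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is normally written during kubeadm init/join; purely for illustration, a minimal KubeletConfiguration has this shape (the field values are assumptions, not taken from the log):

    # /var/lib/kubelet/config.yaml -- illustrative sketch only; kubeadm normally generates this file
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd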
Oct 31 01:10:45.970058 sshd[1937]: Connection closed by 139.178.89.65 port 58262 Oct 31 01:10:45.970386 sshd-session[1934]: pam_unix(sshd:session): session closed for user core Oct 31 01:10:45.981381 systemd[1]: sshd@2-139.178.70.100:22-139.178.89.65:58262.service: Deactivated successfully. Oct 31 01:10:45.982544 systemd[1]: session-5.scope: Deactivated successfully. Oct 31 01:10:45.983251 systemd-logind[1650]: Session 5 logged out. Waiting for processes to exit. Oct 31 01:10:45.984241 systemd-logind[1650]: Removed session 5. Oct 31 01:10:45.985191 systemd[1]: Started sshd@3-139.178.70.100:22-139.178.89.65:33758.service - OpenSSH per-connection server daemon (139.178.89.65:33758). Oct 31 01:10:46.030760 sshd[1943]: Accepted publickey for core from 139.178.89.65 port 33758 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:10:46.031626 sshd-session[1943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:10:46.034852 systemd-logind[1650]: New session 6 of user core. Oct 31 01:10:46.043952 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 31 01:10:46.094204 sshd[1946]: Connection closed by 139.178.89.65 port 33758 Oct 31 01:10:46.094135 sshd-session[1943]: pam_unix(sshd:session): session closed for user core Oct 31 01:10:46.100237 systemd[1]: sshd@3-139.178.70.100:22-139.178.89.65:33758.service: Deactivated successfully. Oct 31 01:10:46.101366 systemd[1]: session-6.scope: Deactivated successfully. Oct 31 01:10:46.102037 systemd-logind[1650]: Session 6 logged out. Waiting for processes to exit. Oct 31 01:10:46.103916 systemd[1]: Started sshd@4-139.178.70.100:22-139.178.89.65:33762.service - OpenSSH per-connection server daemon (139.178.89.65:33762). Oct 31 01:10:46.105250 systemd-logind[1650]: Removed session 6. Oct 31 01:10:46.143225 sshd[1952]: Accepted publickey for core from 139.178.89.65 port 33762 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:10:46.144415 sshd-session[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:10:46.147931 systemd-logind[1650]: New session 7 of user core. Oct 31 01:10:46.157846 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 31 01:10:46.217423 sudo[1956]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 31 01:10:46.217882 sudo[1956]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 01:10:46.227201 sudo[1956]: pam_unix(sudo:session): session closed for user root Oct 31 01:10:46.228315 sshd[1955]: Connection closed by 139.178.89.65 port 33762 Oct 31 01:10:46.228740 sshd-session[1952]: pam_unix(sshd:session): session closed for user core Oct 31 01:10:46.239382 systemd[1]: sshd@4-139.178.70.100:22-139.178.89.65:33762.service: Deactivated successfully. Oct 31 01:10:46.240636 systemd[1]: session-7.scope: Deactivated successfully. Oct 31 01:10:46.241417 systemd-logind[1650]: Session 7 logged out. Waiting for processes to exit. Oct 31 01:10:46.242572 systemd-logind[1650]: Removed session 7. Oct 31 01:10:46.243908 systemd[1]: Started sshd@5-139.178.70.100:22-139.178.89.65:33770.service - OpenSSH per-connection server daemon (139.178.89.65:33770). 
Oct 31 01:10:46.288930 sshd[1962]: Accepted publickey for core from 139.178.89.65 port 33770 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:10:46.289887 sshd-session[1962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:10:46.293544 systemd-logind[1650]: New session 8 of user core. Oct 31 01:10:46.301943 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 31 01:10:46.352484 sudo[1967]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 31 01:10:46.352685 sudo[1967]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 01:10:46.355711 sudo[1967]: pam_unix(sudo:session): session closed for user root Oct 31 01:10:46.360944 sudo[1966]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 31 01:10:46.361145 sudo[1966]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 01:10:46.369035 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 31 01:10:46.391895 augenrules[1989]: No rules Oct 31 01:10:46.392630 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 01:10:46.392807 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 31 01:10:46.393823 sudo[1966]: pam_unix(sudo:session): session closed for user root Oct 31 01:10:46.394744 sshd[1965]: Connection closed by 139.178.89.65 port 33770 Oct 31 01:10:46.395212 sshd-session[1962]: pam_unix(sshd:session): session closed for user core Oct 31 01:10:46.399906 systemd[1]: sshd@5-139.178.70.100:22-139.178.89.65:33770.service: Deactivated successfully. Oct 31 01:10:46.400680 systemd[1]: session-8.scope: Deactivated successfully. Oct 31 01:10:46.401110 systemd-logind[1650]: Session 8 logged out. Waiting for processes to exit. Oct 31 01:10:46.402306 systemd[1]: Started sshd@6-139.178.70.100:22-139.178.89.65:33780.service - OpenSSH per-connection server daemon (139.178.89.65:33780). Oct 31 01:10:46.403899 systemd-logind[1650]: Removed session 8. Oct 31 01:10:46.440782 sshd[1998]: Accepted publickey for core from 139.178.89.65 port 33780 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:10:46.441613 sshd-session[1998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:10:46.445770 systemd-logind[1650]: New session 9 of user core. Oct 31 01:10:46.452879 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 31 01:10:46.501823 sudo[2002]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 31 01:10:46.501998 sudo[2002]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 01:10:46.842181 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Oct 31 01:10:46.854126 (dockerd)[2019]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 31 01:10:47.064665 dockerd[2019]: time="2025-10-31T01:10:47.064628084Z" level=info msg="Starting up" Oct 31 01:10:47.065411 dockerd[2019]: time="2025-10-31T01:10:47.065395979Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 31 01:10:47.072373 dockerd[2019]: time="2025-10-31T01:10:47.072348213Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 31 01:10:47.081144 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1538400631-merged.mount: Deactivated successfully. Oct 31 01:10:47.089572 systemd[1]: var-lib-docker-metacopy\x2dcheck517290992-merged.mount: Deactivated successfully. Oct 31 01:10:47.099952 dockerd[2019]: time="2025-10-31T01:10:47.099914572Z" level=info msg="Loading containers: start." Oct 31 01:10:47.107741 kernel: Initializing XFRM netlink socket Oct 31 01:10:47.246424 systemd-networkd[1554]: docker0: Link UP Oct 31 01:10:47.247540 dockerd[2019]: time="2025-10-31T01:10:47.247499584Z" level=info msg="Loading containers: done." Oct 31 01:10:47.255420 dockerd[2019]: time="2025-10-31T01:10:47.255254583Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 31 01:10:47.255420 dockerd[2019]: time="2025-10-31T01:10:47.255299561Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 31 01:10:47.255420 dockerd[2019]: time="2025-10-31T01:10:47.255340625Z" level=info msg="Initializing buildkit" Oct 31 01:10:47.264387 dockerd[2019]: time="2025-10-31T01:10:47.264378269Z" level=info msg="Completed buildkit initialization" Oct 31 01:10:47.269645 dockerd[2019]: time="2025-10-31T01:10:47.269632121Z" level=info msg="Daemon has completed initialization" Oct 31 01:10:47.269774 dockerd[2019]: time="2025-10-31T01:10:47.269754985Z" level=info msg="API listen on /run/docker.sock" Oct 31 01:10:47.269844 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 31 01:10:47.857925 containerd[1680]: time="2025-10-31T01:10:47.857890283Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 31 01:10:48.409536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2082797365.mount: Deactivated successfully. 
Oct 31 01:10:49.234856 containerd[1680]: time="2025-10-31T01:10:49.234823580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:49.235526 containerd[1680]: time="2025-10-31T01:10:49.235511038Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 31 01:10:49.235710 containerd[1680]: time="2025-10-31T01:10:49.235695530Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:49.237159 containerd[1680]: time="2025-10-31T01:10:49.237135636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:49.237736 containerd[1680]: time="2025-10-31T01:10:49.237649109Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.379726682s" Oct 31 01:10:49.237736 containerd[1680]: time="2025-10-31T01:10:49.237666884Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 31 01:10:49.238132 containerd[1680]: time="2025-10-31T01:10:49.238036322Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 31 01:10:50.313942 containerd[1680]: time="2025-10-31T01:10:50.313867519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:50.318769 containerd[1680]: time="2025-10-31T01:10:50.318749085Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 31 01:10:50.324006 containerd[1680]: time="2025-10-31T01:10:50.323963776Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:50.332880 containerd[1680]: time="2025-10-31T01:10:50.332841884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:50.333743 containerd[1680]: time="2025-10-31T01:10:50.333471855Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.095420035s" Oct 31 01:10:50.333743 containerd[1680]: time="2025-10-31T01:10:50.333493459Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 31 01:10:50.334120 
containerd[1680]: time="2025-10-31T01:10:50.333967135Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 31 01:10:50.492283 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 31 01:10:50.493752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:10:50.917142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:10:50.919802 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 01:10:50.951856 kubelet[2298]: E1031 01:10:50.951642 2298 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:10:50.953219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:10:50.953358 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 01:10:50.953717 systemd[1]: kubelet.service: Consumed 99ms CPU time, 110.2M memory peak. Oct 31 01:10:51.753200 containerd[1680]: time="2025-10-31T01:10:51.753171349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:51.753875 containerd[1680]: time="2025-10-31T01:10:51.753826711Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Oct 31 01:10:51.754072 containerd[1680]: time="2025-10-31T01:10:51.754054344Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:51.755732 containerd[1680]: time="2025-10-31T01:10:51.755517485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:51.756186 containerd[1680]: time="2025-10-31T01:10:51.756174619Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.422191337s" Oct 31 01:10:51.756239 containerd[1680]: time="2025-10-31T01:10:51.756231545Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 31 01:10:51.756686 containerd[1680]: time="2025-10-31T01:10:51.756671032Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 31 01:10:52.718038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2752263583.mount: Deactivated successfully. 
Oct 31 01:10:52.971718 containerd[1680]: time="2025-10-31T01:10:52.971648365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:52.979527 containerd[1680]: time="2025-10-31T01:10:52.979512200Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Oct 31 01:10:52.986715 containerd[1680]: time="2025-10-31T01:10:52.986699348Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:52.991634 containerd[1680]: time="2025-10-31T01:10:52.991618094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:52.992474 containerd[1680]: time="2025-10-31T01:10:52.992457306Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.235706612s" Oct 31 01:10:52.992558 containerd[1680]: time="2025-10-31T01:10:52.992545778Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 31 01:10:52.992925 containerd[1680]: time="2025-10-31T01:10:52.992905956Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 31 01:10:53.581530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount308318520.mount: Deactivated successfully. 
Oct 31 01:10:54.377060 containerd[1680]: time="2025-10-31T01:10:54.377024497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:54.382502 containerd[1680]: time="2025-10-31T01:10:54.382475722Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Oct 31 01:10:54.383131 containerd[1680]: time="2025-10-31T01:10:54.383101995Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:54.385891 containerd[1680]: time="2025-10-31T01:10:54.385028856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:54.385891 containerd[1680]: time="2025-10-31T01:10:54.385764197Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.392761856s" Oct 31 01:10:54.385891 containerd[1680]: time="2025-10-31T01:10:54.385784239Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 31 01:10:54.386253 containerd[1680]: time="2025-10-31T01:10:54.386227575Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 31 01:10:54.905111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3439736208.mount: Deactivated successfully. 
Oct 31 01:10:54.907671 containerd[1680]: time="2025-10-31T01:10:54.907275252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:54.907671 containerd[1680]: time="2025-10-31T01:10:54.907641900Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Oct 31 01:10:54.907671 containerd[1680]: time="2025-10-31T01:10:54.907652672Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:54.908696 containerd[1680]: time="2025-10-31T01:10:54.908682397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:54.909120 containerd[1680]: time="2025-10-31T01:10:54.909108217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 522.857828ms" Oct 31 01:10:54.909166 containerd[1680]: time="2025-10-31T01:10:54.909158678Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 31 01:10:54.909489 containerd[1680]: time="2025-10-31T01:10:54.909399242Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 31 01:10:59.416830 containerd[1680]: time="2025-10-31T01:10:59.416643010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:59.416830 containerd[1680]: time="2025-10-31T01:10:59.416799867Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Oct 31 01:10:59.417501 containerd[1680]: time="2025-10-31T01:10:59.417488882Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:59.418937 containerd[1680]: time="2025-10-31T01:10:59.418921770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:10:59.419528 containerd[1680]: time="2025-10-31T01:10:59.419515681Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.509911504s" Oct 31 01:10:59.419589 containerd[1680]: time="2025-10-31T01:10:59.419579435Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 31 01:11:00.497386 update_engine[1657]: I20251031 01:11:00.497121 1657 update_attempter.cc:509] Updating boot flags... Oct 31 01:11:01.068430 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
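[Editor's note, not part of the log] For scale, the etcd pull above reports 73,514,593 bytes read over 4.509911504s, i.e. roughly 16 MB/s from registry.k8s.io. A trivial sketch of that arithmetic, using only figures copied from the log:

```go
// Throughput implied by the etcd image pull logged above (values copied from the log).
package main

import "fmt"

func main() {
	const bytesRead = 73514593.0 // "bytes read" reported when the pull stopped
	const seconds = 4.509911504  // pull duration reported by containerd
	fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // prints about 16.3 MB/s
}
```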
Oct 31 01:11:01.071809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:11:01.578225 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 31 01:11:01.578290 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 31 01:11:01.578526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:11:01.578779 systemd[1]: kubelet.service: Consumed 56ms CPU time, 98.3M memory peak. Oct 31 01:11:01.587300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:11:01.601667 systemd[1]: Reload requested from client PID 2472 ('systemctl') (unit session-9.scope)... Oct 31 01:11:01.601678 systemd[1]: Reloading... Oct 31 01:11:01.674751 zram_generator::config[2517]: No configuration found. Oct 31 01:11:01.746241 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 31 01:11:01.814502 systemd[1]: Reloading finished in 212 ms. Oct 31 01:11:01.847288 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 31 01:11:01.847346 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 31 01:11:01.847520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:11:01.848630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:11:02.227505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:11:02.236020 (kubelet)[2583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 01:11:02.314400 kubelet[2583]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 01:11:02.314400 kubelet[2583]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 01:11:02.314659 kubelet[2583]: I1031 01:11:02.314453 2583 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 01:11:02.704837 kubelet[2583]: I1031 01:11:02.704818 2583 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 31 01:11:02.704837 kubelet[2583]: I1031 01:11:02.704832 2583 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 01:11:02.704916 kubelet[2583]: I1031 01:11:02.704846 2583 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 31 01:11:02.704916 kubelet[2583]: I1031 01:11:02.704851 2583 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 31 01:11:02.704972 kubelet[2583]: I1031 01:11:02.704961 2583 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 01:11:02.715639 kubelet[2583]: E1031 01:11:02.715614 2583 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 31 01:11:02.716534 kubelet[2583]: I1031 01:11:02.716482 2583 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 01:11:02.731306 kubelet[2583]: I1031 01:11:02.731292 2583 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 31 01:11:02.740651 kubelet[2583]: I1031 01:11:02.739926 2583 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 31 01:11:02.740882 kubelet[2583]: I1031 01:11:02.740867 2583 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 01:11:02.742105 kubelet[2583]: I1031 01:11:02.740885 2583 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 01:11:02.742180 kubelet[2583]: I1031 01:11:02.742107 2583 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 01:11:02.742180 kubelet[2583]: I1031 01:11:02.742114 2583 container_manager_linux.go:306] "Creating device plugin manager" Oct 31 01:11:02.742180 kubelet[2583]: I1031 01:11:02.742165 2583 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 31 01:11:02.742878 kubelet[2583]: I1031 01:11:02.742867 2583 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:11:02.742982 kubelet[2583]: I1031 01:11:02.742972 2583 kubelet.go:475] "Attempting to sync node with API server" Oct 31 
01:11:02.742982 kubelet[2583]: I1031 01:11:02.742981 2583 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 01:11:02.744237 kubelet[2583]: I1031 01:11:02.744030 2583 kubelet.go:387] "Adding apiserver pod source" Oct 31 01:11:02.744237 kubelet[2583]: I1031 01:11:02.744047 2583 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 01:11:02.744237 kubelet[2583]: E1031 01:11:02.744067 2583 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 01:11:02.747297 kubelet[2583]: E1031 01:11:02.747282 2583 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 01:11:02.747432 kubelet[2583]: I1031 01:11:02.747419 2583 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 31 01:11:02.749822 kubelet[2583]: I1031 01:11:02.749584 2583 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 01:11:02.749822 kubelet[2583]: I1031 01:11:02.749602 2583 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 31 01:11:02.752614 kubelet[2583]: W1031 01:11:02.752599 2583 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 31 01:11:02.762461 kubelet[2583]: I1031 01:11:02.762446 2583 server.go:1262] "Started kubelet" Oct 31 01:11:02.773167 kubelet[2583]: I1031 01:11:02.773149 2583 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 01:11:02.777794 kubelet[2583]: I1031 01:11:02.777774 2583 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 01:11:02.823370 kubelet[2583]: I1031 01:11:02.823301 2583 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 31 01:11:02.838963 kubelet[2583]: I1031 01:11:02.838944 2583 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 01:11:02.840921 kubelet[2583]: I1031 01:11:02.840884 2583 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 31 01:11:02.841026 kubelet[2583]: E1031 01:11:02.841009 2583 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:11:02.854060 kubelet[2583]: I1031 01:11:02.854010 2583 server.go:310] "Adding debug handlers to kubelet server" Oct 31 01:11:02.855182 kubelet[2583]: I1031 01:11:02.855135 2583 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 01:11:02.855182 kubelet[2583]: I1031 01:11:02.855169 2583 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 31 01:11:02.855345 kubelet[2583]: I1031 01:11:02.855315 2583 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 01:11:02.869350 kubelet[2583]: I1031 01:11:02.869330 2583 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 31 01:11:02.869400 kubelet[2583]: I1031 01:11:02.869365 2583 reconciler.go:29] "Reconciler: start to sync state" Oct 31 01:11:02.869774 kubelet[2583]: I1031 01:11:02.869756 2583 factory.go:223] Registration of the systemd container factory successfully Oct 31 01:11:02.869829 kubelet[2583]: I1031 01:11:02.869814 2583 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 01:11:02.870571 kubelet[2583]: E1031 01:11:02.870551 2583 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="200ms" Oct 31 01:11:02.870647 kubelet[2583]: E1031 01:11:02.870629 2583 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 01:11:02.871880 kubelet[2583]: I1031 01:11:02.871746 2583 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Oct 31 01:11:02.871880 kubelet[2583]: I1031 01:11:02.871759 2583 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 31 01:11:02.871880 kubelet[2583]: I1031 01:11:02.871774 2583 kubelet.go:2427] "Starting kubelet main sync loop" Oct 31 01:11:02.871880 kubelet[2583]: E1031 01:11:02.871799 2583 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 01:11:02.874744 kubelet[2583]: E1031 01:11:02.874372 2583 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 01:11:02.876902 kubelet[2583]: E1031 01:11:02.866370 2583 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.100:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18736e373d48a315 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 01:11:02.762423061 +0000 UTC m=+0.523950968,LastTimestamp:2025-10-31 01:11:02.762423061 +0000 UTC m=+0.523950968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 31 01:11:02.908468 kubelet[2583]: I1031 01:11:02.908449 2583 factory.go:223] Registration of the containerd container factory successfully Oct 31 01:11:02.930117 kubelet[2583]: I1031 01:11:02.929991 2583 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 01:11:02.930117 kubelet[2583]: I1031 01:11:02.930001 2583 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 01:11:02.930117 kubelet[2583]: I1031 01:11:02.930011 2583 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:11:02.941943 kubelet[2583]: E1031 01:11:02.941920 2583 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:11:02.972259 kubelet[2583]: E1031 01:11:02.972208 2583 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 31 01:11:03.042338 kubelet[2583]: E1031 01:11:03.042321 2583 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:11:03.071718 kubelet[2583]: E1031 01:11:03.071699 2583 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="400ms" Oct 31 01:11:03.092666 kubelet[2583]: I1031 01:11:03.092602 2583 policy_none.go:49] "None policy: Start" Oct 31 01:11:03.092666 kubelet[2583]: I1031 01:11:03.092613 2583 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 31 01:11:03.092666 kubelet[2583]: I1031 01:11:03.092620 2583 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 31 
01:11:03.093080 kubelet[2583]: I1031 01:11:03.093066 2583 policy_none.go:47] "Start" Oct 31 01:11:03.095658 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 31 01:11:03.108108 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 31 01:11:03.110247 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 31 01:11:03.115182 kubelet[2583]: E1031 01:11:03.115166 2583 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 01:11:03.115270 kubelet[2583]: I1031 01:11:03.115258 2583 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 01:11:03.115297 kubelet[2583]: I1031 01:11:03.115268 2583 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 01:11:03.116634 kubelet[2583]: I1031 01:11:03.116484 2583 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 01:11:03.117218 kubelet[2583]: E1031 01:11:03.117207 2583 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 01:11:03.117442 kubelet[2583]: E1031 01:11:03.117226 2583 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 31 01:11:03.179778 systemd[1]: Created slice kubepods-burstable-pod0135e446c74040ba35521da994c5b98a.slice - libcontainer container kubepods-burstable-pod0135e446c74040ba35521da994c5b98a.slice. Oct 31 01:11:03.195767 kubelet[2583]: E1031 01:11:03.195747 2583 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:11:03.197162 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 31 01:11:03.207584 kubelet[2583]: E1031 01:11:03.207568 2583 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:11:03.209996 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. 
Oct 31 01:11:03.211452 kubelet[2583]: E1031 01:11:03.211436 2583 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:11:03.216619 kubelet[2583]: I1031 01:11:03.216410 2583 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:11:03.216685 kubelet[2583]: E1031 01:11:03.216667 2583 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Oct 31 01:11:03.271458 kubelet[2583]: I1031 01:11:03.271416 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 31 01:11:03.271539 kubelet[2583]: I1031 01:11:03.271529 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:11:03.271670 kubelet[2583]: I1031 01:11:03.271601 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:11:03.271670 kubelet[2583]: I1031 01:11:03.271619 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0135e446c74040ba35521da994c5b98a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0135e446c74040ba35521da994c5b98a\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:11:03.271670 kubelet[2583]: I1031 01:11:03.271631 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0135e446c74040ba35521da994c5b98a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0135e446c74040ba35521da994c5b98a\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:11:03.271670 kubelet[2583]: I1031 01:11:03.271641 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0135e446c74040ba35521da994c5b98a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0135e446c74040ba35521da994c5b98a\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:11:03.271670 kubelet[2583]: I1031 01:11:03.271652 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:11:03.271892 kubelet[2583]: I1031 01:11:03.271819 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:11:03.271892 kubelet[2583]: I1031 01:11:03.271842 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:11:03.418287 kubelet[2583]: I1031 01:11:03.418248 2583 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:11:03.418631 kubelet[2583]: E1031 01:11:03.418482 2583 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Oct 31 01:11:03.472443 kubelet[2583]: E1031 01:11:03.472410 2583 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="800ms" Oct 31 01:11:03.498278 containerd[1680]: time="2025-10-31T01:11:03.498253639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0135e446c74040ba35521da994c5b98a,Namespace:kube-system,Attempt:0,}" Oct 31 01:11:03.509502 containerd[1680]: time="2025-10-31T01:11:03.509480316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 31 01:11:03.512737 containerd[1680]: time="2025-10-31T01:11:03.512660500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 31 01:11:03.706684 kubelet[2583]: E1031 01:11:03.706661 2583 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 01:11:03.712946 kubelet[2583]: E1031 01:11:03.712923 2583 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 01:11:03.798943 kubelet[2583]: E1031 01:11:03.798913 2583 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 01:11:03.819703 kubelet[2583]: I1031 01:11:03.819658 2583 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:11:03.820034 kubelet[2583]: E1031 01:11:03.819941 2583 kubelet_node_status.go:107] 
"Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Oct 31 01:11:03.955408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount283925372.mount: Deactivated successfully. Oct 31 01:11:03.957214 containerd[1680]: time="2025-10-31T01:11:03.956803208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 01:11:03.957494 containerd[1680]: time="2025-10-31T01:11:03.957428227Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 01:11:03.957872 containerd[1680]: time="2025-10-31T01:11:03.957855670Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 01:11:03.958490 containerd[1680]: time="2025-10-31T01:11:03.958480131Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 31 01:11:03.958961 containerd[1680]: time="2025-10-31T01:11:03.958950181Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 01:11:03.959672 containerd[1680]: time="2025-10-31T01:11:03.959655050Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 31 01:11:03.960543 containerd[1680]: time="2025-10-31T01:11:03.960529034Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 31 01:11:03.961244 containerd[1680]: time="2025-10-31T01:11:03.961233488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 01:11:03.961732 containerd[1680]: time="2025-10-31T01:11:03.961467709Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 451.251552ms" Oct 31 01:11:03.962376 containerd[1680]: time="2025-10-31T01:11:03.962361093Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 448.782761ms" Oct 31 01:11:03.963207 containerd[1680]: time="2025-10-31T01:11:03.963195517Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 463.515584ms" Oct 31 01:11:04.118923 
kubelet[2583]: E1031 01:11:04.118896 2583 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 01:11:04.133581 containerd[1680]: time="2025-10-31T01:11:04.133549186Z" level=info msg="connecting to shim 7325a3d583585dcf815d7d7d275c01078dfec24a7ac0bc893050d2283c1af5bd" address="unix:///run/containerd/s/4884eacda9d3559ef3102758371dbac5b8bb8c1af8f8f4192ac452d9a91eb8b7" namespace=k8s.io protocol=ttrpc version=3 Oct 31 01:11:04.133788 containerd[1680]: time="2025-10-31T01:11:04.133772232Z" level=info msg="connecting to shim 85fd021eeda2aea713325dce7c68c5eea5f64b0f7150b3b0c058f337daeefbdb" address="unix:///run/containerd/s/3dfc5041ad8c8c784f4430ac46f24dc6284f1410df29e1cfc7dc44c163889950" namespace=k8s.io protocol=ttrpc version=3 Oct 31 01:11:04.136889 containerd[1680]: time="2025-10-31T01:11:04.136875351Z" level=info msg="connecting to shim 03d3c9ebcbe5bac7e0c8cbc2aeb0ab7c70cf1f27e3a1bf4ed515e8e4e319b096" address="unix:///run/containerd/s/3b77fe2588bfc10f51d63c5744799e8459cc2140e3b878a818e3de8749bf7d9d" namespace=k8s.io protocol=ttrpc version=3 Oct 31 01:11:04.188844 systemd[1]: Started cri-containerd-03d3c9ebcbe5bac7e0c8cbc2aeb0ab7c70cf1f27e3a1bf4ed515e8e4e319b096.scope - libcontainer container 03d3c9ebcbe5bac7e0c8cbc2aeb0ab7c70cf1f27e3a1bf4ed515e8e4e319b096. Oct 31 01:11:04.192462 systemd[1]: Started cri-containerd-7325a3d583585dcf815d7d7d275c01078dfec24a7ac0bc893050d2283c1af5bd.scope - libcontainer container 7325a3d583585dcf815d7d7d275c01078dfec24a7ac0bc893050d2283c1af5bd. Oct 31 01:11:04.193838 systemd[1]: Started cri-containerd-85fd021eeda2aea713325dce7c68c5eea5f64b0f7150b3b0c058f337daeefbdb.scope - libcontainer container 85fd021eeda2aea713325dce7c68c5eea5f64b0f7150b3b0c058f337daeefbdb. 
Oct 31 01:11:04.243936 containerd[1680]: time="2025-10-31T01:11:04.243432481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"85fd021eeda2aea713325dce7c68c5eea5f64b0f7150b3b0c058f337daeefbdb\"" Oct 31 01:11:04.250173 containerd[1680]: time="2025-10-31T01:11:04.250152134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0135e446c74040ba35521da994c5b98a,Namespace:kube-system,Attempt:0,} returns sandbox id \"03d3c9ebcbe5bac7e0c8cbc2aeb0ab7c70cf1f27e3a1bf4ed515e8e4e319b096\"" Oct 31 01:11:04.251044 containerd[1680]: time="2025-10-31T01:11:04.250822111Z" level=info msg="CreateContainer within sandbox \"85fd021eeda2aea713325dce7c68c5eea5f64b0f7150b3b0c058f337daeefbdb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 01:11:04.253680 containerd[1680]: time="2025-10-31T01:11:04.253663672Z" level=info msg="CreateContainer within sandbox \"03d3c9ebcbe5bac7e0c8cbc2aeb0ab7c70cf1f27e3a1bf4ed515e8e4e319b096\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 01:11:04.257663 containerd[1680]: time="2025-10-31T01:11:04.257649119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7325a3d583585dcf815d7d7d275c01078dfec24a7ac0bc893050d2283c1af5bd\"" Oct 31 01:11:04.260019 containerd[1680]: time="2025-10-31T01:11:04.259993989Z" level=info msg="CreateContainer within sandbox \"7325a3d583585dcf815d7d7d275c01078dfec24a7ac0bc893050d2283c1af5bd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 01:11:04.260077 containerd[1680]: time="2025-10-31T01:11:04.260063901Z" level=info msg="Container afb8a27337488fdf62519ce6ad865baa61847e4a691ebb88d52b88e9150cd979: CDI devices from CRI Config.CDIDevices: []" Oct 31 01:11:04.261606 containerd[1680]: time="2025-10-31T01:11:04.261550839Z" level=info msg="Container 3e7ba7ace836641b6771560e36fe9e91c1623a4a04a6d9e6af970cd4cb3eae9d: CDI devices from CRI Config.CDIDevices: []" Oct 31 01:11:04.266107 containerd[1680]: time="2025-10-31T01:11:04.266096207Z" level=info msg="Container 73c5a4333e113ab36e64ddd3e04c8f4efc5e5f7a953595515fc410709740ebf2: CDI devices from CRI Config.CDIDevices: []" Oct 31 01:11:04.271445 containerd[1680]: time="2025-10-31T01:11:04.271296757Z" level=info msg="CreateContainer within sandbox \"7325a3d583585dcf815d7d7d275c01078dfec24a7ac0bc893050d2283c1af5bd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"73c5a4333e113ab36e64ddd3e04c8f4efc5e5f7a953595515fc410709740ebf2\"" Oct 31 01:11:04.271784 containerd[1680]: time="2025-10-31T01:11:04.271771139Z" level=info msg="StartContainer for \"73c5a4333e113ab36e64ddd3e04c8f4efc5e5f7a953595515fc410709740ebf2\"" Oct 31 01:11:04.272047 containerd[1680]: time="2025-10-31T01:11:04.272035459Z" level=info msg="CreateContainer within sandbox \"03d3c9ebcbe5bac7e0c8cbc2aeb0ab7c70cf1f27e3a1bf4ed515e8e4e319b096\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"afb8a27337488fdf62519ce6ad865baa61847e4a691ebb88d52b88e9150cd979\"" Oct 31 01:11:04.272447 containerd[1680]: time="2025-10-31T01:11:04.272437672Z" level=info msg="CreateContainer within sandbox \"85fd021eeda2aea713325dce7c68c5eea5f64b0f7150b3b0c058f337daeefbdb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns 
container id \"3e7ba7ace836641b6771560e36fe9e91c1623a4a04a6d9e6af970cd4cb3eae9d\"" Oct 31 01:11:04.272756 containerd[1680]: time="2025-10-31T01:11:04.272644519Z" level=info msg="StartContainer for \"afb8a27337488fdf62519ce6ad865baa61847e4a691ebb88d52b88e9150cd979\"" Oct 31 01:11:04.273232 containerd[1680]: time="2025-10-31T01:11:04.273219510Z" level=info msg="connecting to shim afb8a27337488fdf62519ce6ad865baa61847e4a691ebb88d52b88e9150cd979" address="unix:///run/containerd/s/3b77fe2588bfc10f51d63c5744799e8459cc2140e3b878a818e3de8749bf7d9d" protocol=ttrpc version=3 Oct 31 01:11:04.273296 containerd[1680]: time="2025-10-31T01:11:04.273281397Z" level=info msg="StartContainer for \"3e7ba7ace836641b6771560e36fe9e91c1623a4a04a6d9e6af970cd4cb3eae9d\"" Oct 31 01:11:04.273464 kubelet[2583]: E1031 01:11:04.273447 2583 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="1.6s" Oct 31 01:11:04.273816 containerd[1680]: time="2025-10-31T01:11:04.273243382Z" level=info msg="connecting to shim 73c5a4333e113ab36e64ddd3e04c8f4efc5e5f7a953595515fc410709740ebf2" address="unix:///run/containerd/s/4884eacda9d3559ef3102758371dbac5b8bb8c1af8f8f4192ac452d9a91eb8b7" protocol=ttrpc version=3 Oct 31 01:11:04.274283 containerd[1680]: time="2025-10-31T01:11:04.274268151Z" level=info msg="connecting to shim 3e7ba7ace836641b6771560e36fe9e91c1623a4a04a6d9e6af970cd4cb3eae9d" address="unix:///run/containerd/s/3dfc5041ad8c8c784f4430ac46f24dc6284f1410df29e1cfc7dc44c163889950" protocol=ttrpc version=3 Oct 31 01:11:04.287955 systemd[1]: Started cri-containerd-afb8a27337488fdf62519ce6ad865baa61847e4a691ebb88d52b88e9150cd979.scope - libcontainer container afb8a27337488fdf62519ce6ad865baa61847e4a691ebb88d52b88e9150cd979. Oct 31 01:11:04.290651 systemd[1]: Started cri-containerd-3e7ba7ace836641b6771560e36fe9e91c1623a4a04a6d9e6af970cd4cb3eae9d.scope - libcontainer container 3e7ba7ace836641b6771560e36fe9e91c1623a4a04a6d9e6af970cd4cb3eae9d. Oct 31 01:11:04.291999 systemd[1]: Started cri-containerd-73c5a4333e113ab36e64ddd3e04c8f4efc5e5f7a953595515fc410709740ebf2.scope - libcontainer container 73c5a4333e113ab36e64ddd3e04c8f4efc5e5f7a953595515fc410709740ebf2. 
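[Editor's note, not part of the log] Every "connection refused" error above — the certificate manager, the Node/Service/CSIDriver/RuntimeClass reflectors, and the lease controller — points at the same cause: nothing is listening on 139.178.70.100:6443 until the kube-apiserver static pod whose sandbox and container are being created here comes up. Note also that the lease controller's logged retry interval doubles across attempts (200ms, 400ms, 800ms, 1.6s). A minimal probe, offered only as an illustration and not part of the log, reproduces the same dial failure:

```go
// Minimal sketch (assumed, not from the log): probe the API server endpoint the
// kubelet keeps retrying and report the same kind of dial error.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const apiServer = "139.178.70.100:6443" // endpoint taken from the log lines above
	conn, err := net.DialTimeout("tcp", apiServer, 2*time.Second)
	if err != nil {
		fmt.Println("not reachable yet:", err) // e.g. "connect: connection refused"
		return
	}
	defer conn.Close()
	fmt.Println("API server is accepting connections")
}
```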
Oct 31 01:11:04.328990 containerd[1680]: time="2025-10-31T01:11:04.328959435Z" level=info msg="StartContainer for \"73c5a4333e113ab36e64ddd3e04c8f4efc5e5f7a953595515fc410709740ebf2\" returns successfully" Oct 31 01:11:04.344059 containerd[1680]: time="2025-10-31T01:11:04.343991869Z" level=info msg="StartContainer for \"afb8a27337488fdf62519ce6ad865baa61847e4a691ebb88d52b88e9150cd979\" returns successfully" Oct 31 01:11:04.361641 containerd[1680]: time="2025-10-31T01:11:04.361617236Z" level=info msg="StartContainer for \"3e7ba7ace836641b6771560e36fe9e91c1623a4a04a6d9e6af970cd4cb3eae9d\" returns successfully" Oct 31 01:11:04.622162 kubelet[2583]: I1031 01:11:04.622128 2583 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:11:04.622576 kubelet[2583]: E1031 01:11:04.622562 2583 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Oct 31 01:11:04.939362 kubelet[2583]: E1031 01:11:04.939034 2583 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:11:04.939528 kubelet[2583]: E1031 01:11:04.939445 2583 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:11:04.940699 kubelet[2583]: E1031 01:11:04.940692 2583 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:11:05.880549 kubelet[2583]: E1031 01:11:05.880495 2583 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 31 01:11:05.942944 kubelet[2583]: E1031 01:11:05.942928 2583 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:11:05.943195 kubelet[2583]: E1031 01:11:05.943121 2583 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:11:06.224022 kubelet[2583]: I1031 01:11:06.223950 2583 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:11:06.232401 kubelet[2583]: I1031 01:11:06.232347 2583 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 01:11:06.232401 kubelet[2583]: E1031 01:11:06.232366 2583 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 31 01:11:06.270080 kubelet[2583]: I1031 01:11:06.269828 2583 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 01:11:06.272633 kubelet[2583]: E1031 01:11:06.272619 2583 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 31 01:11:06.272714 kubelet[2583]: I1031 01:11:06.272705 2583 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 01:11:06.273743 kubelet[2583]: E1031 01:11:06.273705 2583 kubelet.go:3221] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 31 01:11:06.273858 kubelet[2583]: I1031 01:11:06.273808 2583 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 01:11:06.274855 kubelet[2583]: E1031 01:11:06.274834 2583 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 31 01:11:06.750739 kubelet[2583]: I1031 01:11:06.750683 2583 apiserver.go:52] "Watching apiserver" Oct 31 01:11:06.770963 kubelet[2583]: I1031 01:11:06.770939 2583 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 31 01:11:06.942622 kubelet[2583]: I1031 01:11:06.942449 2583 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 01:11:07.815520 systemd[1]: Reload requested from client PID 2858 ('systemctl') (unit session-9.scope)... Oct 31 01:11:07.815540 systemd[1]: Reloading... Oct 31 01:11:07.878770 zram_generator::config[2906]: No configuration found. Oct 31 01:11:07.956322 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 31 01:11:08.031063 systemd[1]: Reloading finished in 215 ms. Oct 31 01:11:08.050516 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:11:08.064076 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 01:11:08.064310 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:11:08.064372 systemd[1]: kubelet.service: Consumed 709ms CPU time, 121.9M memory peak. Oct 31 01:11:08.066349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:11:08.293492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:11:08.302769 (kubelet)[2970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 01:11:08.374602 kubelet[2970]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 01:11:08.374602 kubelet[2970]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 01:11:08.388561 kubelet[2970]: I1031 01:11:08.388517 2970 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 01:11:08.396052 kubelet[2970]: I1031 01:11:08.396032 2970 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 31 01:11:08.396052 kubelet[2970]: I1031 01:11:08.396047 2970 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 01:11:08.407938 kubelet[2970]: I1031 01:11:08.407907 2970 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 31 01:11:08.407938 kubelet[2970]: I1031 01:11:08.407919 2970 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 31 01:11:08.408149 kubelet[2970]: I1031 01:11:08.408085 2970 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 01:11:08.408968 kubelet[2970]: I1031 01:11:08.408955 2970 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 31 01:11:08.421768 kubelet[2970]: I1031 01:11:08.421748 2970 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 01:11:08.430408 kubelet[2970]: I1031 01:11:08.430318 2970 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 31 01:11:08.434885 kubelet[2970]: I1031 01:11:08.434871 2970 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 31 01:11:08.436423 kubelet[2970]: I1031 01:11:08.436308 2970 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 01:11:08.437043 kubelet[2970]: I1031 01:11:08.436443 2970 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 01:11:08.437043 kubelet[2970]: I1031 01:11:08.436801 2970 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 01:11:08.437043 kubelet[2970]: I1031 01:11:08.436808 2970 container_manager_linux.go:306] "Creating device plugin manager" Oct 31 01:11:08.437043 kubelet[2970]: I1031 01:11:08.436838 2970 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 31 01:11:08.437632 kubelet[2970]: I1031 01:11:08.437620 2970 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:11:08.439269 kubelet[2970]: I1031 01:11:08.439258 2970 kubelet.go:475] "Attempting to sync node with API server" Oct 31 01:11:08.439306 kubelet[2970]: I1031 01:11:08.439276 2970 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 01:11:08.439306 kubelet[2970]: I1031 01:11:08.439296 2970 kubelet.go:387] "Adding apiserver pod source" Oct 
31 01:11:08.439306 kubelet[2970]: I1031 01:11:08.439306 2970 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 01:11:08.442927 kubelet[2970]: I1031 01:11:08.442911 2970 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 31 01:11:08.443695 kubelet[2970]: I1031 01:11:08.443682 2970 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 01:11:08.443736 kubelet[2970]: I1031 01:11:08.443703 2970 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 31 01:11:08.445761 kubelet[2970]: I1031 01:11:08.445217 2970 server.go:1262] "Started kubelet" Oct 31 01:11:08.447373 kubelet[2970]: I1031 01:11:08.446633 2970 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 01:11:08.455039 kubelet[2970]: I1031 01:11:08.455011 2970 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 01:11:08.456554 kubelet[2970]: I1031 01:11:08.456542 2970 server.go:310] "Adding debug handlers to kubelet server" Oct 31 01:11:08.460453 kubelet[2970]: I1031 01:11:08.460443 2970 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 01:11:08.461297 kubelet[2970]: I1031 01:11:08.461275 2970 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 01:11:08.461332 kubelet[2970]: I1031 01:11:08.461308 2970 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 31 01:11:08.461407 kubelet[2970]: I1031 01:11:08.461397 2970 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 01:11:08.461449 kubelet[2970]: I1031 01:11:08.461440 2970 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 31 01:11:08.463559 kubelet[2970]: E1031 01:11:08.463549 2970 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 01:11:08.464143 kubelet[2970]: I1031 01:11:08.464089 2970 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 31 01:11:08.464175 kubelet[2970]: I1031 01:11:08.464156 2970 reconciler.go:29] "Reconciler: start to sync state" Oct 31 01:11:08.465957 kubelet[2970]: I1031 01:11:08.465183 2970 factory.go:223] Registration of the systemd container factory successfully Oct 31 01:11:08.465957 kubelet[2970]: I1031 01:11:08.465237 2970 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 01:11:08.465957 kubelet[2970]: I1031 01:11:08.465484 2970 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 31 01:11:08.466297 kubelet[2970]: I1031 01:11:08.466263 2970 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 31 01:11:08.466297 kubelet[2970]: I1031 01:11:08.466278 2970 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 31 01:11:08.466297 kubelet[2970]: I1031 01:11:08.466290 2970 kubelet.go:2427] "Starting kubelet main sync loop" Oct 31 01:11:08.466804 kubelet[2970]: E1031 01:11:08.466313 2970 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 01:11:08.467802 kubelet[2970]: I1031 01:11:08.467669 2970 factory.go:223] Registration of the containerd container factory successfully Oct 31 01:11:08.511796 kubelet[2970]: I1031 01:11:08.511782 2970 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 01:11:08.511890 kubelet[2970]: I1031 01:11:08.511883 2970 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 01:11:08.511929 kubelet[2970]: I1031 01:11:08.511925 2970 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:11:08.512946 kubelet[2970]: I1031 01:11:08.512494 2970 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 01:11:08.512946 kubelet[2970]: I1031 01:11:08.512502 2970 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 01:11:08.512946 kubelet[2970]: I1031 01:11:08.512537 2970 policy_none.go:49] "None policy: Start" Oct 31 01:11:08.512946 kubelet[2970]: I1031 01:11:08.512544 2970 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 31 01:11:08.512946 kubelet[2970]: I1031 01:11:08.512551 2970 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 31 01:11:08.512946 kubelet[2970]: I1031 01:11:08.512607 2970 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 31 01:11:08.512946 kubelet[2970]: I1031 01:11:08.512613 2970 policy_none.go:47] "Start" Oct 31 01:11:08.515389 kubelet[2970]: E1031 01:11:08.515379 2970 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 01:11:08.515663 kubelet[2970]: I1031 01:11:08.515657 2970 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 01:11:08.515717 kubelet[2970]: I1031 01:11:08.515703 2970 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 01:11:08.515949 kubelet[2970]: I1031 01:11:08.515942 2970 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 01:11:08.517606 kubelet[2970]: E1031 01:11:08.517593 2970 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 01:11:08.568457 kubelet[2970]: I1031 01:11:08.568435 2970 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 01:11:08.569752 kubelet[2970]: I1031 01:11:08.568441 2970 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 01:11:08.569927 kubelet[2970]: I1031 01:11:08.569905 2970 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 01:11:08.573039 kubelet[2970]: E1031 01:11:08.573002 2970 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 31 01:11:08.619070 kubelet[2970]: I1031 01:11:08.619050 2970 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:11:08.623550 kubelet[2970]: I1031 01:11:08.623102 2970 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 31 01:11:08.623550 kubelet[2970]: I1031 01:11:08.623150 2970 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 01:11:08.665540 kubelet[2970]: I1031 01:11:08.665472 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 31 01:11:08.665660 kubelet[2970]: I1031 01:11:08.665648 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0135e446c74040ba35521da994c5b98a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0135e446c74040ba35521da994c5b98a\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:11:08.665770 kubelet[2970]: I1031 01:11:08.665759 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0135e446c74040ba35521da994c5b98a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0135e446c74040ba35521da994c5b98a\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:11:08.665856 kubelet[2970]: I1031 01:11:08.665848 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:11:08.665933 kubelet[2970]: I1031 01:11:08.665923 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:11:08.666021 kubelet[2970]: I1031 01:11:08.666006 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:11:08.666145 
kubelet[2970]: I1031 01:11:08.666106 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0135e446c74040ba35521da994c5b98a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0135e446c74040ba35521da994c5b98a\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:11:08.666145 kubelet[2970]: I1031 01:11:08.666122 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:11:08.666248 kubelet[2970]: I1031 01:11:08.666232 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:11:09.441535 kubelet[2970]: I1031 01:11:09.440770 2970 apiserver.go:52] "Watching apiserver" Oct 31 01:11:09.464657 kubelet[2970]: I1031 01:11:09.464634 2970 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 31 01:11:09.503263 kubelet[2970]: I1031 01:11:09.503180 2970 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 01:11:09.510978 kubelet[2970]: E1031 01:11:09.510937 2970 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 31 01:11:09.541268 kubelet[2970]: I1031 01:11:09.541228 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.541216496 podStartE2EDuration="1.541216496s" podCreationTimestamp="2025-10-31 01:11:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:11:09.532948828 +0000 UTC m=+1.212026586" watchObservedRunningTime="2025-10-31 01:11:09.541216496 +0000 UTC m=+1.220294243" Oct 31 01:11:09.549025 kubelet[2970]: I1031 01:11:09.548960 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.548948737 podStartE2EDuration="3.548948737s" podCreationTimestamp="2025-10-31 01:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:11:09.541517362 +0000 UTC m=+1.220595113" watchObservedRunningTime="2025-10-31 01:11:09.548948737 +0000 UTC m=+1.228026488" Oct 31 01:11:09.554297 kubelet[2970]: I1031 01:11:09.554263 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.554252215 podStartE2EDuration="1.554252215s" podCreationTimestamp="2025-10-31 01:11:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:11:09.548987151 +0000 UTC m=+1.228064907" watchObservedRunningTime="2025-10-31 01:11:09.554252215 +0000 UTC m=+1.233329970" Oct 31 
01:11:13.417804 kubelet[2970]: I1031 01:11:13.417775 2970 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 01:11:13.418368 containerd[1680]: time="2025-10-31T01:11:13.418268129Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 31 01:11:13.419086 kubelet[2970]: I1031 01:11:13.418440 2970 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 01:11:14.217976 systemd[1]: Created slice kubepods-besteffort-pod0b0de15b_2d23_44eb_9e49_38ca5beae3c4.slice - libcontainer container kubepods-besteffort-pod0b0de15b_2d23_44eb_9e49_38ca5beae3c4.slice. Oct 31 01:11:14.302733 kubelet[2970]: I1031 01:11:14.302705 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b0de15b-2d23-44eb-9e49-38ca5beae3c4-xtables-lock\") pod \"kube-proxy-7pqrb\" (UID: \"0b0de15b-2d23-44eb-9e49-38ca5beae3c4\") " pod="kube-system/kube-proxy-7pqrb" Oct 31 01:11:14.302885 kubelet[2970]: I1031 01:11:14.302744 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b0de15b-2d23-44eb-9e49-38ca5beae3c4-lib-modules\") pod \"kube-proxy-7pqrb\" (UID: \"0b0de15b-2d23-44eb-9e49-38ca5beae3c4\") " pod="kube-system/kube-proxy-7pqrb" Oct 31 01:11:14.302885 kubelet[2970]: I1031 01:11:14.302759 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c894\" (UniqueName: \"kubernetes.io/projected/0b0de15b-2d23-44eb-9e49-38ca5beae3c4-kube-api-access-8c894\") pod \"kube-proxy-7pqrb\" (UID: \"0b0de15b-2d23-44eb-9e49-38ca5beae3c4\") " pod="kube-system/kube-proxy-7pqrb" Oct 31 01:11:14.302885 kubelet[2970]: I1031 01:11:14.302774 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b0de15b-2d23-44eb-9e49-38ca5beae3c4-kube-proxy\") pod \"kube-proxy-7pqrb\" (UID: \"0b0de15b-2d23-44eb-9e49-38ca5beae3c4\") " pod="kube-system/kube-proxy-7pqrb" Oct 31 01:11:14.412633 kubelet[2970]: E1031 01:11:14.412568 2970 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 31 01:11:14.412633 kubelet[2970]: E1031 01:11:14.412595 2970 projected.go:196] Error preparing data for projected volume kube-api-access-8c894 for pod kube-system/kube-proxy-7pqrb: configmap "kube-root-ca.crt" not found Oct 31 01:11:14.412878 kubelet[2970]: E1031 01:11:14.412785 2970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b0de15b-2d23-44eb-9e49-38ca5beae3c4-kube-api-access-8c894 podName:0b0de15b-2d23-44eb-9e49-38ca5beae3c4 nodeName:}" failed. No retries permitted until 2025-10-31 01:11:14.912770751 +0000 UTC m=+6.591848495 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8c894" (UniqueName: "kubernetes.io/projected/0b0de15b-2d23-44eb-9e49-38ca5beae3c4-kube-api-access-8c894") pod "kube-proxy-7pqrb" (UID: "0b0de15b-2d23-44eb-9e49-38ca5beae3c4") : configmap "kube-root-ca.crt" not found Oct 31 01:11:14.576143 systemd[1]: Created slice kubepods-besteffort-pod7e7a54a0_45c8_42c1_bb84_339f4c69c8d5.slice - libcontainer container kubepods-besteffort-pod7e7a54a0_45c8_42c1_bb84_339f4c69c8d5.slice. 
Oct 31 01:11:14.604676 kubelet[2970]: I1031 01:11:14.604620 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsf7w\" (UniqueName: \"kubernetes.io/projected/7e7a54a0-45c8-42c1-bb84-339f4c69c8d5-kube-api-access-rsf7w\") pod \"tigera-operator-65cdcdfd6d-hfrpk\" (UID: \"7e7a54a0-45c8-42c1-bb84-339f4c69c8d5\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-hfrpk" Oct 31 01:11:14.604676 kubelet[2970]: I1031 01:11:14.604666 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7e7a54a0-45c8-42c1-bb84-339f4c69c8d5-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-hfrpk\" (UID: \"7e7a54a0-45c8-42c1-bb84-339f4c69c8d5\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-hfrpk" Oct 31 01:11:14.881000 containerd[1680]: time="2025-10-31T01:11:14.880921724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-hfrpk,Uid:7e7a54a0-45c8-42c1-bb84-339f4c69c8d5,Namespace:tigera-operator,Attempt:0,}" Oct 31 01:11:14.892952 containerd[1680]: time="2025-10-31T01:11:14.892909871Z" level=info msg="connecting to shim 85306c0651fe713eba45355fbc366bb832b3830b15ee966fabc5919234e64ee6" address="unix:///run/containerd/s/37d60a6e6c9934f222f1c8922f34c8260983efdb08a87842288d7ee62b3da86b" namespace=k8s.io protocol=ttrpc version=3 Oct 31 01:11:14.912913 systemd[1]: Started cri-containerd-85306c0651fe713eba45355fbc366bb832b3830b15ee966fabc5919234e64ee6.scope - libcontainer container 85306c0651fe713eba45355fbc366bb832b3830b15ee966fabc5919234e64ee6. Oct 31 01:11:14.945505 containerd[1680]: time="2025-10-31T01:11:14.945459082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-hfrpk,Uid:7e7a54a0-45c8-42c1-bb84-339f4c69c8d5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"85306c0651fe713eba45355fbc366bb832b3830b15ee966fabc5919234e64ee6\"" Oct 31 01:11:14.946710 containerd[1680]: time="2025-10-31T01:11:14.946689856Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 31 01:11:15.126692 containerd[1680]: time="2025-10-31T01:11:15.126670116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7pqrb,Uid:0b0de15b-2d23-44eb-9e49-38ca5beae3c4,Namespace:kube-system,Attempt:0,}" Oct 31 01:11:15.136995 containerd[1680]: time="2025-10-31T01:11:15.136934376Z" level=info msg="connecting to shim ed648ef0e51d5f57d8ea8a35164e9bcdecd8d5940d901a4565e0dd53986e5fcd" address="unix:///run/containerd/s/fbd17c0f3fdd58059085cdd24e1caee9e293003d7f03b832052f5e97c2da8cbb" namespace=k8s.io protocol=ttrpc version=3 Oct 31 01:11:15.152806 systemd[1]: Started cri-containerd-ed648ef0e51d5f57d8ea8a35164e9bcdecd8d5940d901a4565e0dd53986e5fcd.scope - libcontainer container ed648ef0e51d5f57d8ea8a35164e9bcdecd8d5940d901a4565e0dd53986e5fcd. 
Oct 31 01:11:15.165976 containerd[1680]: time="2025-10-31T01:11:15.165922771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7pqrb,Uid:0b0de15b-2d23-44eb-9e49-38ca5beae3c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed648ef0e51d5f57d8ea8a35164e9bcdecd8d5940d901a4565e0dd53986e5fcd\"" Oct 31 01:11:15.168767 containerd[1680]: time="2025-10-31T01:11:15.168659868Z" level=info msg="CreateContainer within sandbox \"ed648ef0e51d5f57d8ea8a35164e9bcdecd8d5940d901a4565e0dd53986e5fcd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 01:11:15.173390 containerd[1680]: time="2025-10-31T01:11:15.173378773Z" level=info msg="Container 46722a197469eaba9a78617cfb59dc60b85df9060ef1f7362f1bc0fbcbf82e05: CDI devices from CRI Config.CDIDevices: []" Oct 31 01:11:15.176305 containerd[1680]: time="2025-10-31T01:11:15.176255008Z" level=info msg="CreateContainer within sandbox \"ed648ef0e51d5f57d8ea8a35164e9bcdecd8d5940d901a4565e0dd53986e5fcd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"46722a197469eaba9a78617cfb59dc60b85df9060ef1f7362f1bc0fbcbf82e05\"" Oct 31 01:11:15.177076 containerd[1680]: time="2025-10-31T01:11:15.176496502Z" level=info msg="StartContainer for \"46722a197469eaba9a78617cfb59dc60b85df9060ef1f7362f1bc0fbcbf82e05\"" Oct 31 01:11:15.178176 containerd[1680]: time="2025-10-31T01:11:15.178149258Z" level=info msg="connecting to shim 46722a197469eaba9a78617cfb59dc60b85df9060ef1f7362f1bc0fbcbf82e05" address="unix:///run/containerd/s/fbd17c0f3fdd58059085cdd24e1caee9e293003d7f03b832052f5e97c2da8cbb" protocol=ttrpc version=3 Oct 31 01:11:15.193947 systemd[1]: Started cri-containerd-46722a197469eaba9a78617cfb59dc60b85df9060ef1f7362f1bc0fbcbf82e05.scope - libcontainer container 46722a197469eaba9a78617cfb59dc60b85df9060ef1f7362f1bc0fbcbf82e05. Oct 31 01:11:15.220273 containerd[1680]: time="2025-10-31T01:11:15.220251845Z" level=info msg="StartContainer for \"46722a197469eaba9a78617cfb59dc60b85df9060ef1f7362f1bc0fbcbf82e05\" returns successfully" Oct 31 01:11:16.749839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4093739762.mount: Deactivated successfully. 
Oct 31 01:11:16.886812 kubelet[2970]: I1031 01:11:16.886582 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7pqrb" podStartSLOduration=2.886570411 podStartE2EDuration="2.886570411s" podCreationTimestamp="2025-10-31 01:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:11:15.524682391 +0000 UTC m=+7.203760156" watchObservedRunningTime="2025-10-31 01:11:16.886570411 +0000 UTC m=+8.565648161" Oct 31 01:11:17.488085 containerd[1680]: time="2025-10-31T01:11:17.488033863Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 31 01:11:17.491094 containerd[1680]: time="2025-10-31T01:11:17.491030141Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.544248899s" Oct 31 01:11:17.491094 containerd[1680]: time="2025-10-31T01:11:17.491048844Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 31 01:11:17.495693 containerd[1680]: time="2025-10-31T01:11:17.495178126Z" level=info msg="CreateContainer within sandbox \"85306c0651fe713eba45355fbc366bb832b3830b15ee966fabc5919234e64ee6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 01:11:17.508742 containerd[1680]: time="2025-10-31T01:11:17.506714920Z" level=info msg="Container a51893e3f6838deaf1f67a73e83f46e7212b42218b1c625c719187de91394fcf: CDI devices from CRI Config.CDIDevices: []" Oct 31 01:11:17.508538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4276654885.mount: Deactivated successfully. 
Oct 31 01:11:17.511589 containerd[1680]: time="2025-10-31T01:11:17.511557910Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:11:17.511980 containerd[1680]: time="2025-10-31T01:11:17.511922445Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:11:17.512230 containerd[1680]: time="2025-10-31T01:11:17.512219013Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:11:17.514460 containerd[1680]: time="2025-10-31T01:11:17.514445041Z" level=info msg="CreateContainer within sandbox \"85306c0651fe713eba45355fbc366bb832b3830b15ee966fabc5919234e64ee6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a51893e3f6838deaf1f67a73e83f46e7212b42218b1c625c719187de91394fcf\"" Oct 31 01:11:17.514900 containerd[1680]: time="2025-10-31T01:11:17.514792763Z" level=info msg="StartContainer for \"a51893e3f6838deaf1f67a73e83f46e7212b42218b1c625c719187de91394fcf\"" Oct 31 01:11:17.515522 containerd[1680]: time="2025-10-31T01:11:17.515490990Z" level=info msg="connecting to shim a51893e3f6838deaf1f67a73e83f46e7212b42218b1c625c719187de91394fcf" address="unix:///run/containerd/s/37d60a6e6c9934f222f1c8922f34c8260983efdb08a87842288d7ee62b3da86b" protocol=ttrpc version=3 Oct 31 01:11:17.534927 systemd[1]: Started cri-containerd-a51893e3f6838deaf1f67a73e83f46e7212b42218b1c625c719187de91394fcf.scope - libcontainer container a51893e3f6838deaf1f67a73e83f46e7212b42218b1c625c719187de91394fcf. Oct 31 01:11:17.552066 containerd[1680]: time="2025-10-31T01:11:17.552043954Z" level=info msg="StartContainer for \"a51893e3f6838deaf1f67a73e83f46e7212b42218b1c625c719187de91394fcf\" returns successfully" Oct 31 01:11:22.462785 sudo[2002]: pam_unix(sudo:session): session closed for user root Oct 31 01:11:22.464434 sshd[2001]: Connection closed by 139.178.89.65 port 33780 Oct 31 01:11:22.466128 sshd-session[1998]: pam_unix(sshd:session): session closed for user core Oct 31 01:11:22.468931 systemd[1]: sshd@6-139.178.70.100:22-139.178.89.65:33780.service: Deactivated successfully. Oct 31 01:11:22.474483 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 01:11:22.475750 systemd[1]: session-9.scope: Consumed 3.240s CPU time, 154.5M memory peak. Oct 31 01:11:22.476663 systemd-logind[1650]: Session 9 logged out. Waiting for processes to exit. Oct 31 01:11:22.479234 systemd-logind[1650]: Removed session 9. Oct 31 01:11:26.764149 kubelet[2970]: I1031 01:11:26.764108 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-hfrpk" podStartSLOduration=10.217834362 podStartE2EDuration="12.763491088s" podCreationTimestamp="2025-10-31 01:11:14 +0000 UTC" firstStartedPulling="2025-10-31 01:11:14.946486012 +0000 UTC m=+6.625563756" lastFinishedPulling="2025-10-31 01:11:17.492142738 +0000 UTC m=+9.171220482" observedRunningTime="2025-10-31 01:11:18.540297574 +0000 UTC m=+10.219375330" watchObservedRunningTime="2025-10-31 01:11:26.763491088 +0000 UTC m=+18.442568840" Oct 31 01:11:26.772906 systemd[1]: Created slice kubepods-besteffort-pod11297e08_8a6c_436d_9d6f_00d490fbbf01.slice - libcontainer container kubepods-besteffort-pod11297e08_8a6c_436d_9d6f_00d490fbbf01.slice. 
Oct 31 01:11:26.781742 kubelet[2970]: I1031 01:11:26.781692 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/11297e08-8a6c-436d-9d6f-00d490fbbf01-typha-certs\") pod \"calico-typha-774f99658b-df988\" (UID: \"11297e08-8a6c-436d-9d6f-00d490fbbf01\") " pod="calico-system/calico-typha-774f99658b-df988" Oct 31 01:11:26.781868 kubelet[2970]: I1031 01:11:26.781718 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11297e08-8a6c-436d-9d6f-00d490fbbf01-tigera-ca-bundle\") pod \"calico-typha-774f99658b-df988\" (UID: \"11297e08-8a6c-436d-9d6f-00d490fbbf01\") " pod="calico-system/calico-typha-774f99658b-df988" Oct 31 01:11:26.781868 kubelet[2970]: I1031 01:11:26.781851 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4bfh\" (UniqueName: \"kubernetes.io/projected/11297e08-8a6c-436d-9d6f-00d490fbbf01-kube-api-access-n4bfh\") pod \"calico-typha-774f99658b-df988\" (UID: \"11297e08-8a6c-436d-9d6f-00d490fbbf01\") " pod="calico-system/calico-typha-774f99658b-df988" Oct 31 01:11:27.020775 systemd[1]: Created slice kubepods-besteffort-podfee7de22_8924_4fc4_82de_bb1f0276e082.slice - libcontainer container kubepods-besteffort-podfee7de22_8924_4fc4_82de_bb1f0276e082.slice. Oct 31 01:11:27.077619 containerd[1680]: time="2025-10-31T01:11:27.077551027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-774f99658b-df988,Uid:11297e08-8a6c-436d-9d6f-00d490fbbf01,Namespace:calico-system,Attempt:0,}" Oct 31 01:11:27.083977 kubelet[2970]: I1031 01:11:27.083742 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fee7de22-8924-4fc4-82de-bb1f0276e082-flexvol-driver-host\") pod \"calico-node-hvqf6\" (UID: \"fee7de22-8924-4fc4-82de-bb1f0276e082\") " pod="calico-system/calico-node-hvqf6" Oct 31 01:11:27.083977 kubelet[2970]: I1031 01:11:27.083770 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fee7de22-8924-4fc4-82de-bb1f0276e082-lib-modules\") pod \"calico-node-hvqf6\" (UID: \"fee7de22-8924-4fc4-82de-bb1f0276e082\") " pod="calico-system/calico-node-hvqf6" Oct 31 01:11:27.083977 kubelet[2970]: I1031 01:11:27.083801 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fee7de22-8924-4fc4-82de-bb1f0276e082-policysync\") pod \"calico-node-hvqf6\" (UID: \"fee7de22-8924-4fc4-82de-bb1f0276e082\") " pod="calico-system/calico-node-hvqf6" Oct 31 01:11:27.083977 kubelet[2970]: I1031 01:11:27.083822 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fee7de22-8924-4fc4-82de-bb1f0276e082-cni-bin-dir\") pod \"calico-node-hvqf6\" (UID: \"fee7de22-8924-4fc4-82de-bb1f0276e082\") " pod="calico-system/calico-node-hvqf6" Oct 31 01:11:27.083977 kubelet[2970]: I1031 01:11:27.083836 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fee7de22-8924-4fc4-82de-bb1f0276e082-var-lib-calico\") pod \"calico-node-hvqf6\" (UID: 
\"fee7de22-8924-4fc4-82de-bb1f0276e082\") " pod="calico-system/calico-node-hvqf6" Oct 31 01:11:27.084130 kubelet[2970]: I1031 01:11:27.083846 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fee7de22-8924-4fc4-82de-bb1f0276e082-var-run-calico\") pod \"calico-node-hvqf6\" (UID: \"fee7de22-8924-4fc4-82de-bb1f0276e082\") " pod="calico-system/calico-node-hvqf6" Oct 31 01:11:27.084130 kubelet[2970]: I1031 01:11:27.083859 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fee7de22-8924-4fc4-82de-bb1f0276e082-tigera-ca-bundle\") pod \"calico-node-hvqf6\" (UID: \"fee7de22-8924-4fc4-82de-bb1f0276e082\") " pod="calico-system/calico-node-hvqf6" Oct 31 01:11:27.084130 kubelet[2970]: I1031 01:11:27.083871 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fee7de22-8924-4fc4-82de-bb1f0276e082-cni-log-dir\") pod \"calico-node-hvqf6\" (UID: \"fee7de22-8924-4fc4-82de-bb1f0276e082\") " pod="calico-system/calico-node-hvqf6" Oct 31 01:11:27.084130 kubelet[2970]: I1031 01:11:27.083885 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fee7de22-8924-4fc4-82de-bb1f0276e082-xtables-lock\") pod \"calico-node-hvqf6\" (UID: \"fee7de22-8924-4fc4-82de-bb1f0276e082\") " pod="calico-system/calico-node-hvqf6" Oct 31 01:11:27.084130 kubelet[2970]: I1031 01:11:27.083895 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhhzk\" (UniqueName: \"kubernetes.io/projected/fee7de22-8924-4fc4-82de-bb1f0276e082-kube-api-access-xhhzk\") pod \"calico-node-hvqf6\" (UID: \"fee7de22-8924-4fc4-82de-bb1f0276e082\") " pod="calico-system/calico-node-hvqf6" Oct 31 01:11:27.084243 kubelet[2970]: I1031 01:11:27.083910 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fee7de22-8924-4fc4-82de-bb1f0276e082-cni-net-dir\") pod \"calico-node-hvqf6\" (UID: \"fee7de22-8924-4fc4-82de-bb1f0276e082\") " pod="calico-system/calico-node-hvqf6" Oct 31 01:11:27.084243 kubelet[2970]: I1031 01:11:27.083922 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fee7de22-8924-4fc4-82de-bb1f0276e082-node-certs\") pod \"calico-node-hvqf6\" (UID: \"fee7de22-8924-4fc4-82de-bb1f0276e082\") " pod="calico-system/calico-node-hvqf6" Oct 31 01:11:27.090839 containerd[1680]: time="2025-10-31T01:11:27.090807985Z" level=info msg="connecting to shim e3ee6db3a850b931ae54d4922e749653cb98b5abe4eea35d49b7c2414b5d8d10" address="unix:///run/containerd/s/29f65b97cab3a3d303aeb0659915b00549a123a0b804dfa6f4810db0089cab8b" namespace=k8s.io protocol=ttrpc version=3 Oct 31 01:11:27.114929 systemd[1]: Started cri-containerd-e3ee6db3a850b931ae54d4922e749653cb98b5abe4eea35d49b7c2414b5d8d10.scope - libcontainer container e3ee6db3a850b931ae54d4922e749653cb98b5abe4eea35d49b7c2414b5d8d10. 
Oct 31 01:11:27.154785 containerd[1680]: time="2025-10-31T01:11:27.154760711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-774f99658b-df988,Uid:11297e08-8a6c-436d-9d6f-00d490fbbf01,Namespace:calico-system,Attempt:0,} returns sandbox id \"e3ee6db3a850b931ae54d4922e749653cb98b5abe4eea35d49b7c2414b5d8d10\"" Oct 31 01:11:27.155625 containerd[1680]: time="2025-10-31T01:11:27.155610366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 31 01:11:27.188391 kubelet[2970]: E1031 01:11:27.188351 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4j67c" podUID="c7606c94-65d4-44ae-9466-226a1af8c528" Oct 31 01:11:27.197154 kubelet[2970]: E1031 01:11:27.197136 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.197300 kubelet[2970]: W1031 01:11:27.197250 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.197300 kubelet[2970]: E1031 01:11:27.197273 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.200953 kubelet[2970]: E1031 01:11:27.200929 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.201240 kubelet[2970]: W1031 01:11:27.201048 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.201240 kubelet[2970]: E1031 01:11:27.201063 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.276669 kubelet[2970]: E1031 01:11:27.276566 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.276669 kubelet[2970]: W1031 01:11:27.276582 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.276669 kubelet[2970]: E1031 01:11:27.276600 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.277487 kubelet[2970]: E1031 01:11:27.277314 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.277487 kubelet[2970]: W1031 01:11:27.277323 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.277487 kubelet[2970]: E1031 01:11:27.277330 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:27.277660 kubelet[2970]: E1031 01:11:27.277616 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.277660 kubelet[2970]: W1031 01:11:27.277624 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.277660 kubelet[2970]: E1031 01:11:27.277631 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.277947 kubelet[2970]: E1031 01:11:27.277903 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.277947 kubelet[2970]: W1031 01:11:27.277912 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.277947 kubelet[2970]: E1031 01:11:27.277919 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.278196 kubelet[2970]: E1031 01:11:27.278158 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.278196 kubelet[2970]: W1031 01:11:27.278165 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.278196 kubelet[2970]: E1031 01:11:27.278173 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.278406 kubelet[2970]: E1031 01:11:27.278370 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.278406 kubelet[2970]: W1031 01:11:27.278377 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.278406 kubelet[2970]: E1031 01:11:27.278383 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.278624 kubelet[2970]: E1031 01:11:27.278586 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.278624 kubelet[2970]: W1031 01:11:27.278593 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.278624 kubelet[2970]: E1031 01:11:27.278600 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:27.278845 kubelet[2970]: E1031 01:11:27.278801 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.278845 kubelet[2970]: W1031 01:11:27.278808 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.278845 kubelet[2970]: E1031 01:11:27.278815 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.279098 kubelet[2970]: E1031 01:11:27.279061 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.279098 kubelet[2970]: W1031 01:11:27.279068 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.279098 kubelet[2970]: E1031 01:11:27.279075 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.279304 kubelet[2970]: E1031 01:11:27.279266 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.279304 kubelet[2970]: W1031 01:11:27.279273 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.279304 kubelet[2970]: E1031 01:11:27.279281 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.279512 kubelet[2970]: E1031 01:11:27.279470 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.279512 kubelet[2970]: W1031 01:11:27.279480 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.279512 kubelet[2970]: E1031 01:11:27.279488 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.279751 kubelet[2970]: E1031 01:11:27.279681 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.279751 kubelet[2970]: W1031 01:11:27.279689 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.280590 kubelet[2970]: E1031 01:11:27.279695 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:27.284129 kubelet[2970]: E1031 01:11:27.284117 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.284254 kubelet[2970]: W1031 01:11:27.284180 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.284254 kubelet[2970]: E1031 01:11:27.284193 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.284555 kubelet[2970]: E1031 01:11:27.284487 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.284555 kubelet[2970]: W1031 01:11:27.284496 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.284555 kubelet[2970]: E1031 01:11:27.284504 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.284738 kubelet[2970]: E1031 01:11:27.284695 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.284738 kubelet[2970]: W1031 01:11:27.284703 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.284738 kubelet[2970]: E1031 01:11:27.284710 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.284988 kubelet[2970]: E1031 01:11:27.284912 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.284988 kubelet[2970]: W1031 01:11:27.284921 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.284988 kubelet[2970]: E1031 01:11:27.284928 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.285176 kubelet[2970]: E1031 01:11:27.285115 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.285176 kubelet[2970]: W1031 01:11:27.285123 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.285176 kubelet[2970]: E1031 01:11:27.285130 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:27.285579 kubelet[2970]: E1031 01:11:27.285298 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.285579 kubelet[2970]: W1031 01:11:27.285306 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.285579 kubelet[2970]: E1031 01:11:27.285313 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.285908 kubelet[2970]: E1031 01:11:27.285840 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.285908 kubelet[2970]: W1031 01:11:27.285850 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.285908 kubelet[2970]: E1031 01:11:27.285858 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.286093 kubelet[2970]: E1031 01:11:27.286085 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.286356 kubelet[2970]: W1031 01:11:27.286134 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.286356 kubelet[2970]: E1031 01:11:27.286152 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.287264 kubelet[2970]: E1031 01:11:27.287256 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.287312 kubelet[2970]: W1031 01:11:27.287302 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.287418 kubelet[2970]: E1031 01:11:27.287352 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:27.287418 kubelet[2970]: I1031 01:11:27.287368 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7606c94-65d4-44ae-9466-226a1af8c528-kubelet-dir\") pod \"csi-node-driver-4j67c\" (UID: \"c7606c94-65d4-44ae-9466-226a1af8c528\") " pod="calico-system/csi-node-driver-4j67c" Oct 31 01:11:27.287596 kubelet[2970]: E1031 01:11:27.287520 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.287596 kubelet[2970]: W1031 01:11:27.287529 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.287596 kubelet[2970]: E1031 01:11:27.287536 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.287596 kubelet[2970]: I1031 01:11:27.287549 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c7606c94-65d4-44ae-9466-226a1af8c528-registration-dir\") pod \"csi-node-driver-4j67c\" (UID: \"c7606c94-65d4-44ae-9466-226a1af8c528\") " pod="calico-system/csi-node-driver-4j67c" Oct 31 01:11:27.287818 kubelet[2970]: E1031 01:11:27.287748 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.287818 kubelet[2970]: W1031 01:11:27.287756 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.287818 kubelet[2970]: E1031 01:11:27.287763 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.287818 kubelet[2970]: I1031 01:11:27.287773 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ll25\" (UniqueName: \"kubernetes.io/projected/c7606c94-65d4-44ae-9466-226a1af8c528-kube-api-access-8ll25\") pod \"csi-node-driver-4j67c\" (UID: \"c7606c94-65d4-44ae-9466-226a1af8c528\") " pod="calico-system/csi-node-driver-4j67c" Oct 31 01:11:27.288026 kubelet[2970]: E1031 01:11:27.287952 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.288026 kubelet[2970]: W1031 01:11:27.287959 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.288026 kubelet[2970]: E1031 01:11:27.287968 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:27.288026 kubelet[2970]: I1031 01:11:27.287978 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c7606c94-65d4-44ae-9466-226a1af8c528-socket-dir\") pod \"csi-node-driver-4j67c\" (UID: \"c7606c94-65d4-44ae-9466-226a1af8c528\") " pod="calico-system/csi-node-driver-4j67c" Oct 31 01:11:27.288246 kubelet[2970]: E1031 01:11:27.288155 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.288246 kubelet[2970]: W1031 01:11:27.288164 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.288246 kubelet[2970]: E1031 01:11:27.288171 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.288246 kubelet[2970]: I1031 01:11:27.288183 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c7606c94-65d4-44ae-9466-226a1af8c528-varrun\") pod \"csi-node-driver-4j67c\" (UID: \"c7606c94-65d4-44ae-9466-226a1af8c528\") " pod="calico-system/csi-node-driver-4j67c" Oct 31 01:11:27.288495 kubelet[2970]: E1031 01:11:27.288460 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.288560 kubelet[2970]: W1031 01:11:27.288549 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.288682 kubelet[2970]: E1031 01:11:27.288616 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.289753 kubelet[2970]: E1031 01:11:27.288819 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.289753 kubelet[2970]: W1031 01:11:27.288827 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.289753 kubelet[2970]: E1031 01:11:27.288835 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.296315 kubelet[2970]: E1031 01:11:27.289896 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.296315 kubelet[2970]: W1031 01:11:27.289903 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.296315 kubelet[2970]: E1031 01:11:27.289911 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:27.296315 kubelet[2970]: E1031 01:11:27.289995 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.296315 kubelet[2970]: W1031 01:11:27.290000 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.296315 kubelet[2970]: E1031 01:11:27.290006 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.296315 kubelet[2970]: E1031 01:11:27.290099 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.296315 kubelet[2970]: W1031 01:11:27.290106 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.296315 kubelet[2970]: E1031 01:11:27.290115 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.296315 kubelet[2970]: E1031 01:11:27.290192 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.296523 kubelet[2970]: W1031 01:11:27.290197 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.296523 kubelet[2970]: E1031 01:11:27.290205 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.296523 kubelet[2970]: E1031 01:11:27.290302 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.296523 kubelet[2970]: W1031 01:11:27.290307 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.296523 kubelet[2970]: E1031 01:11:27.290314 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.296523 kubelet[2970]: E1031 01:11:27.290392 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.296523 kubelet[2970]: W1031 01:11:27.290398 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.296523 kubelet[2970]: E1031 01:11:27.290404 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:27.296523 kubelet[2970]: E1031 01:11:27.290491 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.296523 kubelet[2970]: W1031 01:11:27.290496 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.296698 kubelet[2970]: E1031 01:11:27.290502 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.296698 kubelet[2970]: E1031 01:11:27.290578 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.296698 kubelet[2970]: W1031 01:11:27.290583 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.296698 kubelet[2970]: E1031 01:11:27.290588 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.330805 containerd[1680]: time="2025-10-31T01:11:27.330750617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hvqf6,Uid:fee7de22-8924-4fc4-82de-bb1f0276e082,Namespace:calico-system,Attempt:0,}" Oct 31 01:11:27.340755 containerd[1680]: time="2025-10-31T01:11:27.340682056Z" level=info msg="connecting to shim 7b1e420372c22b85933a0762385557dcebcfbdc937818b4adf40decbf5ab3316" address="unix:///run/containerd/s/bf391fff68e4380b253c781579cfb433a3a4367c6cb927199ce3330767743a42" namespace=k8s.io protocol=ttrpc version=3 Oct 31 01:11:27.359824 systemd[1]: Started cri-containerd-7b1e420372c22b85933a0762385557dcebcfbdc937818b4adf40decbf5ab3316.scope - libcontainer container 7b1e420372c22b85933a0762385557dcebcfbdc937818b4adf40decbf5ab3316. Oct 31 01:11:27.378168 containerd[1680]: time="2025-10-31T01:11:27.378142002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hvqf6,Uid:fee7de22-8924-4fc4-82de-bb1f0276e082,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b1e420372c22b85933a0762385557dcebcfbdc937818b4adf40decbf5ab3316\"" Oct 31 01:11:27.388560 kubelet[2970]: E1031 01:11:27.388540 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.388768 kubelet[2970]: W1031 01:11:27.388608 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.388768 kubelet[2970]: E1031 01:11:27.388624 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:27.389288 kubelet[2970]: E1031 01:11:27.389265 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.389288 kubelet[2970]: W1031 01:11:27.389272 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.389288 kubelet[2970]: E1031 01:11:27.389278 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.389955 kubelet[2970]: E1031 01:11:27.389947 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.390057 kubelet[2970]: W1031 01:11:27.390003 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.390057 kubelet[2970]: E1031 01:11:27.390013 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.390664 kubelet[2970]: E1031 01:11:27.390599 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.390664 kubelet[2970]: W1031 01:11:27.390606 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.390664 kubelet[2970]: E1031 01:11:27.390612 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.390862 kubelet[2970]: E1031 01:11:27.390817 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.390862 kubelet[2970]: W1031 01:11:27.390824 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.390862 kubelet[2970]: E1031 01:11:27.390829 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.391038 kubelet[2970]: E1031 01:11:27.391019 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.391038 kubelet[2970]: W1031 01:11:27.391026 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.391038 kubelet[2970]: E1031 01:11:27.391032 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:27.391209 kubelet[2970]: E1031 01:11:27.391204 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.391264 kubelet[2970]: W1031 01:11:27.391241 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.391264 kubelet[2970]: E1031 01:11:27.391248 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.391935 kubelet[2970]: E1031 01:11:27.391915 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.391935 kubelet[2970]: W1031 01:11:27.391921 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.391935 kubelet[2970]: E1031 01:11:27.391927 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.392605 kubelet[2970]: E1031 01:11:27.392545 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.392605 kubelet[2970]: W1031 01:11:27.392552 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.392605 kubelet[2970]: E1031 01:11:27.392558 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.393263 kubelet[2970]: E1031 01:11:27.393226 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.393263 kubelet[2970]: W1031 01:11:27.393233 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.393263 kubelet[2970]: E1031 01:11:27.393238 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.393848 kubelet[2970]: E1031 01:11:27.393840 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.393908 kubelet[2970]: W1031 01:11:27.393890 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.393908 kubelet[2970]: E1031 01:11:27.393901 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:27.394092 kubelet[2970]: E1031 01:11:27.394075 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.394092 kubelet[2970]: W1031 01:11:27.394081 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.394092 kubelet[2970]: E1031 01:11:27.394086 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.394453 kubelet[2970]: E1031 01:11:27.394448 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.394502 kubelet[2970]: W1031 01:11:27.394484 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.394502 kubelet[2970]: E1031 01:11:27.394496 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.394680 kubelet[2970]: E1031 01:11:27.394661 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.394680 kubelet[2970]: W1031 01:11:27.394668 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.394680 kubelet[2970]: E1031 01:11:27.394674 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.395029 kubelet[2970]: E1031 01:11:27.395006 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.395029 kubelet[2970]: W1031 01:11:27.395013 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.395029 kubelet[2970]: E1031 01:11:27.395019 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.395666 kubelet[2970]: E1031 01:11:27.395646 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.395666 kubelet[2970]: W1031 01:11:27.395653 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.395666 kubelet[2970]: E1031 01:11:27.395659 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:27.395915 kubelet[2970]: E1031 01:11:27.395861 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.395915 kubelet[2970]: W1031 01:11:27.395867 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.395915 kubelet[2970]: E1031 01:11:27.395875 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.396270 kubelet[2970]: E1031 01:11:27.396252 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.396270 kubelet[2970]: W1031 01:11:27.396257 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.396270 kubelet[2970]: E1031 01:11:27.396263 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.396892 kubelet[2970]: E1031 01:11:27.396873 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.396892 kubelet[2970]: W1031 01:11:27.396880 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.396892 kubelet[2970]: E1031 01:11:27.396885 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.397496 kubelet[2970]: E1031 01:11:27.397474 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.397496 kubelet[2970]: W1031 01:11:27.397481 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.397496 kubelet[2970]: E1031 01:11:27.397488 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.398120 kubelet[2970]: E1031 01:11:27.398114 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.398164 kubelet[2970]: W1031 01:11:27.398150 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.398164 kubelet[2970]: E1031 01:11:27.398158 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:27.398761 kubelet[2970]: E1031 01:11:27.398431 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.398845 kubelet[2970]: W1031 01:11:27.398828 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.398869 kubelet[2970]: E1031 01:11:27.398845 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.398949 kubelet[2970]: E1031 01:11:27.398937 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.398949 kubelet[2970]: W1031 01:11:27.398945 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.399772 kubelet[2970]: E1031 01:11:27.398951 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.399849 kubelet[2970]: E1031 01:11:27.399820 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.399849 kubelet[2970]: W1031 01:11:27.399827 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.399849 kubelet[2970]: E1031 01:11:27.399833 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.400047 kubelet[2970]: E1031 01:11:27.400024 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.400047 kubelet[2970]: W1031 01:11:27.400030 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.400047 kubelet[2970]: E1031 01:11:27.400035 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:27.409266 kubelet[2970]: E1031 01:11:27.408785 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:27.409266 kubelet[2970]: W1031 01:11:27.408797 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:27.409266 kubelet[2970]: E1031 01:11:27.408809 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:28.902707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount492715376.mount: Deactivated successfully. Oct 31 01:11:29.339108 containerd[1680]: time="2025-10-31T01:11:29.338693635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:11:29.365806 containerd[1680]: time="2025-10-31T01:11:29.347967539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 31 01:11:29.365806 containerd[1680]: time="2025-10-31T01:11:29.348410985Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:11:29.391464 containerd[1680]: time="2025-10-31T01:11:29.364863129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.209056005s" Oct 31 01:11:29.391772 containerd[1680]: time="2025-10-31T01:11:29.391562080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 31 01:11:29.391772 containerd[1680]: time="2025-10-31T01:11:29.377098545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:11:29.396021 containerd[1680]: time="2025-10-31T01:11:29.395898864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 31 01:11:29.405312 containerd[1680]: time="2025-10-31T01:11:29.404645583Z" level=info msg="CreateContainer within sandbox \"e3ee6db3a850b931ae54d4922e749653cb98b5abe4eea35d49b7c2414b5d8d10\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 31 01:11:29.471716 containerd[1680]: time="2025-10-31T01:11:29.471690473Z" level=info msg="Container 3eedbc097f0cefcce354ab752e2407b11998c8620a91f2acbf9777464085e207: CDI devices from CRI Config.CDIDevices: []" Oct 31 01:11:29.472160 kubelet[2970]: E1031 01:11:29.472127 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4j67c" podUID="c7606c94-65d4-44ae-9466-226a1af8c528" Oct 31 01:11:29.488064 containerd[1680]: time="2025-10-31T01:11:29.488032506Z" level=info msg="CreateContainer within sandbox \"e3ee6db3a850b931ae54d4922e749653cb98b5abe4eea35d49b7c2414b5d8d10\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3eedbc097f0cefcce354ab752e2407b11998c8620a91f2acbf9777464085e207\"" Oct 31 01:11:29.488876 containerd[1680]: time="2025-10-31T01:11:29.488856400Z" level=info msg="StartContainer for \"3eedbc097f0cefcce354ab752e2407b11998c8620a91f2acbf9777464085e207\"" Oct 31 01:11:29.490242 containerd[1680]: time="2025-10-31T01:11:29.490118429Z" level=info msg="connecting to shim 3eedbc097f0cefcce354ab752e2407b11998c8620a91f2acbf9777464085e207" 
address="unix:///run/containerd/s/29f65b97cab3a3d303aeb0659915b00549a123a0b804dfa6f4810db0089cab8b" protocol=ttrpc version=3 Oct 31 01:11:29.514900 systemd[1]: Started cri-containerd-3eedbc097f0cefcce354ab752e2407b11998c8620a91f2acbf9777464085e207.scope - libcontainer container 3eedbc097f0cefcce354ab752e2407b11998c8620a91f2acbf9777464085e207. Oct 31 01:11:29.562535 containerd[1680]: time="2025-10-31T01:11:29.562512373Z" level=info msg="StartContainer for \"3eedbc097f0cefcce354ab752e2407b11998c8620a91f2acbf9777464085e207\" returns successfully" Oct 31 01:11:30.553311 kubelet[2970]: I1031 01:11:30.553276 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-774f99658b-df988" podStartSLOduration=2.316317839 podStartE2EDuration="4.553265408s" podCreationTimestamp="2025-10-31 01:11:26 +0000 UTC" firstStartedPulling="2025-10-31 01:11:27.155445139 +0000 UTC m=+18.834522883" lastFinishedPulling="2025-10-31 01:11:29.392392708 +0000 UTC m=+21.071470452" observedRunningTime="2025-10-31 01:11:30.552445219 +0000 UTC m=+22.231522972" watchObservedRunningTime="2025-10-31 01:11:30.553265408 +0000 UTC m=+22.232343152" Oct 31 01:11:30.615902 kubelet[2970]: E1031 01:11:30.615880 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.615902 kubelet[2970]: W1031 01:11:30.615898 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.616700 kubelet[2970]: E1031 01:11:30.616502 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.617141 kubelet[2970]: E1031 01:11:30.617132 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.617141 kubelet[2970]: W1031 01:11:30.617139 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.617191 kubelet[2970]: E1031 01:11:30.617145 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.617239 kubelet[2970]: E1031 01:11:30.617231 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.617239 kubelet[2970]: W1031 01:11:30.617237 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.617286 kubelet[2970]: E1031 01:11:30.617242 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:30.619759 kubelet[2970]: E1031 01:11:30.619748 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.619759 kubelet[2970]: W1031 01:11:30.619755 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.619819 kubelet[2970]: E1031 01:11:30.619761 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.619857 kubelet[2970]: E1031 01:11:30.619851 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.619857 kubelet[2970]: W1031 01:11:30.619856 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.619899 kubelet[2970]: E1031 01:11:30.619861 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.619937 kubelet[2970]: E1031 01:11:30.619929 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.619937 kubelet[2970]: W1031 01:11:30.619935 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.620039 kubelet[2970]: E1031 01:11:30.619939 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.620039 kubelet[2970]: E1031 01:11:30.620037 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.620084 kubelet[2970]: W1031 01:11:30.620041 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.620084 kubelet[2970]: E1031 01:11:30.620046 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.620127 kubelet[2970]: E1031 01:11:30.620114 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.620127 kubelet[2970]: W1031 01:11:30.620118 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.620127 kubelet[2970]: E1031 01:11:30.620122 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:30.620536 kubelet[2970]: E1031 01:11:30.620524 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.620536 kubelet[2970]: W1031 01:11:30.620531 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.620536 kubelet[2970]: E1031 01:11:30.620536 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.620686 kubelet[2970]: E1031 01:11:30.620609 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.620686 kubelet[2970]: W1031 01:11:30.620615 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.620686 kubelet[2970]: E1031 01:11:30.620619 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.620759 kubelet[2970]: E1031 01:11:30.620695 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.620759 kubelet[2970]: W1031 01:11:30.620699 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.620759 kubelet[2970]: E1031 01:11:30.620704 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.620980 kubelet[2970]: E1031 01:11:30.620814 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.620980 kubelet[2970]: W1031 01:11:30.620821 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.620980 kubelet[2970]: E1031 01:11:30.620826 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.621265 kubelet[2970]: E1031 01:11:30.621169 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.621265 kubelet[2970]: W1031 01:11:30.621173 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.621265 kubelet[2970]: E1031 01:11:30.621179 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:30.621467 kubelet[2970]: E1031 01:11:30.621453 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.621467 kubelet[2970]: W1031 01:11:30.621462 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.621565 kubelet[2970]: E1031 01:11:30.621468 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.621786 kubelet[2970]: E1031 01:11:30.621646 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.621786 kubelet[2970]: W1031 01:11:30.621651 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.621786 kubelet[2970]: E1031 01:11:30.621657 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.622306 kubelet[2970]: E1031 01:11:30.622169 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.622306 kubelet[2970]: W1031 01:11:30.622179 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.622306 kubelet[2970]: E1031 01:11:30.622188 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.622306 kubelet[2970]: E1031 01:11:30.622307 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.622396 kubelet[2970]: W1031 01:11:30.622312 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.622396 kubelet[2970]: E1031 01:11:30.622317 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.622432 kubelet[2970]: E1031 01:11:30.622416 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.622432 kubelet[2970]: W1031 01:11:30.622421 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.622432 kubelet[2970]: E1031 01:11:30.622425 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:30.622719 kubelet[2970]: E1031 01:11:30.622579 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.622719 kubelet[2970]: W1031 01:11:30.622587 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.622719 kubelet[2970]: E1031 01:11:30.622593 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.623733 kubelet[2970]: E1031 01:11:30.623030 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.623733 kubelet[2970]: W1031 01:11:30.623036 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.623733 kubelet[2970]: E1031 01:11:30.623041 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.623733 kubelet[2970]: E1031 01:11:30.623111 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.623733 kubelet[2970]: W1031 01:11:30.623115 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.623733 kubelet[2970]: E1031 01:11:30.623119 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.623733 kubelet[2970]: E1031 01:11:30.623212 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.623733 kubelet[2970]: W1031 01:11:30.623218 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.623733 kubelet[2970]: E1031 01:11:30.623225 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.624097 kubelet[2970]: E1031 01:11:30.624088 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.624157 kubelet[2970]: W1031 01:11:30.624150 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.624200 kubelet[2970]: E1031 01:11:30.624194 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:30.624556 kubelet[2970]: E1031 01:11:30.624489 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.624556 kubelet[2970]: W1031 01:11:30.624495 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.624556 kubelet[2970]: E1031 01:11:30.624501 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.624664 kubelet[2970]: E1031 01:11:30.624658 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.624715 kubelet[2970]: W1031 01:11:30.624707 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.624783 kubelet[2970]: E1031 01:11:30.624777 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.625271 kubelet[2970]: E1031 01:11:30.625040 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.625271 kubelet[2970]: W1031 01:11:30.625046 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.625271 kubelet[2970]: E1031 01:11:30.625052 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.625369 kubelet[2970]: E1031 01:11:30.625363 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.625409 kubelet[2970]: W1031 01:11:30.625404 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.625447 kubelet[2970]: E1031 01:11:30.625442 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.626032 kubelet[2970]: E1031 01:11:30.625801 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.626032 kubelet[2970]: W1031 01:11:30.625808 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.626032 kubelet[2970]: E1031 01:11:30.625813 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:30.626032 kubelet[2970]: E1031 01:11:30.625962 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.626032 kubelet[2970]: W1031 01:11:30.625968 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.626032 kubelet[2970]: E1031 01:11:30.625974 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.626152 kubelet[2970]: E1031 01:11:30.626076 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.626152 kubelet[2970]: W1031 01:11:30.626081 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.626152 kubelet[2970]: E1031 01:11:30.626085 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.626334 kubelet[2970]: E1031 01:11:30.626161 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.626334 kubelet[2970]: W1031 01:11:30.626165 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.626334 kubelet[2970]: E1031 01:11:30.626170 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.626390 kubelet[2970]: E1031 01:11:30.626386 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.626461 kubelet[2970]: W1031 01:11:30.626391 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.626461 kubelet[2970]: E1031 01:11:30.626395 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:11:30.626531 kubelet[2970]: E1031 01:11:30.626467 2970 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:11:30.626531 kubelet[2970]: W1031 01:11:30.626471 2970 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:11:30.626531 kubelet[2970]: E1031 01:11:30.626475 2970 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:11:30.770391 containerd[1680]: time="2025-10-31T01:11:30.770212796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:11:30.774782 containerd[1680]: time="2025-10-31T01:11:30.774755607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 31 01:11:30.781961 containerd[1680]: time="2025-10-31T01:11:30.781843559Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:11:30.787176 containerd[1680]: time="2025-10-31T01:11:30.787095537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:11:30.787732 containerd[1680]: time="2025-10-31T01:11:30.787417470Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.391501593s" Oct 31 01:11:30.787732 containerd[1680]: time="2025-10-31T01:11:30.787436860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 31 01:11:30.810992 containerd[1680]: time="2025-10-31T01:11:30.810849220Z" level=info msg="CreateContainer within sandbox \"7b1e420372c22b85933a0762385557dcebcfbdc937818b4adf40decbf5ab3316\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 31 01:11:30.831909 containerd[1680]: time="2025-10-31T01:11:30.831887493Z" level=info msg="Container 6fafb58b914abfa0ce8435202c582c5acc65df10348b64e7779177394158c0b9: CDI devices from CRI Config.CDIDevices: []" Oct 31 01:11:30.834021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2328111215.mount: Deactivated successfully. Oct 31 01:11:30.838242 containerd[1680]: time="2025-10-31T01:11:30.838089941Z" level=info msg="CreateContainer within sandbox \"7b1e420372c22b85933a0762385557dcebcfbdc937818b4adf40decbf5ab3316\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6fafb58b914abfa0ce8435202c582c5acc65df10348b64e7779177394158c0b9\"" Oct 31 01:11:30.839020 containerd[1680]: time="2025-10-31T01:11:30.838967797Z" level=info msg="StartContainer for \"6fafb58b914abfa0ce8435202c582c5acc65df10348b64e7779177394158c0b9\"" Oct 31 01:11:30.841140 containerd[1680]: time="2025-10-31T01:11:30.841125841Z" level=info msg="connecting to shim 6fafb58b914abfa0ce8435202c582c5acc65df10348b64e7779177394158c0b9" address="unix:///run/containerd/s/bf391fff68e4380b253c781579cfb433a3a4367c6cb927199ce3330767743a42" protocol=ttrpc version=3 Oct 31 01:11:30.864828 systemd[1]: Started cri-containerd-6fafb58b914abfa0ce8435202c582c5acc65df10348b64e7779177394158c0b9.scope - libcontainer container 6fafb58b914abfa0ce8435202c582c5acc65df10348b64e7779177394158c0b9. 
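The kubelet messages repeated above all come from the same FlexVolume plugin probe: the kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init, the executable is not on the node yet, so the call produces empty output, and decoding that empty output as JSON fails with "unexpected end of JSON input". As far as this log shows, the flexvol-driver container started in the entries just above is what is expected to install that driver. A minimal Go sketch (an illustration, not kubelet's actual driver-call code) reproducing the error string from empty driver output:

package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus approximates the JSON a FlexVolume driver is expected to print,
// for example {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	output := []byte("") // what the kubelet saw: the driver binary was missing, so its output was empty
	var st driverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		fmt.Println("error:", err) // prints: error: unexpected end of JSON input
	}
}

Once a driver binary exists at that path and prints a well-formed status object, this probe would be expected to stop logging the error.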
Oct 31 01:11:30.893086 containerd[1680]: time="2025-10-31T01:11:30.892605815Z" level=info msg="StartContainer for \"6fafb58b914abfa0ce8435202c582c5acc65df10348b64e7779177394158c0b9\" returns successfully" Oct 31 01:11:30.896009 systemd[1]: cri-containerd-6fafb58b914abfa0ce8435202c582c5acc65df10348b64e7779177394158c0b9.scope: Deactivated successfully. Oct 31 01:11:30.906128 containerd[1680]: time="2025-10-31T01:11:30.906093105Z" level=info msg="received exit event container_id:\"6fafb58b914abfa0ce8435202c582c5acc65df10348b64e7779177394158c0b9\" id:\"6fafb58b914abfa0ce8435202c582c5acc65df10348b64e7779177394158c0b9\" pid:3638 exited_at:{seconds:1761873090 nanos:897642016}" Oct 31 01:11:30.912998 containerd[1680]: time="2025-10-31T01:11:30.912978839Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fafb58b914abfa0ce8435202c582c5acc65df10348b64e7779177394158c0b9\" id:\"6fafb58b914abfa0ce8435202c582c5acc65df10348b64e7779177394158c0b9\" pid:3638 exited_at:{seconds:1761873090 nanos:897642016}" Oct 31 01:11:30.927854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fafb58b914abfa0ce8435202c582c5acc65df10348b64e7779177394158c0b9-rootfs.mount: Deactivated successfully. Oct 31 01:11:31.467184 kubelet[2970]: E1031 01:11:31.467157 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4j67c" podUID="c7606c94-65d4-44ae-9466-226a1af8c528" Oct 31 01:11:31.554515 kubelet[2970]: I1031 01:11:31.554487 2970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 01:11:31.563182 containerd[1680]: time="2025-10-31T01:11:31.563081168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 31 01:11:33.466915 kubelet[2970]: E1031 01:11:33.466880 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4j67c" podUID="c7606c94-65d4-44ae-9466-226a1af8c528" Oct 31 01:11:34.052016 containerd[1680]: time="2025-10-31T01:11:34.051994039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:11:34.052495 containerd[1680]: time="2025-10-31T01:11:34.052366569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 31 01:11:34.052639 containerd[1680]: time="2025-10-31T01:11:34.052624702Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:11:34.053591 containerd[1680]: time="2025-10-31T01:11:34.053574331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:11:34.054216 containerd[1680]: time="2025-10-31T01:11:34.053969597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.490810444s" Oct 31 01:11:34.054216 containerd[1680]: time="2025-10-31T01:11:34.053985518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 31 01:11:34.056196 containerd[1680]: time="2025-10-31T01:11:34.056183376Z" level=info msg="CreateContainer within sandbox \"7b1e420372c22b85933a0762385557dcebcfbdc937818b4adf40decbf5ab3316\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 01:11:34.061653 containerd[1680]: time="2025-10-31T01:11:34.061638972Z" level=info msg="Container 857d97020acae12af376a3159737d5fa12308073bff1d2e123ed1e605ce1acc1: CDI devices from CRI Config.CDIDevices: []" Oct 31 01:11:34.063744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount30741156.mount: Deactivated successfully. Oct 31 01:11:34.068399 containerd[1680]: time="2025-10-31T01:11:34.067282891Z" level=info msg="CreateContainer within sandbox \"7b1e420372c22b85933a0762385557dcebcfbdc937818b4adf40decbf5ab3316\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"857d97020acae12af376a3159737d5fa12308073bff1d2e123ed1e605ce1acc1\"" Oct 31 01:11:34.068399 containerd[1680]: time="2025-10-31T01:11:34.067694955Z" level=info msg="StartContainer for \"857d97020acae12af376a3159737d5fa12308073bff1d2e123ed1e605ce1acc1\"" Oct 31 01:11:34.068816 containerd[1680]: time="2025-10-31T01:11:34.068759777Z" level=info msg="connecting to shim 857d97020acae12af376a3159737d5fa12308073bff1d2e123ed1e605ce1acc1" address="unix:///run/containerd/s/bf391fff68e4380b253c781579cfb433a3a4367c6cb927199ce3330767743a42" protocol=ttrpc version=3 Oct 31 01:11:34.083811 systemd[1]: Started cri-containerd-857d97020acae12af376a3159737d5fa12308073bff1d2e123ed1e605ce1acc1.scope - libcontainer container 857d97020acae12af376a3159737d5fa12308073bff1d2e123ed1e605ce1acc1. Oct 31 01:11:34.108385 containerd[1680]: time="2025-10-31T01:11:34.108260156Z" level=info msg="StartContainer for \"857d97020acae12af376a3159737d5fa12308073bff1d2e123ed1e605ce1acc1\" returns successfully" Oct 31 01:11:35.360448 systemd[1]: cri-containerd-857d97020acae12af376a3159737d5fa12308073bff1d2e123ed1e605ce1acc1.scope: Deactivated successfully. Oct 31 01:11:35.361522 systemd[1]: cri-containerd-857d97020acae12af376a3159737d5fa12308073bff1d2e123ed1e605ce1acc1.scope: Consumed 289ms CPU time, 155.4M memory peak, 12K read from disk, 171.3M written to disk. Oct 31 01:11:35.369898 containerd[1680]: time="2025-10-31T01:11:35.369773329Z" level=info msg="received exit event container_id:\"857d97020acae12af376a3159737d5fa12308073bff1d2e123ed1e605ce1acc1\" id:\"857d97020acae12af376a3159737d5fa12308073bff1d2e123ed1e605ce1acc1\" pid:3693 exited_at:{seconds:1761873095 nanos:369245443}" Oct 31 01:11:35.369898 containerd[1680]: time="2025-10-31T01:11:35.369882403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"857d97020acae12af376a3159737d5fa12308073bff1d2e123ed1e605ce1acc1\" id:\"857d97020acae12af376a3159737d5fa12308073bff1d2e123ed1e605ce1acc1\" pid:3693 exited_at:{seconds:1761873095 nanos:369245443}" Oct 31 01:11:35.393708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-857d97020acae12af376a3159737d5fa12308073bff1d2e123ed1e605ce1acc1-rootfs.mount: Deactivated successfully. 
Oct 31 01:11:35.433102 kubelet[2970]: I1031 01:11:35.433083 2970 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 31 01:11:35.473405 systemd[1]: Created slice kubepods-burstable-podab451c17_5e1d_49f8_82b8_eb7dedc414c7.slice - libcontainer container kubepods-burstable-podab451c17_5e1d_49f8_82b8_eb7dedc414c7.slice. Oct 31 01:11:35.486913 systemd[1]: Created slice kubepods-burstable-podd59878ec_d6a3_4ecc_a664_a6bb44484f80.slice - libcontainer container kubepods-burstable-podd59878ec_d6a3_4ecc_a664_a6bb44484f80.slice. Oct 31 01:11:35.491135 systemd[1]: Created slice kubepods-besteffort-pode92d6b8b_bd48_432b_a17d_5d635d6fb001.slice - libcontainer container kubepods-besteffort-pode92d6b8b_bd48_432b_a17d_5d635d6fb001.slice. Oct 31 01:11:35.496699 systemd[1]: Created slice kubepods-besteffort-pod4d287e78_d822_498f_92dc_e6aaa22a1cfb.slice - libcontainer container kubepods-besteffort-pod4d287e78_d822_498f_92dc_e6aaa22a1cfb.slice. Oct 31 01:11:35.503960 systemd[1]: Created slice kubepods-besteffort-pod5298c82b_bc3f_4f82_8aba_c9069839de1b.slice - libcontainer container kubepods-besteffort-pod5298c82b_bc3f_4f82_8aba_c9069839de1b.slice. Oct 31 01:11:35.506732 systemd[1]: Created slice kubepods-besteffort-pod6ebd7d48_8bc4_4814_b512_88ba6c138a34.slice - libcontainer container kubepods-besteffort-pod6ebd7d48_8bc4_4814_b512_88ba6c138a34.slice. Oct 31 01:11:35.511973 systemd[1]: Created slice kubepods-besteffort-podc7606c94_65d4_44ae_9466_226a1af8c528.slice - libcontainer container kubepods-besteffort-podc7606c94_65d4_44ae_9466_226a1af8c528.slice. Oct 31 01:11:35.517230 systemd[1]: Created slice kubepods-besteffort-pod014b66aa_7ac9_43b5_8a19_b4ecf0978b6c.slice - libcontainer container kubepods-besteffort-pod014b66aa_7ac9_43b5_8a19_b4ecf0978b6c.slice. Oct 31 01:11:35.523663 systemd[1]: Created slice kubepods-besteffort-pod65c6a159_851f_4aa0_86f0_ed319b59746c.slice - libcontainer container kubepods-besteffort-pod65c6a159_851f_4aa0_86f0_ed319b59746c.slice. 
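The slice names created above encode the pod QoS class and the pod UID, with dashes in the UID replaced by underscores for systemd unit escaping. A hedged Go sketch, derived only from the names visible in this log (not from any kubelet API), that rebuilds one of those slice names:

package main

import (
	"fmt"
	"strings"
)

// sliceName follows the pattern the entries above suggest:
// "kubepods-<qos>-pod<uid with '-' replaced by '_'>.slice".
func sliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID taken from the coredns-66bc5c9577-xkvt5 volume entries later in this log
	fmt.Println(sliceName("burstable", "ab451c17-5e1d-49f8-82b8-eb7dedc414c7"))
	// kubepods-burstable-podab451c17_5e1d_49f8_82b8_eb7dedc414c7.slice
}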
Oct 31 01:11:35.534650 containerd[1680]: time="2025-10-31T01:11:35.534349283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4j67c,Uid:c7606c94-65d4-44ae-9466-226a1af8c528,Namespace:calico-system,Attempt:0,}" Oct 31 01:11:35.558263 kubelet[2970]: I1031 01:11:35.558243 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkrws\" (UniqueName: \"kubernetes.io/projected/014b66aa-7ac9-43b5-8a19-b4ecf0978b6c-kube-api-access-gkrws\") pod \"calico-apiserver-6559f565b6-9xwpc\" (UID: \"014b66aa-7ac9-43b5-8a19-b4ecf0978b6c\") " pod="calico-apiserver/calico-apiserver-6559f565b6-9xwpc" Oct 31 01:11:35.558363 kubelet[2970]: I1031 01:11:35.558356 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgzwd\" (UniqueName: \"kubernetes.io/projected/e92d6b8b-bd48-432b-a17d-5d635d6fb001-kube-api-access-lgzwd\") pod \"whisker-5b8bc5bbf7-zbbzw\" (UID: \"e92d6b8b-bd48-432b-a17d-5d635d6fb001\") " pod="calico-system/whisker-5b8bc5bbf7-zbbzw" Oct 31 01:11:35.558431 kubelet[2970]: I1031 01:11:35.558423 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65c6a159-851f-4aa0-86f0-ed319b59746c-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-4m4m9\" (UID: \"65c6a159-851f-4aa0-86f0-ed319b59746c\") " pod="calico-system/goldmane-7c778bb748-4m4m9" Oct 31 01:11:35.560790 kubelet[2970]: I1031 01:11:35.560688 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw4d5\" (UniqueName: \"kubernetes.io/projected/ab451c17-5e1d-49f8-82b8-eb7dedc414c7-kube-api-access-hw4d5\") pod \"coredns-66bc5c9577-xkvt5\" (UID: \"ab451c17-5e1d-49f8-82b8-eb7dedc414c7\") " pod="kube-system/coredns-66bc5c9577-xkvt5" Oct 31 01:11:35.560790 kubelet[2970]: I1031 01:11:35.560707 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d59878ec-d6a3-4ecc-a664-a6bb44484f80-config-volume\") pod \"coredns-66bc5c9577-98l5j\" (UID: \"d59878ec-d6a3-4ecc-a664-a6bb44484f80\") " pod="kube-system/coredns-66bc5c9577-98l5j" Oct 31 01:11:35.560790 kubelet[2970]: I1031 01:11:35.560719 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl9ng\" (UniqueName: \"kubernetes.io/projected/d59878ec-d6a3-4ecc-a664-a6bb44484f80-kube-api-access-cl9ng\") pod \"coredns-66bc5c9577-98l5j\" (UID: \"d59878ec-d6a3-4ecc-a664-a6bb44484f80\") " pod="kube-system/coredns-66bc5c9577-98l5j" Oct 31 01:11:35.560790 kubelet[2970]: I1031 01:11:35.560750 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvnj5\" (UniqueName: \"kubernetes.io/projected/4d287e78-d822-498f-92dc-e6aaa22a1cfb-kube-api-access-vvnj5\") pod \"calico-apiserver-6559f565b6-lxd8t\" (UID: \"4d287e78-d822-498f-92dc-e6aaa22a1cfb\") " pod="calico-apiserver/calico-apiserver-6559f565b6-lxd8t" Oct 31 01:11:35.560790 kubelet[2970]: I1031 01:11:35.560770 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/65c6a159-851f-4aa0-86f0-ed319b59746c-goldmane-key-pair\") pod \"goldmane-7c778bb748-4m4m9\" (UID: \"65c6a159-851f-4aa0-86f0-ed319b59746c\") " 
pod="calico-system/goldmane-7c778bb748-4m4m9" Oct 31 01:11:35.561136 kubelet[2970]: I1031 01:11:35.560855 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q9vq\" (UniqueName: \"kubernetes.io/projected/65c6a159-851f-4aa0-86f0-ed319b59746c-kube-api-access-2q9vq\") pod \"goldmane-7c778bb748-4m4m9\" (UID: \"65c6a159-851f-4aa0-86f0-ed319b59746c\") " pod="calico-system/goldmane-7c778bb748-4m4m9" Oct 31 01:11:35.561136 kubelet[2970]: I1031 01:11:35.560881 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c6a159-851f-4aa0-86f0-ed319b59746c-config\") pod \"goldmane-7c778bb748-4m4m9\" (UID: \"65c6a159-851f-4aa0-86f0-ed319b59746c\") " pod="calico-system/goldmane-7c778bb748-4m4m9" Oct 31 01:11:35.561136 kubelet[2970]: I1031 01:11:35.560894 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6ebd7d48-8bc4-4814-b512-88ba6c138a34-calico-apiserver-certs\") pod \"calico-apiserver-5756f8ffdf-n4khn\" (UID: \"6ebd7d48-8bc4-4814-b512-88ba6c138a34\") " pod="calico-apiserver/calico-apiserver-5756f8ffdf-n4khn" Oct 31 01:11:35.561136 kubelet[2970]: I1031 01:11:35.560905 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab451c17-5e1d-49f8-82b8-eb7dedc414c7-config-volume\") pod \"coredns-66bc5c9577-xkvt5\" (UID: \"ab451c17-5e1d-49f8-82b8-eb7dedc414c7\") " pod="kube-system/coredns-66bc5c9577-xkvt5" Oct 31 01:11:35.561136 kubelet[2970]: I1031 01:11:35.560934 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4d287e78-d822-498f-92dc-e6aaa22a1cfb-calico-apiserver-certs\") pod \"calico-apiserver-6559f565b6-lxd8t\" (UID: \"4d287e78-d822-498f-92dc-e6aaa22a1cfb\") " pod="calico-apiserver/calico-apiserver-6559f565b6-lxd8t" Oct 31 01:11:35.561250 kubelet[2970]: I1031 01:11:35.560947 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5298c82b-bc3f-4f82-8aba-c9069839de1b-tigera-ca-bundle\") pod \"calico-kube-controllers-577dc64884-7r78b\" (UID: \"5298c82b-bc3f-4f82-8aba-c9069839de1b\") " pod="calico-system/calico-kube-controllers-577dc64884-7r78b" Oct 31 01:11:35.561250 kubelet[2970]: I1031 01:11:35.560961 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/014b66aa-7ac9-43b5-8a19-b4ecf0978b6c-calico-apiserver-certs\") pod \"calico-apiserver-6559f565b6-9xwpc\" (UID: \"014b66aa-7ac9-43b5-8a19-b4ecf0978b6c\") " pod="calico-apiserver/calico-apiserver-6559f565b6-9xwpc" Oct 31 01:11:35.561250 kubelet[2970]: I1031 01:11:35.560972 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e92d6b8b-bd48-432b-a17d-5d635d6fb001-whisker-ca-bundle\") pod \"whisker-5b8bc5bbf7-zbbzw\" (UID: \"e92d6b8b-bd48-432b-a17d-5d635d6fb001\") " pod="calico-system/whisker-5b8bc5bbf7-zbbzw" Oct 31 01:11:35.561250 kubelet[2970]: I1031 01:11:35.560980 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-q4ldp\" (UniqueName: \"kubernetes.io/projected/5298c82b-bc3f-4f82-8aba-c9069839de1b-kube-api-access-q4ldp\") pod \"calico-kube-controllers-577dc64884-7r78b\" (UID: \"5298c82b-bc3f-4f82-8aba-c9069839de1b\") " pod="calico-system/calico-kube-controllers-577dc64884-7r78b" Oct 31 01:11:35.561250 kubelet[2970]: I1031 01:11:35.560999 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq9tt\" (UniqueName: \"kubernetes.io/projected/6ebd7d48-8bc4-4814-b512-88ba6c138a34-kube-api-access-kq9tt\") pod \"calico-apiserver-5756f8ffdf-n4khn\" (UID: \"6ebd7d48-8bc4-4814-b512-88ba6c138a34\") " pod="calico-apiserver/calico-apiserver-5756f8ffdf-n4khn" Oct 31 01:11:35.561366 kubelet[2970]: I1031 01:11:35.561010 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e92d6b8b-bd48-432b-a17d-5d635d6fb001-whisker-backend-key-pair\") pod \"whisker-5b8bc5bbf7-zbbzw\" (UID: \"e92d6b8b-bd48-432b-a17d-5d635d6fb001\") " pod="calico-system/whisker-5b8bc5bbf7-zbbzw" Oct 31 01:11:35.622091 containerd[1680]: time="2025-10-31T01:11:35.620781592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 01:11:35.810618 containerd[1680]: time="2025-10-31T01:11:35.810443061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-98l5j,Uid:d59878ec-d6a3-4ecc-a664-a6bb44484f80,Namespace:kube-system,Attempt:0,}" Oct 31 01:11:35.810851 containerd[1680]: time="2025-10-31T01:11:35.810839552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6559f565b6-lxd8t,Uid:4d287e78-d822-498f-92dc-e6aaa22a1cfb,Namespace:calico-apiserver,Attempt:0,}" Oct 31 01:11:35.811643 containerd[1680]: time="2025-10-31T01:11:35.810918177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xkvt5,Uid:ab451c17-5e1d-49f8-82b8-eb7dedc414c7,Namespace:kube-system,Attempt:0,}" Oct 31 01:11:35.811643 containerd[1680]: time="2025-10-31T01:11:35.810945224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-577dc64884-7r78b,Uid:5298c82b-bc3f-4f82-8aba-c9069839de1b,Namespace:calico-system,Attempt:0,}" Oct 31 01:11:35.811643 containerd[1680]: time="2025-10-31T01:11:35.811547207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5756f8ffdf-n4khn,Uid:6ebd7d48-8bc4-4814-b512-88ba6c138a34,Namespace:calico-apiserver,Attempt:0,}" Oct 31 01:11:35.811965 containerd[1680]: time="2025-10-31T01:11:35.811951267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b8bc5bbf7-zbbzw,Uid:e92d6b8b-bd48-432b-a17d-5d635d6fb001,Namespace:calico-system,Attempt:0,}" Oct 31 01:11:35.826840 containerd[1680]: time="2025-10-31T01:11:35.826518261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6559f565b6-9xwpc,Uid:014b66aa-7ac9-43b5-8a19-b4ecf0978b6c,Namespace:calico-apiserver,Attempt:0,}" Oct 31 01:11:35.841600 containerd[1680]: time="2025-10-31T01:11:35.841578938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4m4m9,Uid:65c6a159-851f-4aa0-86f0-ed319b59746c,Namespace:calico-system,Attempt:0,}" Oct 31 01:11:35.887187 containerd[1680]: time="2025-10-31T01:11:35.886987694Z" level=error msg="Failed to destroy network for sandbox \"66e2c1575b6b20783eb30cc3dfa8649d770c72de2e925f4ec9837a2e1be21285\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:35.889453 containerd[1680]: time="2025-10-31T01:11:35.889402208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4j67c,Uid:c7606c94-65d4-44ae-9466-226a1af8c528,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"66e2c1575b6b20783eb30cc3dfa8649d770c72de2e925f4ec9837a2e1be21285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:35.900569 kubelet[2970]: E1031 01:11:35.900516 2970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66e2c1575b6b20783eb30cc3dfa8649d770c72de2e925f4ec9837a2e1be21285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:35.900569 kubelet[2970]: E1031 01:11:35.900559 2970 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66e2c1575b6b20783eb30cc3dfa8649d770c72de2e925f4ec9837a2e1be21285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4j67c" Oct 31 01:11:35.900569 kubelet[2970]: E1031 01:11:35.900577 2970 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66e2c1575b6b20783eb30cc3dfa8649d770c72de2e925f4ec9837a2e1be21285\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4j67c" Oct 31 01:11:35.904683 kubelet[2970]: E1031 01:11:35.904606 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4j67c_calico-system(c7606c94-65d4-44ae-9466-226a1af8c528)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4j67c_calico-system(c7606c94-65d4-44ae-9466-226a1af8c528)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66e2c1575b6b20783eb30cc3dfa8649d770c72de2e925f4ec9837a2e1be21285\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4j67c" podUID="c7606c94-65d4-44ae-9466-226a1af8c528" Oct 31 01:11:35.957890 containerd[1680]: time="2025-10-31T01:11:35.957857296Z" level=error msg="Failed to destroy network for sandbox \"319b909cf3b6be632b299ed26eac0e8df699e42e96eec272174a240723267d7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:35.958653 containerd[1680]: time="2025-10-31T01:11:35.958639252Z" level=error msg="Failed to destroy network for sandbox \"eab4a4957a2195c324f7c889562fc391d8f860e636442b91ad404220464f4bfa\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:35.960836 containerd[1680]: time="2025-10-31T01:11:35.960818444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6559f565b6-lxd8t,Uid:4d287e78-d822-498f-92dc-e6aaa22a1cfb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"319b909cf3b6be632b299ed26eac0e8df699e42e96eec272174a240723267d7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:35.962836 kubelet[2970]: E1031 01:11:35.962813 2970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"319b909cf3b6be632b299ed26eac0e8df699e42e96eec272174a240723267d7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:35.962895 containerd[1680]: time="2025-10-31T01:11:35.962827652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-577dc64884-7r78b,Uid:5298c82b-bc3f-4f82-8aba-c9069839de1b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab4a4957a2195c324f7c889562fc391d8f860e636442b91ad404220464f4bfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:35.962997 kubelet[2970]: E1031 01:11:35.962987 2970 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"319b909cf3b6be632b299ed26eac0e8df699e42e96eec272174a240723267d7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6559f565b6-lxd8t" Oct 31 01:11:35.963065 kubelet[2970]: E1031 01:11:35.963051 2970 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"319b909cf3b6be632b299ed26eac0e8df699e42e96eec272174a240723267d7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6559f565b6-lxd8t" Oct 31 01:11:35.963366 kubelet[2970]: E1031 01:11:35.962908 2970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab4a4957a2195c324f7c889562fc391d8f860e636442b91ad404220464f4bfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:35.963464 kubelet[2970]: E1031 01:11:35.963412 2970 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab4a4957a2195c324f7c889562fc391d8f860e636442b91ad404220464f4bfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-577dc64884-7r78b" Oct 31 01:11:35.963464 kubelet[2970]: E1031 01:11:35.963427 2970 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab4a4957a2195c324f7c889562fc391d8f860e636442b91ad404220464f4bfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-577dc64884-7r78b" Oct 31 01:11:35.963464 kubelet[2970]: E1031 01:11:35.963445 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-577dc64884-7r78b_calico-system(5298c82b-bc3f-4f82-8aba-c9069839de1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-577dc64884-7r78b_calico-system(5298c82b-bc3f-4f82-8aba-c9069839de1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eab4a4957a2195c324f7c889562fc391d8f860e636442b91ad404220464f4bfa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-577dc64884-7r78b" podUID="5298c82b-bc3f-4f82-8aba-c9069839de1b" Oct 31 01:11:35.964313 kubelet[2970]: E1031 01:11:35.963344 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6559f565b6-lxd8t_calico-apiserver(4d287e78-d822-498f-92dc-e6aaa22a1cfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6559f565b6-lxd8t_calico-apiserver(4d287e78-d822-498f-92dc-e6aaa22a1cfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"319b909cf3b6be632b299ed26eac0e8df699e42e96eec272174a240723267d7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6559f565b6-lxd8t" podUID="4d287e78-d822-498f-92dc-e6aaa22a1cfb" Oct 31 01:11:35.989082 containerd[1680]: time="2025-10-31T01:11:35.989043712Z" level=error msg="Failed to destroy network for sandbox \"c8cbad4718bd5db628d2e42b8a21bf34f28913a286923b203a913cbc462e3e21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:35.990163 containerd[1680]: time="2025-10-31T01:11:35.990140281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5756f8ffdf-n4khn,Uid:6ebd7d48-8bc4-4814-b512-88ba6c138a34,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8cbad4718bd5db628d2e42b8a21bf34f28913a286923b203a913cbc462e3e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:35.990443 kubelet[2970]: E1031 01:11:35.990406 2970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c8cbad4718bd5db628d2e42b8a21bf34f28913a286923b203a913cbc462e3e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:35.990728 kubelet[2970]: E1031 01:11:35.990614 2970 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8cbad4718bd5db628d2e42b8a21bf34f28913a286923b203a913cbc462e3e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5756f8ffdf-n4khn" Oct 31 01:11:35.990728 kubelet[2970]: E1031 01:11:35.990632 2970 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8cbad4718bd5db628d2e42b8a21bf34f28913a286923b203a913cbc462e3e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5756f8ffdf-n4khn" Oct 31 01:11:35.990728 kubelet[2970]: E1031 01:11:35.990688 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5756f8ffdf-n4khn_calico-apiserver(6ebd7d48-8bc4-4814-b512-88ba6c138a34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5756f8ffdf-n4khn_calico-apiserver(6ebd7d48-8bc4-4814-b512-88ba6c138a34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8cbad4718bd5db628d2e42b8a21bf34f28913a286923b203a913cbc462e3e21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5756f8ffdf-n4khn" podUID="6ebd7d48-8bc4-4814-b512-88ba6c138a34" Oct 31 01:11:36.000765 containerd[1680]: time="2025-10-31T01:11:36.000668558Z" level=error msg="Failed to destroy network for sandbox \"bfb94d297879faf6104bd4b8f6ef1c37505f1438cbbf369db0c4bc0275a7f924\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:36.001334 containerd[1680]: time="2025-10-31T01:11:36.001278947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b8bc5bbf7-zbbzw,Uid:e92d6b8b-bd48-432b-a17d-5d635d6fb001,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfb94d297879faf6104bd4b8f6ef1c37505f1438cbbf369db0c4bc0275a7f924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:36.001633 kubelet[2970]: E1031 01:11:36.001486 2970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfb94d297879faf6104bd4b8f6ef1c37505f1438cbbf369db0c4bc0275a7f924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:36.001633 kubelet[2970]: E1031 
01:11:36.001524 2970 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfb94d297879faf6104bd4b8f6ef1c37505f1438cbbf369db0c4bc0275a7f924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b8bc5bbf7-zbbzw" Oct 31 01:11:36.001633 kubelet[2970]: E1031 01:11:36.001538 2970 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfb94d297879faf6104bd4b8f6ef1c37505f1438cbbf369db0c4bc0275a7f924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b8bc5bbf7-zbbzw" Oct 31 01:11:36.001914 kubelet[2970]: E1031 01:11:36.001761 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5b8bc5bbf7-zbbzw_calico-system(e92d6b8b-bd48-432b-a17d-5d635d6fb001)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5b8bc5bbf7-zbbzw_calico-system(e92d6b8b-bd48-432b-a17d-5d635d6fb001)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfb94d297879faf6104bd4b8f6ef1c37505f1438cbbf369db0c4bc0275a7f924\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b8bc5bbf7-zbbzw" podUID="e92d6b8b-bd48-432b-a17d-5d635d6fb001" Oct 31 01:11:36.002217 containerd[1680]: time="2025-10-31T01:11:36.002136748Z" level=error msg="Failed to destroy network for sandbox \"39adf043167e8dc5b94b3e9243845edcb31ca3bdf5783d7654deab49a49933fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:36.002892 containerd[1680]: time="2025-10-31T01:11:36.002510327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-98l5j,Uid:d59878ec-d6a3-4ecc-a664-a6bb44484f80,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39adf043167e8dc5b94b3e9243845edcb31ca3bdf5783d7654deab49a49933fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:36.002941 kubelet[2970]: E1031 01:11:36.002779 2970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39adf043167e8dc5b94b3e9243845edcb31ca3bdf5783d7654deab49a49933fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:36.002941 kubelet[2970]: E1031 01:11:36.002802 2970 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39adf043167e8dc5b94b3e9243845edcb31ca3bdf5783d7654deab49a49933fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-98l5j" Oct 31 01:11:36.002941 kubelet[2970]: E1031 01:11:36.002811 2970 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39adf043167e8dc5b94b3e9243845edcb31ca3bdf5783d7654deab49a49933fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-98l5j" Oct 31 01:11:36.003475 kubelet[2970]: E1031 01:11:36.002839 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-98l5j_kube-system(d59878ec-d6a3-4ecc-a664-a6bb44484f80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-98l5j_kube-system(d59878ec-d6a3-4ecc-a664-a6bb44484f80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39adf043167e8dc5b94b3e9243845edcb31ca3bdf5783d7654deab49a49933fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-98l5j" podUID="d59878ec-d6a3-4ecc-a664-a6bb44484f80" Oct 31 01:11:36.009831 containerd[1680]: time="2025-10-31T01:11:36.009753744Z" level=error msg="Failed to destroy network for sandbox \"ed94f14437b99402b6192049d9a8b52b34b1fab764e0e76b460d57da5a6f7962\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:36.010794 containerd[1680]: time="2025-10-31T01:11:36.010714236Z" level=error msg="Failed to destroy network for sandbox \"731e3f53f624f05e15d8f3ab937bb64924912ec78868a6159d12d65f4f3d2547\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:36.011063 containerd[1680]: time="2025-10-31T01:11:36.010999783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xkvt5,Uid:ab451c17-5e1d-49f8-82b8-eb7dedc414c7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed94f14437b99402b6192049d9a8b52b34b1fab764e0e76b460d57da5a6f7962\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:36.011900 containerd[1680]: time="2025-10-31T01:11:36.011862977Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4m4m9,Uid:65c6a159-851f-4aa0-86f0-ed319b59746c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"731e3f53f624f05e15d8f3ab937bb64924912ec78868a6159d12d65f4f3d2547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:36.011951 kubelet[2970]: E1031 01:11:36.011873 2970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ed94f14437b99402b6192049d9a8b52b34b1fab764e0e76b460d57da5a6f7962\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:36.011951 kubelet[2970]: E1031 01:11:36.011940 2970 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed94f14437b99402b6192049d9a8b52b34b1fab764e0e76b460d57da5a6f7962\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xkvt5" Oct 31 01:11:36.012491 kubelet[2970]: E1031 01:11:36.011954 2970 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed94f14437b99402b6192049d9a8b52b34b1fab764e0e76b460d57da5a6f7962\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xkvt5" Oct 31 01:11:36.012491 kubelet[2970]: E1031 01:11:36.012008 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xkvt5_kube-system(ab451c17-5e1d-49f8-82b8-eb7dedc414c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xkvt5_kube-system(ab451c17-5e1d-49f8-82b8-eb7dedc414c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed94f14437b99402b6192049d9a8b52b34b1fab764e0e76b460d57da5a6f7962\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xkvt5" podUID="ab451c17-5e1d-49f8-82b8-eb7dedc414c7" Oct 31 01:11:36.012491 kubelet[2970]: E1031 01:11:36.012202 2970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"731e3f53f624f05e15d8f3ab937bb64924912ec78868a6159d12d65f4f3d2547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:36.012568 kubelet[2970]: E1031 01:11:36.012225 2970 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"731e3f53f624f05e15d8f3ab937bb64924912ec78868a6159d12d65f4f3d2547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-4m4m9" Oct 31 01:11:36.012568 kubelet[2970]: E1031 01:11:36.012247 2970 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"731e3f53f624f05e15d8f3ab937bb64924912ec78868a6159d12d65f4f3d2547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-4m4m9" Oct 31 01:11:36.012568 kubelet[2970]: E1031 01:11:36.012270 2970 pod_workers.go:1324] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-4m4m9_calico-system(65c6a159-851f-4aa0-86f0-ed319b59746c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-4m4m9_calico-system(65c6a159-851f-4aa0-86f0-ed319b59746c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"731e3f53f624f05e15d8f3ab937bb64924912ec78868a6159d12d65f4f3d2547\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-4m4m9" podUID="65c6a159-851f-4aa0-86f0-ed319b59746c" Oct 31 01:11:36.013530 containerd[1680]: time="2025-10-31T01:11:36.013507294Z" level=error msg="Failed to destroy network for sandbox \"8dc694a63da55df34c3f350fd83ae2ced9dc0787e4514269a7ded42394fa6190\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:36.014429 containerd[1680]: time="2025-10-31T01:11:36.014152550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6559f565b6-9xwpc,Uid:014b66aa-7ac9-43b5-8a19-b4ecf0978b6c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dc694a63da55df34c3f350fd83ae2ced9dc0787e4514269a7ded42394fa6190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:36.014543 kubelet[2970]: E1031 01:11:36.014286 2970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dc694a63da55df34c3f350fd83ae2ced9dc0787e4514269a7ded42394fa6190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:11:36.014543 kubelet[2970]: E1031 01:11:36.014315 2970 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dc694a63da55df34c3f350fd83ae2ced9dc0787e4514269a7ded42394fa6190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6559f565b6-9xwpc" Oct 31 01:11:36.014543 kubelet[2970]: E1031 01:11:36.014327 2970 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dc694a63da55df34c3f350fd83ae2ced9dc0787e4514269a7ded42394fa6190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6559f565b6-9xwpc" Oct 31 01:11:36.014631 kubelet[2970]: E1031 01:11:36.014388 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6559f565b6-9xwpc_calico-apiserver(014b66aa-7ac9-43b5-8a19-b4ecf0978b6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6559f565b6-9xwpc_calico-apiserver(014b66aa-7ac9-43b5-8a19-b4ecf0978b6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8dc694a63da55df34c3f350fd83ae2ced9dc0787e4514269a7ded42394fa6190\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6559f565b6-9xwpc" podUID="014b66aa-7ac9-43b5-8a19-b4ecf0978b6c" Oct 31 01:11:36.391462 systemd[1]: run-netns-cni\x2d2e3c601b\x2d1774\x2d7836\x2da037\x2d474c9ad2f9d1.mount: Deactivated successfully. Oct 31 01:11:40.939213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2389817181.mount: Deactivated successfully. Oct 31 01:11:41.039418 containerd[1680]: time="2025-10-31T01:11:41.039389901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:11:41.042712 containerd[1680]: time="2025-10-31T01:11:41.042150033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 31 01:11:41.044937 containerd[1680]: time="2025-10-31T01:11:41.044524394Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:11:41.045489 containerd[1680]: time="2025-10-31T01:11:41.045079392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:11:41.046274 containerd[1680]: time="2025-10-31T01:11:41.046259945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.424031289s" Oct 31 01:11:41.046332 containerd[1680]: time="2025-10-31T01:11:41.046322366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 31 01:11:41.083878 containerd[1680]: time="2025-10-31T01:11:41.083858453Z" level=info msg="CreateContainer within sandbox \"7b1e420372c22b85933a0762385557dcebcfbdc937818b4adf40decbf5ab3316\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 01:11:41.099394 containerd[1680]: time="2025-10-31T01:11:41.099372973Z" level=info msg="Container fa52027747a8cfa4b6de2bedc7f5ec18148e4ea9d3d9f2f043d94fbfd8f38731: CDI devices from CRI Config.CDIDevices: []" Oct 31 01:11:41.100357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3370525601.mount: Deactivated successfully. 
Oct 31 01:11:41.137892 containerd[1680]: time="2025-10-31T01:11:41.137861477Z" level=info msg="CreateContainer within sandbox \"7b1e420372c22b85933a0762385557dcebcfbdc937818b4adf40decbf5ab3316\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fa52027747a8cfa4b6de2bedc7f5ec18148e4ea9d3d9f2f043d94fbfd8f38731\"" Oct 31 01:11:41.139376 containerd[1680]: time="2025-10-31T01:11:41.139359788Z" level=info msg="StartContainer for \"fa52027747a8cfa4b6de2bedc7f5ec18148e4ea9d3d9f2f043d94fbfd8f38731\"" Oct 31 01:11:41.145687 containerd[1680]: time="2025-10-31T01:11:41.145665029Z" level=info msg="connecting to shim fa52027747a8cfa4b6de2bedc7f5ec18148e4ea9d3d9f2f043d94fbfd8f38731" address="unix:///run/containerd/s/bf391fff68e4380b253c781579cfb433a3a4367c6cb927199ce3330767743a42" protocol=ttrpc version=3 Oct 31 01:11:41.217817 systemd[1]: Started cri-containerd-fa52027747a8cfa4b6de2bedc7f5ec18148e4ea9d3d9f2f043d94fbfd8f38731.scope - libcontainer container fa52027747a8cfa4b6de2bedc7f5ec18148e4ea9d3d9f2f043d94fbfd8f38731. Oct 31 01:11:41.256758 containerd[1680]: time="2025-10-31T01:11:41.256403436Z" level=info msg="StartContainer for \"fa52027747a8cfa4b6de2bedc7f5ec18148e4ea9d3d9f2f043d94fbfd8f38731\" returns successfully" Oct 31 01:11:41.434066 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 01:11:41.436058 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 31 01:11:41.706352 kubelet[2970]: I1031 01:11:41.700360 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hvqf6" podStartSLOduration=2.032647823 podStartE2EDuration="15.700347586s" podCreationTimestamp="2025-10-31 01:11:26 +0000 UTC" firstStartedPulling="2025-10-31 01:11:27.379079031 +0000 UTC m=+19.058156778" lastFinishedPulling="2025-10-31 01:11:41.046778795 +0000 UTC m=+32.725856541" observedRunningTime="2025-10-31 01:11:41.686435482 +0000 UTC m=+33.365513238" watchObservedRunningTime="2025-10-31 01:11:41.700347586 +0000 UTC m=+33.379425344" Oct 31 01:11:41.789415 kubelet[2970]: I1031 01:11:41.789333 2970 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e92d6b8b-bd48-432b-a17d-5d635d6fb001-whisker-backend-key-pair\") pod \"e92d6b8b-bd48-432b-a17d-5d635d6fb001\" (UID: \"e92d6b8b-bd48-432b-a17d-5d635d6fb001\") " Oct 31 01:11:41.789415 kubelet[2970]: I1031 01:11:41.789360 2970 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgzwd\" (UniqueName: \"kubernetes.io/projected/e92d6b8b-bd48-432b-a17d-5d635d6fb001-kube-api-access-lgzwd\") pod \"e92d6b8b-bd48-432b-a17d-5d635d6fb001\" (UID: \"e92d6b8b-bd48-432b-a17d-5d635d6fb001\") " Oct 31 01:11:41.789415 kubelet[2970]: I1031 01:11:41.789372 2970 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e92d6b8b-bd48-432b-a17d-5d635d6fb001-whisker-ca-bundle\") pod \"e92d6b8b-bd48-432b-a17d-5d635d6fb001\" (UID: \"e92d6b8b-bd48-432b-a17d-5d635d6fb001\") " Oct 31 01:11:41.873271 kubelet[2970]: I1031 01:11:41.873234 2970 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e92d6b8b-bd48-432b-a17d-5d635d6fb001-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e92d6b8b-bd48-432b-a17d-5d635d6fb001" (UID: "e92d6b8b-bd48-432b-a17d-5d635d6fb001"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 01:11:41.876323 kubelet[2970]: I1031 01:11:41.876248 2970 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e92d6b8b-bd48-432b-a17d-5d635d6fb001-kube-api-access-lgzwd" (OuterVolumeSpecName: "kube-api-access-lgzwd") pod "e92d6b8b-bd48-432b-a17d-5d635d6fb001" (UID: "e92d6b8b-bd48-432b-a17d-5d635d6fb001"). InnerVolumeSpecName "kube-api-access-lgzwd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 01:11:41.876526 kubelet[2970]: I1031 01:11:41.876500 2970 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e92d6b8b-bd48-432b-a17d-5d635d6fb001-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e92d6b8b-bd48-432b-a17d-5d635d6fb001" (UID: "e92d6b8b-bd48-432b-a17d-5d635d6fb001"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 01:11:41.889852 kubelet[2970]: I1031 01:11:41.889825 2970 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e92d6b8b-bd48-432b-a17d-5d635d6fb001-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 31 01:11:41.889852 kubelet[2970]: I1031 01:11:41.889846 2970 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lgzwd\" (UniqueName: \"kubernetes.io/projected/e92d6b8b-bd48-432b-a17d-5d635d6fb001-kube-api-access-lgzwd\") on node \"localhost\" DevicePath \"\"" Oct 31 01:11:41.889852 kubelet[2970]: I1031 01:11:41.889854 2970 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e92d6b8b-bd48-432b-a17d-5d635d6fb001-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 31 01:11:41.939891 systemd[1]: var-lib-kubelet-pods-e92d6b8b\x2dbd48\x2d432b\x2da17d\x2d5d635d6fb001-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlgzwd.mount: Deactivated successfully. Oct 31 01:11:41.939961 systemd[1]: var-lib-kubelet-pods-e92d6b8b\x2dbd48\x2d432b\x2da17d\x2d5d635d6fb001-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 31 01:11:42.473756 systemd[1]: Removed slice kubepods-besteffort-pode92d6b8b_bd48_432b_a17d_5d635d6fb001.slice - libcontainer container kubepods-besteffort-pode92d6b8b_bd48_432b_a17d_5d635d6fb001.slice. Oct 31 01:11:42.638068 kubelet[2970]: I1031 01:11:42.638049 2970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 01:11:42.708828 systemd[1]: Created slice kubepods-besteffort-pod5d682b35_d05b_4f22_b277_65a604bf6c0b.slice - libcontainer container kubepods-besteffort-pod5d682b35_d05b_4f22_b277_65a604bf6c0b.slice. 
Oct 31 01:11:42.894453 kubelet[2970]: I1031 01:11:42.894416 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d682b35-d05b-4f22-b277-65a604bf6c0b-whisker-ca-bundle\") pod \"whisker-7fbb589785-6lzvp\" (UID: \"5d682b35-d05b-4f22-b277-65a604bf6c0b\") " pod="calico-system/whisker-7fbb589785-6lzvp" Oct 31 01:11:42.894691 kubelet[2970]: I1031 01:11:42.894478 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5d682b35-d05b-4f22-b277-65a604bf6c0b-whisker-backend-key-pair\") pod \"whisker-7fbb589785-6lzvp\" (UID: \"5d682b35-d05b-4f22-b277-65a604bf6c0b\") " pod="calico-system/whisker-7fbb589785-6lzvp" Oct 31 01:11:42.894691 kubelet[2970]: I1031 01:11:42.894496 2970 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27mtq\" (UniqueName: \"kubernetes.io/projected/5d682b35-d05b-4f22-b277-65a604bf6c0b-kube-api-access-27mtq\") pod \"whisker-7fbb589785-6lzvp\" (UID: \"5d682b35-d05b-4f22-b277-65a604bf6c0b\") " pod="calico-system/whisker-7fbb589785-6lzvp" Oct 31 01:11:43.039966 containerd[1680]: time="2025-10-31T01:11:43.039937369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fbb589785-6lzvp,Uid:5d682b35-d05b-4f22-b277-65a604bf6c0b,Namespace:calico-system,Attempt:0,}" Oct 31 01:11:43.637255 systemd-networkd[1554]: calie3c98502b97: Link UP Oct 31 01:11:43.637442 systemd-networkd[1554]: calie3c98502b97: Gained carrier Oct 31 01:11:43.644600 containerd[1680]: 2025-10-31 01:11:43.103 [INFO][4137] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:11:43.644600 containerd[1680]: 2025-10-31 01:11:43.255 [INFO][4137] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7fbb589785--6lzvp-eth0 whisker-7fbb589785- calico-system 5d682b35-d05b-4f22-b277-65a604bf6c0b 919 0 2025-10-31 01:11:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7fbb589785 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7fbb589785-6lzvp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie3c98502b97 [] [] }} ContainerID="3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" Namespace="calico-system" Pod="whisker-7fbb589785-6lzvp" WorkloadEndpoint="localhost-k8s-whisker--7fbb589785--6lzvp-" Oct 31 01:11:43.644600 containerd[1680]: 2025-10-31 01:11:43.255 [INFO][4137] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" Namespace="calico-system" Pod="whisker-7fbb589785-6lzvp" WorkloadEndpoint="localhost-k8s-whisker--7fbb589785--6lzvp-eth0" Oct 31 01:11:43.644600 containerd[1680]: 2025-10-31 01:11:43.556 [INFO][4157] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" HandleID="k8s-pod-network.3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" Workload="localhost-k8s-whisker--7fbb589785--6lzvp-eth0" Oct 31 01:11:43.644708 containerd[1680]: 2025-10-31 01:11:43.560 [INFO][4157] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" HandleID="k8s-pod-network.3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" Workload="localhost-k8s-whisker--7fbb589785--6lzvp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001027e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7fbb589785-6lzvp", "timestamp":"2025-10-31 01:11:43.556909062 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:11:43.644708 containerd[1680]: 2025-10-31 01:11:43.560 [INFO][4157] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:11:43.644708 containerd[1680]: 2025-10-31 01:11:43.561 [INFO][4157] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:11:43.644708 containerd[1680]: 2025-10-31 01:11:43.561 [INFO][4157] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:11:43.644708 containerd[1680]: 2025-10-31 01:11:43.584 [INFO][4157] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" host="localhost" Oct 31 01:11:43.644708 containerd[1680]: 2025-10-31 01:11:43.604 [INFO][4157] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:11:43.644708 containerd[1680]: 2025-10-31 01:11:43.607 [INFO][4157] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:11:43.644708 containerd[1680]: 2025-10-31 01:11:43.609 [INFO][4157] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:43.644708 containerd[1680]: 2025-10-31 01:11:43.610 [INFO][4157] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:43.644708 containerd[1680]: 2025-10-31 01:11:43.610 [INFO][4157] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" host="localhost" Oct 31 01:11:43.645379 containerd[1680]: 2025-10-31 01:11:43.611 [INFO][4157] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43 Oct 31 01:11:43.645379 containerd[1680]: 2025-10-31 01:11:43.613 [INFO][4157] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" host="localhost" Oct 31 01:11:43.645379 containerd[1680]: 2025-10-31 01:11:43.616 [INFO][4157] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" host="localhost" Oct 31 01:11:43.645379 containerd[1680]: 2025-10-31 01:11:43.616 [INFO][4157] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" host="localhost" Oct 31 01:11:43.645379 containerd[1680]: 2025-10-31 01:11:43.616 [INFO][4157] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:11:43.645379 containerd[1680]: 2025-10-31 01:11:43.616 [INFO][4157] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" HandleID="k8s-pod-network.3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" Workload="localhost-k8s-whisker--7fbb589785--6lzvp-eth0" Oct 31 01:11:43.645474 containerd[1680]: 2025-10-31 01:11:43.618 [INFO][4137] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" Namespace="calico-system" Pod="whisker-7fbb589785-6lzvp" WorkloadEndpoint="localhost-k8s-whisker--7fbb589785--6lzvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7fbb589785--6lzvp-eth0", GenerateName:"whisker-7fbb589785-", Namespace:"calico-system", SelfLink:"", UID:"5d682b35-d05b-4f22-b277-65a604bf6c0b", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7fbb589785", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7fbb589785-6lzvp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie3c98502b97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:43.645474 containerd[1680]: 2025-10-31 01:11:43.618 [INFO][4137] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" Namespace="calico-system" Pod="whisker-7fbb589785-6lzvp" WorkloadEndpoint="localhost-k8s-whisker--7fbb589785--6lzvp-eth0" Oct 31 01:11:43.645532 containerd[1680]: 2025-10-31 01:11:43.618 [INFO][4137] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3c98502b97 ContainerID="3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" Namespace="calico-system" Pod="whisker-7fbb589785-6lzvp" WorkloadEndpoint="localhost-k8s-whisker--7fbb589785--6lzvp-eth0" Oct 31 01:11:43.645532 containerd[1680]: 2025-10-31 01:11:43.629 [INFO][4137] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" Namespace="calico-system" Pod="whisker-7fbb589785-6lzvp" WorkloadEndpoint="localhost-k8s-whisker--7fbb589785--6lzvp-eth0" Oct 31 01:11:43.645565 containerd[1680]: 2025-10-31 01:11:43.630 [INFO][4137] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" Namespace="calico-system" Pod="whisker-7fbb589785-6lzvp" WorkloadEndpoint="localhost-k8s-whisker--7fbb589785--6lzvp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7fbb589785--6lzvp-eth0", GenerateName:"whisker-7fbb589785-", Namespace:"calico-system", SelfLink:"", UID:"5d682b35-d05b-4f22-b277-65a604bf6c0b", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7fbb589785", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43", Pod:"whisker-7fbb589785-6lzvp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie3c98502b97", MAC:"0a:5e:ce:d4:48:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:43.645603 containerd[1680]: 2025-10-31 01:11:43.635 [INFO][4137] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" Namespace="calico-system" Pod="whisker-7fbb589785-6lzvp" WorkloadEndpoint="localhost-k8s-whisker--7fbb589785--6lzvp-eth0" Oct 31 01:11:43.712415 containerd[1680]: time="2025-10-31T01:11:43.712363900Z" level=info msg="connecting to shim 3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43" address="unix:///run/containerd/s/7a3463fd3eb1b6a20cf5562de9c7c6d7138bc6171db2bd068ae4cbeb9c17db43" namespace=k8s.io protocol=ttrpc version=3 Oct 31 01:11:43.731820 systemd[1]: Started cri-containerd-3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43.scope - libcontainer container 3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43. 
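The IPAM trace above shows the assignment path: the plugin takes the host-wide IPAM lock, confirms this host's affinity to the block 192.168.88.128/26, and claims 192.168.88.129/32 for the whisker endpoint. A small Go sketch, assuming only the values printed in the log, that double-checks the containment arithmetic with the standard net/netip package:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The affine block and the address assigned to whisker-7fbb589785-6lzvp above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	addr := netip.MustParseAddr("192.168.88.129")

	// A /26 spans 64 addresses (192.168.88.128-192.168.88.191); the first address
	// after the block's network address is .129, matching the log's assignment.
	fmt.Println(block.Contains(addr)) // true
	fmt.Println(block.Masked())       // 192.168.88.128/26
}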
Oct 31 01:11:43.742257 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:11:43.811013 containerd[1680]: time="2025-10-31T01:11:43.810984398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fbb589785-6lzvp,Uid:5d682b35-d05b-4f22-b277-65a604bf6c0b,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e272b7078c82eb173b9462604f9384b97b65ae627034e8b39211e1e60f29f43\"" Oct 31 01:11:43.815227 containerd[1680]: time="2025-10-31T01:11:43.815197116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:11:44.175022 containerd[1680]: time="2025-10-31T01:11:44.174985458Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:11:44.175386 containerd[1680]: time="2025-10-31T01:11:44.175358961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:11:44.175476 containerd[1680]: time="2025-10-31T01:11:44.175416796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 01:11:44.177858 kubelet[2970]: E1031 01:11:44.177826 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:11:44.178053 kubelet[2970]: E1031 01:11:44.177869 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:11:44.178053 kubelet[2970]: E1031 01:11:44.177930 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7fbb589785-6lzvp_calico-system(5d682b35-d05b-4f22-b277-65a604bf6c0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:11:44.178692 containerd[1680]: time="2025-10-31T01:11:44.178673030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:11:44.469116 kubelet[2970]: I1031 01:11:44.469043 2970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e92d6b8b-bd48-432b-a17d-5d635d6fb001" path="/var/lib/kubelet/pods/e92d6b8b-bd48-432b-a17d-5d635d6fb001/volumes" Oct 31 01:11:44.542237 containerd[1680]: time="2025-10-31T01:11:44.542199454Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:11:44.542615 containerd[1680]: time="2025-10-31T01:11:44.542586152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 01:11:44.542679 containerd[1680]: time="2025-10-31T01:11:44.542667346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 01:11:44.542841 kubelet[2970]: E1031 01:11:44.542808 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:11:44.542900 kubelet[2970]: E1031 01:11:44.542847 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:11:44.542930 kubelet[2970]: E1031 01:11:44.542908 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7fbb589785-6lzvp_calico-system(5d682b35-d05b-4f22-b277-65a604bf6c0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:11:44.542959 kubelet[2970]: E1031 01:11:44.542941 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fbb589785-6lzvp" podUID="5d682b35-d05b-4f22-b277-65a604bf6c0b" Oct 31 01:11:44.641335 kubelet[2970]: E1031 01:11:44.641283 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-7fbb589785-6lzvp" podUID="5d682b35-d05b-4f22-b277-65a604bf6c0b" Oct 31 01:11:44.941894 kubelet[2970]: I1031 01:11:44.941864 2970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 01:11:44.980699 systemd-networkd[1554]: calie3c98502b97: Gained IPv6LL Oct 31 01:11:45.035642 containerd[1680]: time="2025-10-31T01:11:45.035618433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa52027747a8cfa4b6de2bedc7f5ec18148e4ea9d3d9f2f043d94fbfd8f38731\" id:\"c6ad425ee94091d5b8fbd7a0c0a45b1c60852a51dc5f172f374d8808653f2210\" pid:4252 exit_status:1 exited_at:{seconds:1761873105 nanos:35432891}" Oct 31 01:11:45.097518 containerd[1680]: time="2025-10-31T01:11:45.097496350Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa52027747a8cfa4b6de2bedc7f5ec18148e4ea9d3d9f2f043d94fbfd8f38731\" id:\"9298fd67dcd4fc480e9e85f7d55d5ff229b6f7c612ef27686a862cdb0172971d\" pid:4277 exit_status:1 exited_at:{seconds:1761873105 nanos:97319091}" Oct 31 01:11:45.642965 kubelet[2970]: E1031 01:11:45.642629 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fbb589785-6lzvp" podUID="5d682b35-d05b-4f22-b277-65a604bf6c0b" Oct 31 01:11:46.470576 containerd[1680]: time="2025-10-31T01:11:46.470327325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4j67c,Uid:c7606c94-65d4-44ae-9466-226a1af8c528,Namespace:calico-system,Attempt:0,}" Oct 31 01:11:46.556313 systemd-networkd[1554]: cali6761345ee9d: Link UP Oct 31 01:11:46.556896 systemd-networkd[1554]: cali6761345ee9d: Gained carrier Oct 31 01:11:46.567162 containerd[1680]: 2025-10-31 01:11:46.490 [INFO][4331] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:11:46.567162 containerd[1680]: 2025-10-31 01:11:46.496 [INFO][4331] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4j67c-eth0 csi-node-driver- calico-system c7606c94-65d4-44ae-9466-226a1af8c528 747 0 2025-10-31 01:11:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4j67c eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6761345ee9d [] [] }} ContainerID="505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" Namespace="calico-system" Pod="csi-node-driver-4j67c" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--4j67c-" Oct 31 01:11:46.567162 containerd[1680]: 2025-10-31 01:11:46.496 [INFO][4331] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" Namespace="calico-system" Pod="csi-node-driver-4j67c" WorkloadEndpoint="localhost-k8s-csi--node--driver--4j67c-eth0" Oct 31 01:11:46.567162 containerd[1680]: 2025-10-31 01:11:46.534 [INFO][4343] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" HandleID="k8s-pod-network.505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" Workload="localhost-k8s-csi--node--driver--4j67c-eth0" Oct 31 01:11:46.567319 containerd[1680]: 2025-10-31 01:11:46.534 [INFO][4343] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" HandleID="k8s-pod-network.505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" Workload="localhost-k8s-csi--node--driver--4j67c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4j67c", "timestamp":"2025-10-31 01:11:46.534492221 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:11:46.567319 containerd[1680]: 2025-10-31 01:11:46.534 [INFO][4343] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:11:46.567319 containerd[1680]: 2025-10-31 01:11:46.534 [INFO][4343] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:11:46.567319 containerd[1680]: 2025-10-31 01:11:46.534 [INFO][4343] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:11:46.567319 containerd[1680]: 2025-10-31 01:11:46.539 [INFO][4343] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" host="localhost" Oct 31 01:11:46.567319 containerd[1680]: 2025-10-31 01:11:46.541 [INFO][4343] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:11:46.567319 containerd[1680]: 2025-10-31 01:11:46.543 [INFO][4343] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:11:46.567319 containerd[1680]: 2025-10-31 01:11:46.544 [INFO][4343] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:46.567319 containerd[1680]: 2025-10-31 01:11:46.545 [INFO][4343] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:46.567319 containerd[1680]: 2025-10-31 01:11:46.545 [INFO][4343] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" host="localhost" Oct 31 01:11:46.567484 containerd[1680]: 2025-10-31 01:11:46.546 [INFO][4343] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef Oct 31 01:11:46.567484 containerd[1680]: 2025-10-31 01:11:46.548 [INFO][4343] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" host="localhost" Oct 31 01:11:46.567484 containerd[1680]: 2025-10-31 01:11:46.550 [INFO][4343] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" host="localhost" Oct 31 01:11:46.567484 containerd[1680]: 2025-10-31 01:11:46.550 [INFO][4343] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" host="localhost" Oct 31 01:11:46.567484 containerd[1680]: 2025-10-31 01:11:46.550 [INFO][4343] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:11:46.567484 containerd[1680]: 2025-10-31 01:11:46.550 [INFO][4343] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" HandleID="k8s-pod-network.505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" Workload="localhost-k8s-csi--node--driver--4j67c-eth0" Oct 31 01:11:46.567588 containerd[1680]: 2025-10-31 01:11:46.553 [INFO][4331] cni-plugin/k8s.go 418: Populated endpoint ContainerID="505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" Namespace="calico-system" Pod="csi-node-driver-4j67c" WorkloadEndpoint="localhost-k8s-csi--node--driver--4j67c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4j67c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7606c94-65d4-44ae-9466-226a1af8c528", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4j67c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6761345ee9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:46.567628 containerd[1680]: 2025-10-31 01:11:46.553 [INFO][4331] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" Namespace="calico-system" Pod="csi-node-driver-4j67c" WorkloadEndpoint="localhost-k8s-csi--node--driver--4j67c-eth0" Oct 31 01:11:46.567628 containerd[1680]: 2025-10-31 01:11:46.553 [INFO][4331] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6761345ee9d ContainerID="505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" Namespace="calico-system" Pod="csi-node-driver-4j67c" WorkloadEndpoint="localhost-k8s-csi--node--driver--4j67c-eth0" Oct 31 01:11:46.567628 containerd[1680]: 2025-10-31 01:11:46.557 [INFO][4331] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" Namespace="calico-system" Pod="csi-node-driver-4j67c" WorkloadEndpoint="localhost-k8s-csi--node--driver--4j67c-eth0" Oct 31 01:11:46.567680 containerd[1680]: 2025-10-31 01:11:46.559 [INFO][4331] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" Namespace="calico-system" Pod="csi-node-driver-4j67c" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--4j67c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4j67c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7606c94-65d4-44ae-9466-226a1af8c528", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef", Pod:"csi-node-driver-4j67c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6761345ee9d", MAC:"a2:ac:64:28:27:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:46.567790 containerd[1680]: 2025-10-31 01:11:46.564 [INFO][4331] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" Namespace="calico-system" Pod="csi-node-driver-4j67c" WorkloadEndpoint="localhost-k8s-csi--node--driver--4j67c-eth0" Oct 31 01:11:46.578938 containerd[1680]: time="2025-10-31T01:11:46.578903637Z" level=info msg="connecting to shim 505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef" address="unix:///run/containerd/s/9b14013bf7bb819169834a20c0afe257f7944a5cac9636c111421e761d6841d2" namespace=k8s.io protocol=ttrpc version=3 Oct 31 01:11:46.599010 systemd[1]: Started cri-containerd-505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef.scope - libcontainer container 505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef. 
Oct 31 01:11:46.608254 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:11:46.616925 containerd[1680]: time="2025-10-31T01:11:46.616899738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4j67c,Uid:c7606c94-65d4-44ae-9466-226a1af8c528,Namespace:calico-system,Attempt:0,} returns sandbox id \"505cb97bf6ffb08bce8c1f08de999db83cc30ec30a37a4f1dad08403d07312ef\"" Oct 31 01:11:46.629940 containerd[1680]: time="2025-10-31T01:11:46.629913263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:11:46.953053 containerd[1680]: time="2025-10-31T01:11:46.953016776Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:11:46.953521 containerd[1680]: time="2025-10-31T01:11:46.953464003Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:11:46.953753 containerd[1680]: time="2025-10-31T01:11:46.953520918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 01:11:46.953793 kubelet[2970]: E1031 01:11:46.953636 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:11:46.953793 kubelet[2970]: E1031 01:11:46.953677 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:11:46.955048 kubelet[2970]: E1031 01:11:46.953994 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-4j67c_calico-system(c7606c94-65d4-44ae-9466-226a1af8c528): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:11:46.955601 containerd[1680]: time="2025-10-31T01:11:46.955551810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:11:47.333642 containerd[1680]: time="2025-10-31T01:11:47.333597511Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:11:47.333988 containerd[1680]: time="2025-10-31T01:11:47.333948240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:11:47.334034 containerd[1680]: time="2025-10-31T01:11:47.334017993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" 
Oct 31 01:11:47.334188 kubelet[2970]: E1031 01:11:47.334127 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:11:47.334188 kubelet[2970]: E1031 01:11:47.334173 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:11:47.334404 kubelet[2970]: E1031 01:11:47.334321 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-4j67c_calico-system(c7606c94-65d4-44ae-9466-226a1af8c528): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 01:11:47.334590 kubelet[2970]: E1031 01:11:47.334382 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j67c" podUID="c7606c94-65d4-44ae-9466-226a1af8c528" Oct 31 01:11:47.650457 kubelet[2970]: E1031 01:11:47.649941 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j67c" podUID="c7606c94-65d4-44ae-9466-226a1af8c528" Oct 31 01:11:48.175975 systemd-networkd[1554]: cali6761345ee9d: Gained IPv6LL Oct 31 01:11:48.468936 containerd[1680]: 
time="2025-10-31T01:11:48.468858599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xkvt5,Uid:ab451c17-5e1d-49f8-82b8-eb7dedc414c7,Namespace:kube-system,Attempt:0,}" Oct 31 01:11:48.477525 containerd[1680]: time="2025-10-31T01:11:48.477502856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4m4m9,Uid:65c6a159-851f-4aa0-86f0-ed319b59746c,Namespace:calico-system,Attempt:0,}" Oct 31 01:11:48.594575 systemd-networkd[1554]: cali662d73f31c2: Link UP Oct 31 01:11:48.594706 systemd-networkd[1554]: cali662d73f31c2: Gained carrier Oct 31 01:11:48.617872 containerd[1680]: 2025-10-31 01:11:48.500 [INFO][4421] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:11:48.617872 containerd[1680]: 2025-10-31 01:11:48.509 [INFO][4421] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--xkvt5-eth0 coredns-66bc5c9577- kube-system ab451c17-5e1d-49f8-82b8-eb7dedc414c7 842 0 2025-10-31 01:11:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-xkvt5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali662d73f31c2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" Namespace="kube-system" Pod="coredns-66bc5c9577-xkvt5" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xkvt5-" Oct 31 01:11:48.617872 containerd[1680]: 2025-10-31 01:11:48.509 [INFO][4421] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" Namespace="kube-system" Pod="coredns-66bc5c9577-xkvt5" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xkvt5-eth0" Oct 31 01:11:48.617872 containerd[1680]: 2025-10-31 01:11:48.552 [INFO][4456] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" HandleID="k8s-pod-network.486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" Workload="localhost-k8s-coredns--66bc5c9577--xkvt5-eth0" Oct 31 01:11:48.618054 containerd[1680]: 2025-10-31 01:11:48.554 [INFO][4456] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" HandleID="k8s-pod-network.486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" Workload="localhost-k8s-coredns--66bc5c9577--xkvt5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df6d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-xkvt5", "timestamp":"2025-10-31 01:11:48.552469007 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:11:48.618054 containerd[1680]: 2025-10-31 01:11:48.554 [INFO][4456] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:11:48.618054 containerd[1680]: 2025-10-31 01:11:48.554 [INFO][4456] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:11:48.618054 containerd[1680]: 2025-10-31 01:11:48.554 [INFO][4456] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:11:48.618054 containerd[1680]: 2025-10-31 01:11:48.569 [INFO][4456] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" host="localhost" Oct 31 01:11:48.618054 containerd[1680]: 2025-10-31 01:11:48.572 [INFO][4456] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:11:48.618054 containerd[1680]: 2025-10-31 01:11:48.575 [INFO][4456] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:11:48.618054 containerd[1680]: 2025-10-31 01:11:48.577 [INFO][4456] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:48.618054 containerd[1680]: 2025-10-31 01:11:48.581 [INFO][4456] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:48.618054 containerd[1680]: 2025-10-31 01:11:48.581 [INFO][4456] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" host="localhost" Oct 31 01:11:48.620042 containerd[1680]: 2025-10-31 01:11:48.582 [INFO][4456] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4 Oct 31 01:11:48.620042 containerd[1680]: 2025-10-31 01:11:48.584 [INFO][4456] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" host="localhost" Oct 31 01:11:48.620042 containerd[1680]: 2025-10-31 01:11:48.587 [INFO][4456] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" host="localhost" Oct 31 01:11:48.620042 containerd[1680]: 2025-10-31 01:11:48.587 [INFO][4456] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" host="localhost" Oct 31 01:11:48.620042 containerd[1680]: 2025-10-31 01:11:48.587 [INFO][4456] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:11:48.620042 containerd[1680]: 2025-10-31 01:11:48.587 [INFO][4456] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" HandleID="k8s-pod-network.486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" Workload="localhost-k8s-coredns--66bc5c9577--xkvt5-eth0" Oct 31 01:11:48.620144 containerd[1680]: 2025-10-31 01:11:48.589 [INFO][4421] cni-plugin/k8s.go 418: Populated endpoint ContainerID="486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" Namespace="kube-system" Pod="coredns-66bc5c9577-xkvt5" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xkvt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xkvt5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ab451c17-5e1d-49f8-82b8-eb7dedc414c7", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-xkvt5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali662d73f31c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:48.620144 containerd[1680]: 2025-10-31 01:11:48.589 [INFO][4421] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" Namespace="kube-system" Pod="coredns-66bc5c9577-xkvt5" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xkvt5-eth0" Oct 31 01:11:48.620144 containerd[1680]: 2025-10-31 01:11:48.589 [INFO][4421] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali662d73f31c2 ContainerID="486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" Namespace="kube-system" Pod="coredns-66bc5c9577-xkvt5" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xkvt5-eth0" Oct 31 01:11:48.620144 containerd[1680]: 2025-10-31 01:11:48.595 
[INFO][4421] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" Namespace="kube-system" Pod="coredns-66bc5c9577-xkvt5" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xkvt5-eth0" Oct 31 01:11:48.620144 containerd[1680]: 2025-10-31 01:11:48.595 [INFO][4421] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" Namespace="kube-system" Pod="coredns-66bc5c9577-xkvt5" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xkvt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xkvt5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ab451c17-5e1d-49f8-82b8-eb7dedc414c7", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4", Pod:"coredns-66bc5c9577-xkvt5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali662d73f31c2", MAC:"66:78:f6:28:c6:2a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:48.620144 containerd[1680]: 2025-10-31 01:11:48.614 [INFO][4421] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" Namespace="kube-system" Pod="coredns-66bc5c9577-xkvt5" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xkvt5-eth0" Oct 31 01:11:48.640810 containerd[1680]: time="2025-10-31T01:11:48.640776928Z" level=info msg="connecting to shim 486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4" address="unix:///run/containerd/s/e8e7983638299c529b46b3ec06eb518ea3466aa0640bf76a6fa1a8383800da49" namespace=k8s.io protocol=ttrpc version=3 Oct 31 01:11:48.668876 systemd[1]: Started 
cri-containerd-486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4.scope - libcontainer container 486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4. Oct 31 01:11:48.684766 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:11:48.701013 systemd-networkd[1554]: cali8af9d44d5ba: Link UP Oct 31 01:11:48.702049 systemd-networkd[1554]: cali8af9d44d5ba: Gained carrier Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.525 [INFO][4438] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.546 [INFO][4438] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--4m4m9-eth0 goldmane-7c778bb748- calico-system 65c6a159-851f-4aa0-86f0-ed319b59746c 852 0 2025-10-31 01:11:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-4m4m9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8af9d44d5ba [] [] }} ContainerID="a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" Namespace="calico-system" Pod="goldmane-7c778bb748-4m4m9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4m4m9-" Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.546 [INFO][4438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" Namespace="calico-system" Pod="goldmane-7c778bb748-4m4m9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4m4m9-eth0" Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.592 [INFO][4469] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" HandleID="k8s-pod-network.a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" Workload="localhost-k8s-goldmane--7c778bb748--4m4m9-eth0" Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.592 [INFO][4469] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" HandleID="k8s-pod-network.a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" Workload="localhost-k8s-goldmane--7c778bb748--4m4m9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-4m4m9", "timestamp":"2025-10-31 01:11:48.592420599 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.592 [INFO][4469] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.592 [INFO][4469] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.592 [INFO][4469] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.666 [INFO][4469] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" host="localhost" Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.674 [INFO][4469] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.679 [INFO][4469] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.680 [INFO][4469] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.682 [INFO][4469] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.682 [INFO][4469] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" host="localhost" Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.684 [INFO][4469] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49 Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.691 [INFO][4469] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" host="localhost" Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.695 [INFO][4469] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" host="localhost" Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.695 [INFO][4469] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" host="localhost" Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.695 [INFO][4469] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:11:48.713795 containerd[1680]: 2025-10-31 01:11:48.695 [INFO][4469] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" HandleID="k8s-pod-network.a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" Workload="localhost-k8s-goldmane--7c778bb748--4m4m9-eth0" Oct 31 01:11:48.714402 containerd[1680]: 2025-10-31 01:11:48.698 [INFO][4438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" Namespace="calico-system" Pod="goldmane-7c778bb748-4m4m9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4m4m9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--4m4m9-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"65c6a159-851f-4aa0-86f0-ed319b59746c", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-4m4m9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8af9d44d5ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:48.714402 containerd[1680]: 2025-10-31 01:11:48.698 [INFO][4438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" Namespace="calico-system" Pod="goldmane-7c778bb748-4m4m9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4m4m9-eth0" Oct 31 01:11:48.714402 containerd[1680]: 2025-10-31 01:11:48.698 [INFO][4438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8af9d44d5ba ContainerID="a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" Namespace="calico-system" Pod="goldmane-7c778bb748-4m4m9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4m4m9-eth0" Oct 31 01:11:48.714402 containerd[1680]: 2025-10-31 01:11:48.702 [INFO][4438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" Namespace="calico-system" Pod="goldmane-7c778bb748-4m4m9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4m4m9-eth0" Oct 31 01:11:48.714402 containerd[1680]: 2025-10-31 01:11:48.702 [INFO][4438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" Namespace="calico-system" Pod="goldmane-7c778bb748-4m4m9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4m4m9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--4m4m9-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"65c6a159-851f-4aa0-86f0-ed319b59746c", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49", Pod:"goldmane-7c778bb748-4m4m9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8af9d44d5ba", MAC:"0e:59:4e:22:55:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:48.714402 containerd[1680]: 2025-10-31 01:11:48.710 [INFO][4438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" Namespace="calico-system" Pod="goldmane-7c778bb748-4m4m9" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4m4m9-eth0" Oct 31 01:11:48.731264 containerd[1680]: time="2025-10-31T01:11:48.730948160Z" level=info msg="connecting to shim a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49" address="unix:///run/containerd/s/8e066932efd7cddb6106461417bbdbbc5712cc7aa2c0e39ed7ba928bef17dfe3" namespace=k8s.io protocol=ttrpc version=3 Oct 31 01:11:48.734684 containerd[1680]: time="2025-10-31T01:11:48.734666706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xkvt5,Uid:ab451c17-5e1d-49f8-82b8-eb7dedc414c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4\"" Oct 31 01:11:48.754314 containerd[1680]: time="2025-10-31T01:11:48.754291445Z" level=info msg="CreateContainer within sandbox \"486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 01:11:48.756814 systemd[1]: Started cri-containerd-a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49.scope - libcontainer container a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49. 
Oct 31 01:11:48.766641 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:11:48.767218 containerd[1680]: time="2025-10-31T01:11:48.766893436Z" level=info msg="Container 7f094e572bff9a61f0961adce02af411431a06eefbcc7dc83a8763c584655d63: CDI devices from CRI Config.CDIDevices: []" Oct 31 01:11:48.769629 containerd[1680]: time="2025-10-31T01:11:48.769613249Z" level=info msg="CreateContainer within sandbox \"486b6423ea1b7c1c577c7e8abd105686c3291fbcb0a2cdbf871addfccb8af0c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7f094e572bff9a61f0961adce02af411431a06eefbcc7dc83a8763c584655d63\"" Oct 31 01:11:48.769970 containerd[1680]: time="2025-10-31T01:11:48.769949481Z" level=info msg="StartContainer for \"7f094e572bff9a61f0961adce02af411431a06eefbcc7dc83a8763c584655d63\"" Oct 31 01:11:48.771764 containerd[1680]: time="2025-10-31T01:11:48.770488004Z" level=info msg="connecting to shim 7f094e572bff9a61f0961adce02af411431a06eefbcc7dc83a8763c584655d63" address="unix:///run/containerd/s/e8e7983638299c529b46b3ec06eb518ea3466aa0640bf76a6fa1a8383800da49" protocol=ttrpc version=3 Oct 31 01:11:48.787834 systemd[1]: Started cri-containerd-7f094e572bff9a61f0961adce02af411431a06eefbcc7dc83a8763c584655d63.scope - libcontainer container 7f094e572bff9a61f0961adce02af411431a06eefbcc7dc83a8763c584655d63. Oct 31 01:11:48.800766 containerd[1680]: time="2025-10-31T01:11:48.799945596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4m4m9,Uid:65c6a159-851f-4aa0-86f0-ed319b59746c,Namespace:calico-system,Attempt:0,} returns sandbox id \"a80429b7e6f03f26edef9c04f86e85509c5c37ad89e621f2a6e899bce1062b49\"" Oct 31 01:11:48.802246 containerd[1680]: time="2025-10-31T01:11:48.802133077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:11:48.814596 containerd[1680]: time="2025-10-31T01:11:48.814572225Z" level=info msg="StartContainer for \"7f094e572bff9a61f0961adce02af411431a06eefbcc7dc83a8763c584655d63\" returns successfully" Oct 31 01:11:49.480338 containerd[1680]: time="2025-10-31T01:11:49.480276993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5756f8ffdf-n4khn,Uid:6ebd7d48-8bc4-4814-b512-88ba6c138a34,Namespace:calico-apiserver,Attempt:0,}" Oct 31 01:11:49.488053 containerd[1680]: time="2025-10-31T01:11:49.488024427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6559f565b6-lxd8t,Uid:4d287e78-d822-498f-92dc-e6aaa22a1cfb,Namespace:calico-apiserver,Attempt:0,}" Oct 31 01:11:49.643754 systemd-networkd[1554]: calib968ef53e05: Link UP Oct 31 01:11:49.644421 systemd-networkd[1554]: calib968ef53e05: Gained carrier Oct 31 01:11:49.663079 containerd[1680]: time="2025-10-31T01:11:49.663047680Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.557 [INFO][4625] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.562 [INFO][4625] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6559f565b6--lxd8t-eth0 calico-apiserver-6559f565b6- calico-apiserver 4d287e78-d822-498f-92dc-e6aaa22a1cfb 851 0 2025-10-31 01:11:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6559f565b6 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6559f565b6-lxd8t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib968ef53e05 [] [] }} ContainerID="94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" Namespace="calico-apiserver" Pod="calico-apiserver-6559f565b6-lxd8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6559f565b6--lxd8t-" Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.562 [INFO][4625] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" Namespace="calico-apiserver" Pod="calico-apiserver-6559f565b6-lxd8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6559f565b6--lxd8t-eth0" Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.594 [INFO][4644] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" HandleID="k8s-pod-network.94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" Workload="localhost-k8s-calico--apiserver--6559f565b6--lxd8t-eth0" Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.595 [INFO][4644] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" HandleID="k8s-pod-network.94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" Workload="localhost-k8s-calico--apiserver--6559f565b6--lxd8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6559f565b6-lxd8t", "timestamp":"2025-10-31 01:11:49.594951529 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.595 [INFO][4644] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.595 [INFO][4644] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.595 [INFO][4644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.599 [INFO][4644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" host="localhost" Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.601 [INFO][4644] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.603 [INFO][4644] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.604 [INFO][4644] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.605 [INFO][4644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.605 [INFO][4644] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" host="localhost" Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.605 [INFO][4644] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.607 [INFO][4644] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" host="localhost" Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.615 [INFO][4644] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" host="localhost" Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.615 [INFO][4644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" host="localhost" Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.615 [INFO][4644] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:11:49.685061 containerd[1680]: 2025-10-31 01:11:49.615 [INFO][4644] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" HandleID="k8s-pod-network.94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" Workload="localhost-k8s-calico--apiserver--6559f565b6--lxd8t-eth0" Oct 31 01:11:49.697197 containerd[1680]: 2025-10-31 01:11:49.616 [INFO][4625] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" Namespace="calico-apiserver" Pod="calico-apiserver-6559f565b6-lxd8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6559f565b6--lxd8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6559f565b6--lxd8t-eth0", GenerateName:"calico-apiserver-6559f565b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d287e78-d822-498f-92dc-e6aaa22a1cfb", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6559f565b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6559f565b6-lxd8t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib968ef53e05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:49.697197 containerd[1680]: 2025-10-31 01:11:49.616 [INFO][4625] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" Namespace="calico-apiserver" Pod="calico-apiserver-6559f565b6-lxd8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6559f565b6--lxd8t-eth0" Oct 31 01:11:49.697197 containerd[1680]: 2025-10-31 01:11:49.616 [INFO][4625] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib968ef53e05 ContainerID="94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" Namespace="calico-apiserver" Pod="calico-apiserver-6559f565b6-lxd8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6559f565b6--lxd8t-eth0" Oct 31 01:11:49.697197 containerd[1680]: 2025-10-31 01:11:49.644 [INFO][4625] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" Namespace="calico-apiserver" Pod="calico-apiserver-6559f565b6-lxd8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6559f565b6--lxd8t-eth0" Oct 31 01:11:49.697197 containerd[1680]: 2025-10-31 01:11:49.645 [INFO][4625] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" Namespace="calico-apiserver" Pod="calico-apiserver-6559f565b6-lxd8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6559f565b6--lxd8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6559f565b6--lxd8t-eth0", GenerateName:"calico-apiserver-6559f565b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d287e78-d822-498f-92dc-e6aaa22a1cfb", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6559f565b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a", Pod:"calico-apiserver-6559f565b6-lxd8t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib968ef53e05", MAC:"46:4d:c7:63:63:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:49.697197 containerd[1680]: 2025-10-31 01:11:49.677 [INFO][4625] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" Namespace="calico-apiserver" Pod="calico-apiserver-6559f565b6-lxd8t" WorkloadEndpoint="localhost-k8s-calico--apiserver--6559f565b6--lxd8t-eth0" Oct 31 01:11:49.697197 containerd[1680]: time="2025-10-31T01:11:49.685592205Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 01:11:49.697197 containerd[1680]: time="2025-10-31T01:11:49.685854519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 01:11:49.725868 kubelet[2970]: E1031 01:11:49.725525 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:11:49.725868 kubelet[2970]: E1031 01:11:49.725558 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" 
Oct 31 01:11:49.725868 kubelet[2970]: E1031 01:11:49.725602 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4m4m9_calico-system(65c6a159-851f-4aa0-86f0-ed319b59746c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:11:49.725868 kubelet[2970]: E1031 01:11:49.725624 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4m4m9" podUID="65c6a159-851f-4aa0-86f0-ed319b59746c" Oct 31 01:11:49.752363 systemd-networkd[1554]: cali5695cd4acc3: Link UP Oct 31 01:11:49.752880 systemd-networkd[1554]: cali5695cd4acc3: Gained carrier Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.545 [INFO][4615] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.557 [INFO][4615] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5756f8ffdf--n4khn-eth0 calico-apiserver-5756f8ffdf- calico-apiserver 6ebd7d48-8bc4-4814-b512-88ba6c138a34 847 0 2025-10-31 01:11:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5756f8ffdf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5756f8ffdf-n4khn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5695cd4acc3 [] [] }} ContainerID="39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" Namespace="calico-apiserver" Pod="calico-apiserver-5756f8ffdf-n4khn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756f8ffdf--n4khn-" Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.557 [INFO][4615] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" Namespace="calico-apiserver" Pod="calico-apiserver-5756f8ffdf-n4khn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756f8ffdf--n4khn-eth0" Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.595 [INFO][4638] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" HandleID="k8s-pod-network.39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" Workload="localhost-k8s-calico--apiserver--5756f8ffdf--n4khn-eth0" Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.595 [INFO][4638] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" HandleID="k8s-pod-network.39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" Workload="localhost-k8s-calico--apiserver--5756f8ffdf--n4khn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b73a0), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5756f8ffdf-n4khn", "timestamp":"2025-10-31 01:11:49.595904213 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.596 [INFO][4638] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.615 [INFO][4638] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.615 [INFO][4638] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.700 [INFO][4638] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" host="localhost" Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.706 [INFO][4638] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.710 [INFO][4638] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.711 [INFO][4638] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.713 [INFO][4638] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.713 [INFO][4638] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" host="localhost" Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.715 [INFO][4638] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3 Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.725 [INFO][4638] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" host="localhost" Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.746 [INFO][4638] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" host="localhost" Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.746 [INFO][4638] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" host="localhost" Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.747 [INFO][4638] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:11:49.776869 containerd[1680]: 2025-10-31 01:11:49.747 [INFO][4638] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" HandleID="k8s-pod-network.39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" Workload="localhost-k8s-calico--apiserver--5756f8ffdf--n4khn-eth0" Oct 31 01:11:49.800999 containerd[1680]: 2025-10-31 01:11:49.749 [INFO][4615] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" Namespace="calico-apiserver" Pod="calico-apiserver-5756f8ffdf-n4khn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756f8ffdf--n4khn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5756f8ffdf--n4khn-eth0", GenerateName:"calico-apiserver-5756f8ffdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6ebd7d48-8bc4-4814-b512-88ba6c138a34", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5756f8ffdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5756f8ffdf-n4khn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5695cd4acc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:49.800999 containerd[1680]: 2025-10-31 01:11:49.749 [INFO][4615] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" Namespace="calico-apiserver" Pod="calico-apiserver-5756f8ffdf-n4khn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756f8ffdf--n4khn-eth0" Oct 31 01:11:49.800999 containerd[1680]: 2025-10-31 01:11:49.749 [INFO][4615] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5695cd4acc3 ContainerID="39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" Namespace="calico-apiserver" Pod="calico-apiserver-5756f8ffdf-n4khn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756f8ffdf--n4khn-eth0" Oct 31 01:11:49.800999 containerd[1680]: 2025-10-31 01:11:49.752 [INFO][4615] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" Namespace="calico-apiserver" Pod="calico-apiserver-5756f8ffdf-n4khn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756f8ffdf--n4khn-eth0" Oct 31 01:11:49.800999 containerd[1680]: 2025-10-31 01:11:49.752 [INFO][4615] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" Namespace="calico-apiserver" Pod="calico-apiserver-5756f8ffdf-n4khn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756f8ffdf--n4khn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5756f8ffdf--n4khn-eth0", GenerateName:"calico-apiserver-5756f8ffdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6ebd7d48-8bc4-4814-b512-88ba6c138a34", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5756f8ffdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3", Pod:"calico-apiserver-5756f8ffdf-n4khn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5695cd4acc3", MAC:"f2:6a:10:45:bf:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:49.800999 containerd[1680]: 2025-10-31 01:11:49.773 [INFO][4615] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" Namespace="calico-apiserver" Pod="calico-apiserver-5756f8ffdf-n4khn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5756f8ffdf--n4khn-eth0" Oct 31 01:11:49.800999 containerd[1680]: time="2025-10-31T01:11:49.787754702Z" level=info msg="connecting to shim 94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a" address="unix:///run/containerd/s/82968fa71f3b2f4d47015607c3c2c3df624c1adf7705e8d37d87f85099df485e" namespace=k8s.io protocol=ttrpc version=3 Oct 31 01:11:49.822087 systemd[1]: Started cri-containerd-94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a.scope - libcontainer container 94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a. 
Oct 31 01:11:49.848761 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:11:49.862704 kubelet[2970]: I1031 01:11:49.862243 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xkvt5" podStartSLOduration=35.821545239 podStartE2EDuration="35.821545239s" podCreationTimestamp="2025-10-31 01:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:11:49.795773024 +0000 UTC m=+41.474850779" watchObservedRunningTime="2025-10-31 01:11:49.821545239 +0000 UTC m=+41.500622995" Oct 31 01:11:49.911687 containerd[1680]: time="2025-10-31T01:11:49.911660738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6559f565b6-lxd8t,Uid:4d287e78-d822-498f-92dc-e6aaa22a1cfb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"94029c4d2ed7fed6fdd8605edd82e528c942d31c33c8f2f7fbef894221fb862a\"" Oct 31 01:11:49.916889 containerd[1680]: time="2025-10-31T01:11:49.915770883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:11:49.942767 containerd[1680]: time="2025-10-31T01:11:49.942740877Z" level=info msg="connecting to shim 39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3" address="unix:///run/containerd/s/b7badaeb6e6a3ff33a6f48f80decf4e5f55cfe0b95f8e56d76f149cc6250228a" namespace=k8s.io protocol=ttrpc version=3 Oct 31 01:11:49.960823 systemd[1]: Started cri-containerd-39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3.scope - libcontainer container 39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3. Oct 31 01:11:49.967824 systemd-networkd[1554]: cali8af9d44d5ba: Gained IPv6LL Oct 31 01:11:49.968355 systemd-networkd[1554]: cali662d73f31c2: Gained IPv6LL Oct 31 01:11:49.970267 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:11:49.998872 containerd[1680]: time="2025-10-31T01:11:49.998802703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5756f8ffdf-n4khn,Uid:6ebd7d48-8bc4-4814-b512-88ba6c138a34,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"39bbfeee528ae0917a37647882cd58aa83f328a7ec6f33e9e600943a885940a3\"" Oct 31 01:11:50.297751 containerd[1680]: time="2025-10-31T01:11:50.297636358Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:11:50.298182 containerd[1680]: time="2025-10-31T01:11:50.298093263Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:11:50.298182 containerd[1680]: time="2025-10-31T01:11:50.298151601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 01:11:50.298323 kubelet[2970]: E1031 01:11:50.298288 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 
01:11:50.298370 kubelet[2970]: E1031 01:11:50.298339 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:11:50.298481 kubelet[2970]: E1031 01:11:50.298459 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6559f565b6-lxd8t_calico-apiserver(4d287e78-d822-498f-92dc-e6aaa22a1cfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:11:50.298670 kubelet[2970]: E1031 01:11:50.298642 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6559f565b6-lxd8t" podUID="4d287e78-d822-498f-92dc-e6aaa22a1cfb" Oct 31 01:11:50.298924 containerd[1680]: time="2025-10-31T01:11:50.298904903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:11:50.477906 containerd[1680]: time="2025-10-31T01:11:50.477864849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-98l5j,Uid:d59878ec-d6a3-4ecc-a664-a6bb44484f80,Namespace:kube-system,Attempt:0,}" Oct 31 01:11:50.478260 containerd[1680]: time="2025-10-31T01:11:50.478191892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6559f565b6-9xwpc,Uid:014b66aa-7ac9-43b5-8a19-b4ecf0978b6c,Namespace:calico-apiserver,Attempt:0,}" Oct 31 01:11:50.482703 containerd[1680]: time="2025-10-31T01:11:50.480215918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-577dc64884-7r78b,Uid:5298c82b-bc3f-4f82-8aba-c9069839de1b,Namespace:calico-system,Attempt:0,}" Oct 31 01:11:50.579535 systemd-networkd[1554]: cali3a6340ada5f: Link UP Oct 31 01:11:50.581890 systemd-networkd[1554]: cali3a6340ada5f: Gained carrier Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.512 [INFO][4780] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.523 [INFO][4780] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--98l5j-eth0 coredns-66bc5c9577- kube-system d59878ec-d6a3-4ecc-a664-a6bb44484f80 846 0 2025-10-31 01:11:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-98l5j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3a6340ada5f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" Namespace="kube-system" 
Pod="coredns-66bc5c9577-98l5j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--98l5j-" Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.523 [INFO][4780] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" Namespace="kube-system" Pod="coredns-66bc5c9577-98l5j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--98l5j-eth0" Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.542 [INFO][4815] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" HandleID="k8s-pod-network.ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" Workload="localhost-k8s-coredns--66bc5c9577--98l5j-eth0" Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.542 [INFO][4815] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" HandleID="k8s-pod-network.ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" Workload="localhost-k8s-coredns--66bc5c9577--98l5j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-98l5j", "timestamp":"2025-10-31 01:11:50.542069155 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.542 [INFO][4815] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.542 [INFO][4815] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.542 [INFO][4815] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.546 [INFO][4815] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" host="localhost" Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.548 [INFO][4815] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.552 [INFO][4815] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.553 [INFO][4815] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.554 [INFO][4815] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.554 [INFO][4815] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" host="localhost" Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.555 [INFO][4815] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4 Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.564 [INFO][4815] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" host="localhost" Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.573 [INFO][4815] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" host="localhost" Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.573 [INFO][4815] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" host="localhost" Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.574 [INFO][4815] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:11:50.590606 containerd[1680]: 2025-10-31 01:11:50.574 [INFO][4815] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" HandleID="k8s-pod-network.ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" Workload="localhost-k8s-coredns--66bc5c9577--98l5j-eth0" Oct 31 01:11:50.592144 containerd[1680]: 2025-10-31 01:11:50.575 [INFO][4780] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" Namespace="kube-system" Pod="coredns-66bc5c9577-98l5j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--98l5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--98l5j-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d59878ec-d6a3-4ecc-a664-a6bb44484f80", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-98l5j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a6340ada5f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:50.592144 containerd[1680]: 2025-10-31 01:11:50.575 [INFO][4780] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" Namespace="kube-system" Pod="coredns-66bc5c9577-98l5j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--98l5j-eth0" Oct 31 01:11:50.592144 containerd[1680]: 2025-10-31 01:11:50.576 [INFO][4780] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a6340ada5f ContainerID="ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" Namespace="kube-system" Pod="coredns-66bc5c9577-98l5j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--98l5j-eth0" Oct 31 01:11:50.592144 containerd[1680]: 2025-10-31 01:11:50.582 
[INFO][4780] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" Namespace="kube-system" Pod="coredns-66bc5c9577-98l5j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--98l5j-eth0" Oct 31 01:11:50.592144 containerd[1680]: 2025-10-31 01:11:50.584 [INFO][4780] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" Namespace="kube-system" Pod="coredns-66bc5c9577-98l5j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--98l5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--98l5j-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d59878ec-d6a3-4ecc-a664-a6bb44484f80", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4", Pod:"coredns-66bc5c9577-98l5j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a6340ada5f", MAC:"d2:53:22:5e:81:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:50.592144 containerd[1680]: 2025-10-31 01:11:50.588 [INFO][4780] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" Namespace="kube-system" Pod="coredns-66bc5c9577-98l5j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--98l5j-eth0" Oct 31 01:11:50.604023 containerd[1680]: time="2025-10-31T01:11:50.603981719Z" level=info msg="connecting to shim ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4" address="unix:///run/containerd/s/322e830bd509c0c6cca021087171519d43d2cdb6bda6ff72969e4faf340fc8df" namespace=k8s.io protocol=ttrpc version=3 Oct 31 01:11:50.619946 systemd[1]: Started 
cri-containerd-ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4.scope - libcontainer container ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4. Oct 31 01:11:50.621006 containerd[1680]: time="2025-10-31T01:11:50.620988309Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:11:50.621240 containerd[1680]: time="2025-10-31T01:11:50.621223048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:11:50.621358 containerd[1680]: time="2025-10-31T01:11:50.621254706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 01:11:50.621386 kubelet[2970]: E1031 01:11:50.621311 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:11:50.621453 kubelet[2970]: E1031 01:11:50.621441 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:11:50.621556 kubelet[2970]: E1031 01:11:50.621539 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5756f8ffdf-n4khn_calico-apiserver(6ebd7d48-8bc4-4814-b512-88ba6c138a34): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:11:50.621716 kubelet[2970]: E1031 01:11:50.621691 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756f8ffdf-n4khn" podUID="6ebd7d48-8bc4-4814-b512-88ba6c138a34" Oct 31 01:11:50.630425 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:11:50.659859 containerd[1680]: time="2025-10-31T01:11:50.659832121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-98l5j,Uid:d59878ec-d6a3-4ecc-a664-a6bb44484f80,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4\"" Oct 31 01:11:50.663889 containerd[1680]: time="2025-10-31T01:11:50.663866834Z" level=info msg="CreateContainer within sandbox \"ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 
01:11:50.669073 containerd[1680]: time="2025-10-31T01:11:50.669040718Z" level=info msg="Container 919d563d1929bd47286bcd794bc5f83a11e8f61570bc4a176ce941ebbcb011e3: CDI devices from CRI Config.CDIDevices: []" Oct 31 01:11:50.673525 containerd[1680]: time="2025-10-31T01:11:50.673505910Z" level=info msg="CreateContainer within sandbox \"ad77122254f7abddb62fb20f1a3fa18258fdca685799357f965f33daf313d0e4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"919d563d1929bd47286bcd794bc5f83a11e8f61570bc4a176ce941ebbcb011e3\"" Oct 31 01:11:50.676735 kubelet[2970]: E1031 01:11:50.676243 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6559f565b6-lxd8t" podUID="4d287e78-d822-498f-92dc-e6aaa22a1cfb" Oct 31 01:11:50.681218 kubelet[2970]: E1031 01:11:50.680978 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4m4m9" podUID="65c6a159-851f-4aa0-86f0-ed319b59746c" Oct 31 01:11:50.681218 kubelet[2970]: E1031 01:11:50.681038 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756f8ffdf-n4khn" podUID="6ebd7d48-8bc4-4814-b512-88ba6c138a34" Oct 31 01:11:50.683150 containerd[1680]: time="2025-10-31T01:11:50.683130513Z" level=info msg="StartContainer for \"919d563d1929bd47286bcd794bc5f83a11e8f61570bc4a176ce941ebbcb011e3\"" Oct 31 01:11:50.686878 containerd[1680]: time="2025-10-31T01:11:50.686839402Z" level=info msg="connecting to shim 919d563d1929bd47286bcd794bc5f83a11e8f61570bc4a176ce941ebbcb011e3" address="unix:///run/containerd/s/322e830bd509c0c6cca021087171519d43d2cdb6bda6ff72969e4faf340fc8df" protocol=ttrpc version=3 Oct 31 01:11:50.689472 systemd-networkd[1554]: cali1e34fa522ad: Link UP Oct 31 01:11:50.689602 systemd-networkd[1554]: cali1e34fa522ad: Gained carrier Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.508 [INFO][4793] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.524 [INFO][4793] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6559f565b6--9xwpc-eth0 calico-apiserver-6559f565b6- calico-apiserver 014b66aa-7ac9-43b5-8a19-b4ecf0978b6c 849 0 
2025-10-31 01:11:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6559f565b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6559f565b6-9xwpc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1e34fa522ad [] [] }} ContainerID="030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" Namespace="calico-apiserver" Pod="calico-apiserver-6559f565b6-9xwpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--6559f565b6--9xwpc-" Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.524 [INFO][4793] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" Namespace="calico-apiserver" Pod="calico-apiserver-6559f565b6-9xwpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--6559f565b6--9xwpc-eth0" Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.557 [INFO][4824] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" HandleID="k8s-pod-network.030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" Workload="localhost-k8s-calico--apiserver--6559f565b6--9xwpc-eth0" Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.558 [INFO][4824] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" HandleID="k8s-pod-network.030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" Workload="localhost-k8s-calico--apiserver--6559f565b6--9xwpc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd7c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6559f565b6-9xwpc", "timestamp":"2025-10-31 01:11:50.557577876 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.558 [INFO][4824] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.574 [INFO][4824] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.574 [INFO][4824] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.647 [INFO][4824] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" host="localhost" Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.652 [INFO][4824] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.654 [INFO][4824] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.655 [INFO][4824] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.656 [INFO][4824] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.656 [INFO][4824] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" host="localhost" Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.657 [INFO][4824] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.661 [INFO][4824] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" host="localhost" Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.669 [INFO][4824] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" host="localhost" Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.669 [INFO][4824] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" host="localhost" Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.670 [INFO][4824] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:11:50.709230 containerd[1680]: 2025-10-31 01:11:50.670 [INFO][4824] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" HandleID="k8s-pod-network.030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" Workload="localhost-k8s-calico--apiserver--6559f565b6--9xwpc-eth0" Oct 31 01:11:50.709665 containerd[1680]: 2025-10-31 01:11:50.681 [INFO][4793] cni-plugin/k8s.go 418: Populated endpoint ContainerID="030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" Namespace="calico-apiserver" Pod="calico-apiserver-6559f565b6-9xwpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--6559f565b6--9xwpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6559f565b6--9xwpc-eth0", GenerateName:"calico-apiserver-6559f565b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"014b66aa-7ac9-43b5-8a19-b4ecf0978b6c", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6559f565b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6559f565b6-9xwpc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e34fa522ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:50.709665 containerd[1680]: 2025-10-31 01:11:50.681 [INFO][4793] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" Namespace="calico-apiserver" Pod="calico-apiserver-6559f565b6-9xwpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--6559f565b6--9xwpc-eth0" Oct 31 01:11:50.709665 containerd[1680]: 2025-10-31 01:11:50.681 [INFO][4793] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e34fa522ad ContainerID="030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" Namespace="calico-apiserver" Pod="calico-apiserver-6559f565b6-9xwpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--6559f565b6--9xwpc-eth0" Oct 31 01:11:50.709665 containerd[1680]: 2025-10-31 01:11:50.691 [INFO][4793] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" Namespace="calico-apiserver" Pod="calico-apiserver-6559f565b6-9xwpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--6559f565b6--9xwpc-eth0" Oct 31 01:11:50.709665 containerd[1680]: 2025-10-31 01:11:50.692 [INFO][4793] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" Namespace="calico-apiserver" Pod="calico-apiserver-6559f565b6-9xwpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--6559f565b6--9xwpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6559f565b6--9xwpc-eth0", GenerateName:"calico-apiserver-6559f565b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"014b66aa-7ac9-43b5-8a19-b4ecf0978b6c", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6559f565b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb", Pod:"calico-apiserver-6559f565b6-9xwpc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e34fa522ad", MAC:"f2:6d:1d:c6:50:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:50.709665 containerd[1680]: 2025-10-31 01:11:50.707 [INFO][4793] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" Namespace="calico-apiserver" Pod="calico-apiserver-6559f565b6-9xwpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--6559f565b6--9xwpc-eth0" Oct 31 01:11:50.709274 systemd[1]: Started cri-containerd-919d563d1929bd47286bcd794bc5f83a11e8f61570bc4a176ce941ebbcb011e3.scope - libcontainer container 919d563d1929bd47286bcd794bc5f83a11e8f61570bc4a176ce941ebbcb011e3. Oct 31 01:11:50.734566 containerd[1680]: time="2025-10-31T01:11:50.734540441Z" level=info msg="connecting to shim 030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb" address="unix:///run/containerd/s/afc0f6c5e5f5a870cf131e1cbd931fe3ca3cad15010f7bad0373c19cdfc2768a" namespace=k8s.io protocol=ttrpc version=3 Oct 31 01:11:50.748906 containerd[1680]: time="2025-10-31T01:11:50.748879328Z" level=info msg="StartContainer for \"919d563d1929bd47286bcd794bc5f83a11e8f61570bc4a176ce941ebbcb011e3\" returns successfully" Oct 31 01:11:50.768170 systemd[1]: Started cri-containerd-030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb.scope - libcontainer container 030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb. 
Oct 31 01:11:50.790612 systemd-networkd[1554]: calica10e7fb5f0: Link UP Oct 31 01:11:50.792651 systemd-networkd[1554]: calica10e7fb5f0: Gained carrier Oct 31 01:11:50.799861 systemd-networkd[1554]: calib968ef53e05: Gained IPv6LL Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.516 [INFO][4786] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.525 [INFO][4786] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--577dc64884--7r78b-eth0 calico-kube-controllers-577dc64884- calico-system 5298c82b-bc3f-4f82-8aba-c9069839de1b 848 0 2025-10-31 01:11:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:577dc64884 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-577dc64884-7r78b eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calica10e7fb5f0 [] [] }} ContainerID="ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" Namespace="calico-system" Pod="calico-kube-controllers-577dc64884-7r78b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--577dc64884--7r78b-" Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.525 [INFO][4786] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" Namespace="calico-system" Pod="calico-kube-controllers-577dc64884-7r78b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--577dc64884--7r78b-eth0" Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.558 [INFO][4818] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" HandleID="k8s-pod-network.ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" Workload="localhost-k8s-calico--kube--controllers--577dc64884--7r78b-eth0" Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.558 [INFO][4818] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" HandleID="k8s-pod-network.ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" Workload="localhost-k8s-calico--kube--controllers--577dc64884--7r78b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-577dc64884-7r78b", "timestamp":"2025-10-31 01:11:50.558074415 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.558 [INFO][4818] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.669 [INFO][4818] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.670 [INFO][4818] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.748 [INFO][4818] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" host="localhost" Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.760 [INFO][4818] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.768 [INFO][4818] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.771 [INFO][4818] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.773 [INFO][4818] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.773 [INFO][4818] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" host="localhost" Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.773 [INFO][4818] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.778 [INFO][4818] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" host="localhost" Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.782 [INFO][4818] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" host="localhost" Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.782 [INFO][4818] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" host="localhost" Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.782 [INFO][4818] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:11:50.805242 containerd[1680]: 2025-10-31 01:11:50.782 [INFO][4818] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" HandleID="k8s-pod-network.ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" Workload="localhost-k8s-calico--kube--controllers--577dc64884--7r78b-eth0" Oct 31 01:11:50.805714 containerd[1680]: 2025-10-31 01:11:50.785 [INFO][4786] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" Namespace="calico-system" Pod="calico-kube-controllers-577dc64884-7r78b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--577dc64884--7r78b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--577dc64884--7r78b-eth0", GenerateName:"calico-kube-controllers-577dc64884-", Namespace:"calico-system", SelfLink:"", UID:"5298c82b-bc3f-4f82-8aba-c9069839de1b", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"577dc64884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-577dc64884-7r78b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calica10e7fb5f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:50.805714 containerd[1680]: 2025-10-31 01:11:50.785 [INFO][4786] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" Namespace="calico-system" Pod="calico-kube-controllers-577dc64884-7r78b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--577dc64884--7r78b-eth0" Oct 31 01:11:50.805714 containerd[1680]: 2025-10-31 01:11:50.785 [INFO][4786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica10e7fb5f0 ContainerID="ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" Namespace="calico-system" Pod="calico-kube-controllers-577dc64884-7r78b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--577dc64884--7r78b-eth0" Oct 31 01:11:50.805714 containerd[1680]: 2025-10-31 01:11:50.791 [INFO][4786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" Namespace="calico-system" Pod="calico-kube-controllers-577dc64884-7r78b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--577dc64884--7r78b-eth0" Oct 31 01:11:50.805714 containerd[1680]: 2025-10-31 01:11:50.793 [INFO][4786] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" Namespace="calico-system" Pod="calico-kube-controllers-577dc64884-7r78b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--577dc64884--7r78b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--577dc64884--7r78b-eth0", GenerateName:"calico-kube-controllers-577dc64884-", Namespace:"calico-system", SelfLink:"", UID:"5298c82b-bc3f-4f82-8aba-c9069839de1b", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 11, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"577dc64884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc", Pod:"calico-kube-controllers-577dc64884-7r78b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calica10e7fb5f0", MAC:"3a:0d:e6:69:d8:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:11:50.805714 containerd[1680]: 2025-10-31 01:11:50.803 [INFO][4786] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" Namespace="calico-system" Pod="calico-kube-controllers-577dc64884-7r78b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--577dc64884--7r78b-eth0" Oct 31 01:11:50.823339 containerd[1680]: time="2025-10-31T01:11:50.823210564Z" level=info msg="connecting to shim ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc" address="unix:///run/containerd/s/622e91f81909730092b6a1aeb573d80d025d74b23107a27c6ecc5b70a7636e9e" namespace=k8s.io protocol=ttrpc version=3 Oct 31 01:11:50.845275 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:11:50.851854 systemd[1]: Started cri-containerd-ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc.scope - libcontainer container ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc. 
Oct 31 01:11:50.867698 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:11:50.909560 containerd[1680]: time="2025-10-31T01:11:50.909394143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6559f565b6-9xwpc,Uid:014b66aa-7ac9-43b5-8a19-b4ecf0978b6c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"030dc9a1a2cfe214d1fb188733183691153cf07d6d14759cac3405569ee053fb\"" Oct 31 01:11:50.912258 containerd[1680]: time="2025-10-31T01:11:50.912197242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:11:50.921601 containerd[1680]: time="2025-10-31T01:11:50.921576675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-577dc64884-7r78b,Uid:5298c82b-bc3f-4f82-8aba-c9069839de1b,Namespace:calico-system,Attempt:0,} returns sandbox id \"ded4e7028c931902cf30545290516362f6827a01a5d92444171d1a2c4b61a0cc\"" Oct 31 01:11:51.119837 systemd-networkd[1554]: cali5695cd4acc3: Gained IPv6LL Oct 31 01:11:51.228766 containerd[1680]: time="2025-10-31T01:11:51.228523637Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:11:51.229281 containerd[1680]: time="2025-10-31T01:11:51.229241505Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:11:51.229331 containerd[1680]: time="2025-10-31T01:11:51.229310360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 01:11:51.229590 kubelet[2970]: E1031 01:11:51.229560 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:11:51.230669 kubelet[2970]: E1031 01:11:51.229596 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:11:51.230669 kubelet[2970]: E1031 01:11:51.229763 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6559f565b6-9xwpc_calico-apiserver(014b66aa-7ac9-43b5-8a19-b4ecf0978b6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:11:51.230669 kubelet[2970]: E1031 01:11:51.229793 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6559f565b6-9xwpc" podUID="014b66aa-7ac9-43b5-8a19-b4ecf0978b6c" Oct 31 01:11:51.232972 containerd[1680]: time="2025-10-31T01:11:51.230302990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:11:51.589294 containerd[1680]: time="2025-10-31T01:11:51.589260621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:11:51.589637 containerd[1680]: time="2025-10-31T01:11:51.589610338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:11:51.589719 containerd[1680]: time="2025-10-31T01:11:51.589691025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 01:11:51.589873 kubelet[2970]: E1031 01:11:51.589825 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:11:51.589915 kubelet[2970]: E1031 01:11:51.589879 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:11:51.589962 kubelet[2970]: E1031 01:11:51.589948 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-577dc64884-7r78b_calico-system(5298c82b-bc3f-4f82-8aba-c9069839de1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:11:51.590027 kubelet[2970]: E1031 01:11:51.589984 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-577dc64884-7r78b" podUID="5298c82b-bc3f-4f82-8aba-c9069839de1b" Oct 31 01:11:51.691946 kubelet[2970]: E1031 01:11:51.691914 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-577dc64884-7r78b" podUID="5298c82b-bc3f-4f82-8aba-c9069839de1b" Oct 31 01:11:51.694998 kubelet[2970]: E1031 01:11:51.694763 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6559f565b6-9xwpc" podUID="014b66aa-7ac9-43b5-8a19-b4ecf0978b6c" Oct 31 01:11:51.695403 kubelet[2970]: E1031 01:11:51.695337 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756f8ffdf-n4khn" podUID="6ebd7d48-8bc4-4814-b512-88ba6c138a34" Oct 31 01:11:51.695913 kubelet[2970]: E1031 01:11:51.695555 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6559f565b6-lxd8t" podUID="4d287e78-d822-498f-92dc-e6aaa22a1cfb" Oct 31 01:11:51.709802 kubelet[2970]: I1031 01:11:51.709661 2970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-98l5j" podStartSLOduration=37.709650712 podStartE2EDuration="37.709650712s" podCreationTimestamp="2025-10-31 01:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:11:51.699317561 +0000 UTC m=+43.378395318" watchObservedRunningTime="2025-10-31 01:11:51.709650712 +0000 UTC m=+43.388728470" Oct 31 01:11:52.207956 systemd-networkd[1554]: cali3a6340ada5f: Gained IPv6LL Oct 31 01:11:52.399854 systemd-networkd[1554]: cali1e34fa522ad: Gained IPv6LL Oct 31 01:11:52.655902 systemd-networkd[1554]: calica10e7fb5f0: Gained IPv6LL Oct 31 01:11:52.696858 kubelet[2970]: E1031 01:11:52.696825 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-577dc64884-7r78b" podUID="5298c82b-bc3f-4f82-8aba-c9069839de1b" Oct 31 01:11:52.697144 kubelet[2970]: E1031 01:11:52.697026 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6559f565b6-9xwpc" podUID="014b66aa-7ac9-43b5-8a19-b4ecf0978b6c" Oct 31 01:11:56.541704 kubelet[2970]: I1031 01:11:56.541636 2970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 01:11:57.703215 systemd-networkd[1554]: vxlan.calico: Link UP Oct 31 01:11:57.703245 systemd-networkd[1554]: vxlan.calico: Gained carrier Oct 31 01:11:58.470743 containerd[1680]: time="2025-10-31T01:11:58.470478234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:11:58.831418 containerd[1680]: time="2025-10-31T01:11:58.831363404Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:11:58.831808 containerd[1680]: time="2025-10-31T01:11:58.831780057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:11:58.831862 containerd[1680]: time="2025-10-31T01:11:58.831839054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 01:11:58.832004 kubelet[2970]: E1031 01:11:58.831954 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:11:58.832004 kubelet[2970]: E1031 01:11:58.831995 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:11:58.833009 kubelet[2970]: E1031 01:11:58.832057 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7fbb589785-6lzvp_calico-system(5d682b35-d05b-4f22-b277-65a604bf6c0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:11:58.833049 containerd[1680]: time="2025-10-31T01:11:58.832944582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:11:59.187463 containerd[1680]: time="2025-10-31T01:11:59.187224864Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:11:59.188032 
containerd[1680]: time="2025-10-31T01:11:59.187883868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 01:11:59.188032 containerd[1680]: time="2025-10-31T01:11:59.187956590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 01:11:59.188153 kubelet[2970]: E1031 01:11:59.188132 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:11:59.188214 kubelet[2970]: E1031 01:11:59.188165 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:11:59.188247 kubelet[2970]: E1031 01:11:59.188223 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7fbb589785-6lzvp_calico-system(5d682b35-d05b-4f22-b277-65a604bf6c0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:11:59.188279 kubelet[2970]: E1031 01:11:59.188259 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fbb589785-6lzvp" podUID="5d682b35-d05b-4f22-b277-65a604bf6c0b" Oct 31 01:11:59.376037 systemd-networkd[1554]: vxlan.calico: Gained IPv6LL Oct 31 01:11:59.468707 containerd[1680]: time="2025-10-31T01:11:59.468314197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:11:59.813043 containerd[1680]: time="2025-10-31T01:11:59.812958922Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:11:59.813390 containerd[1680]: time="2025-10-31T01:11:59.813270424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:11:59.813462 containerd[1680]: time="2025-10-31T01:11:59.813390698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 01:11:59.813512 kubelet[2970]: E1031 01:11:59.813465 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:11:59.813565 kubelet[2970]: E1031 01:11:59.813511 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:11:59.813597 kubelet[2970]: E1031 01:11:59.813561 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-4j67c_calico-system(c7606c94-65d4-44ae-9466-226a1af8c528): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:11:59.814986 containerd[1680]: time="2025-10-31T01:11:59.814922358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:12:00.171281 containerd[1680]: time="2025-10-31T01:12:00.171220549Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:12:00.171741 containerd[1680]: time="2025-10-31T01:12:00.171692197Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:12:00.171809 containerd[1680]: time="2025-10-31T01:12:00.171769833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 01:12:00.171940 kubelet[2970]: E1031 01:12:00.171884 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:12:00.171940 kubelet[2970]: E1031 01:12:00.171935 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:12:00.172434 kubelet[2970]: E1031 01:12:00.171989 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
csi-node-driver-registrar start failed in pod csi-node-driver-4j67c_calico-system(c7606c94-65d4-44ae-9466-226a1af8c528): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 01:12:00.172434 kubelet[2970]: E1031 01:12:00.172019 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j67c" podUID="c7606c94-65d4-44ae-9466-226a1af8c528" Oct 31 01:12:05.472136 containerd[1680]: time="2025-10-31T01:12:05.472073758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:12:05.780788 containerd[1680]: time="2025-10-31T01:12:05.780565331Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:12:05.781027 containerd[1680]: time="2025-10-31T01:12:05.780960242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:12:05.781027 containerd[1680]: time="2025-10-31T01:12:05.781003698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 01:12:05.781221 kubelet[2970]: E1031 01:12:05.781156 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:12:05.781221 kubelet[2970]: E1031 01:12:05.781195 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:12:05.781469 kubelet[2970]: E1031 01:12:05.781332 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-577dc64884-7r78b_calico-system(5298c82b-bc3f-4f82-8aba-c9069839de1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:12:05.781469 kubelet[2970]: E1031 01:12:05.781363 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-577dc64884-7r78b" podUID="5298c82b-bc3f-4f82-8aba-c9069839de1b" Oct 31 01:12:05.781993 containerd[1680]: time="2025-10-31T01:12:05.781774099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:12:06.109033 containerd[1680]: time="2025-10-31T01:12:06.108999305Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:12:06.109448 containerd[1680]: time="2025-10-31T01:12:06.109420946Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 01:12:06.109516 containerd[1680]: time="2025-10-31T01:12:06.109494895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 01:12:06.109940 kubelet[2970]: E1031 01:12:06.109628 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:12:06.109940 kubelet[2970]: E1031 01:12:06.109662 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:12:06.109940 kubelet[2970]: E1031 01:12:06.109751 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4m4m9_calico-system(65c6a159-851f-4aa0-86f0-ed319b59746c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:12:06.109940 kubelet[2970]: E1031 01:12:06.109783 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4m4m9" podUID="65c6a159-851f-4aa0-86f0-ed319b59746c" Oct 31 01:12:06.470025 containerd[1680]: time="2025-10-31T01:12:06.469658964Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:12:06.827758 containerd[1680]: time="2025-10-31T01:12:06.827580548Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:12:06.835053 containerd[1680]: time="2025-10-31T01:12:06.833718822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:12:06.835053 containerd[1680]: time="2025-10-31T01:12:06.833780678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 01:12:06.835053 containerd[1680]: time="2025-10-31T01:12:06.834466186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:12:06.835156 kubelet[2970]: E1031 01:12:06.833897 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:12:06.835156 kubelet[2970]: E1031 01:12:06.833942 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:12:06.835156 kubelet[2970]: E1031 01:12:06.834078 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6559f565b6-lxd8t_calico-apiserver(4d287e78-d822-498f-92dc-e6aaa22a1cfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:12:06.835156 kubelet[2970]: E1031 01:12:06.834104 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6559f565b6-lxd8t" podUID="4d287e78-d822-498f-92dc-e6aaa22a1cfb" Oct 31 01:12:07.160416 containerd[1680]: time="2025-10-31T01:12:07.160329034Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:12:07.160907 containerd[1680]: time="2025-10-31T01:12:07.160875907Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:12:07.160991 containerd[1680]: time="2025-10-31T01:12:07.160939138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active 
requests=0, bytes read=77" Oct 31 01:12:07.161070 kubelet[2970]: E1031 01:12:07.161030 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:12:07.161070 kubelet[2970]: E1031 01:12:07.161062 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:12:07.161171 kubelet[2970]: E1031 01:12:07.161130 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5756f8ffdf-n4khn_calico-apiserver(6ebd7d48-8bc4-4814-b512-88ba6c138a34): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:12:07.161213 kubelet[2970]: E1031 01:12:07.161180 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756f8ffdf-n4khn" podUID="6ebd7d48-8bc4-4814-b512-88ba6c138a34" Oct 31 01:12:07.468867 containerd[1680]: time="2025-10-31T01:12:07.468449910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:12:07.788512 containerd[1680]: time="2025-10-31T01:12:07.788429876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:12:07.789060 containerd[1680]: time="2025-10-31T01:12:07.789004262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:12:07.789213 containerd[1680]: time="2025-10-31T01:12:07.789197118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 01:12:07.789324 kubelet[2970]: E1031 01:12:07.789273 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:12:07.789324 kubelet[2970]: E1031 01:12:07.789306 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:12:07.789549 kubelet[2970]: E1031 01:12:07.789469 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6559f565b6-9xwpc_calico-apiserver(014b66aa-7ac9-43b5-8a19-b4ecf0978b6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:12:07.789549 kubelet[2970]: E1031 01:12:07.789514 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6559f565b6-9xwpc" podUID="014b66aa-7ac9-43b5-8a19-b4ecf0978b6c" Oct 31 01:12:10.469367 kubelet[2970]: E1031 01:12:10.469121 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fbb589785-6lzvp" podUID="5d682b35-d05b-4f22-b277-65a604bf6c0b" Oct 31 01:12:12.469089 kubelet[2970]: E1031 01:12:12.469027 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j67c" podUID="c7606c94-65d4-44ae-9466-226a1af8c528" Oct 31 01:12:15.157157 containerd[1680]: time="2025-10-31T01:12:15.157132514Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa52027747a8cfa4b6de2bedc7f5ec18148e4ea9d3d9f2f043d94fbfd8f38731\" 
id:\"f7cd73caf820137246874c2c4ef1750944221080e23b376abb8aa862563500c1\" pid:5328 exited_at:{seconds:1761873135 nanos:156908354}" Oct 31 01:12:15.979105 systemd[1]: Started sshd@7-139.178.70.100:22-139.178.89.65:36852.service - OpenSSH per-connection server daemon (139.178.89.65:36852). Oct 31 01:12:16.104067 sshd[5349]: Accepted publickey for core from 139.178.89.65 port 36852 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:12:16.106094 sshd-session[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:12:16.109960 systemd-logind[1650]: New session 10 of user core. Oct 31 01:12:16.120975 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 31 01:12:16.553532 sshd[5352]: Connection closed by 139.178.89.65 port 36852 Oct 31 01:12:16.553713 sshd-session[5349]: pam_unix(sshd:session): session closed for user core Oct 31 01:12:16.557645 systemd-logind[1650]: Session 10 logged out. Waiting for processes to exit. Oct 31 01:12:16.558112 systemd[1]: sshd@7-139.178.70.100:22-139.178.89.65:36852.service: Deactivated successfully. Oct 31 01:12:16.559363 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 01:12:16.560402 systemd-logind[1650]: Removed session 10. Oct 31 01:12:18.468163 kubelet[2970]: E1031 01:12:18.467860 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-577dc64884-7r78b" podUID="5298c82b-bc3f-4f82-8aba-c9069839de1b" Oct 31 01:12:19.468203 kubelet[2970]: E1031 01:12:19.468047 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6559f565b6-lxd8t" podUID="4d287e78-d822-498f-92dc-e6aaa22a1cfb" Oct 31 01:12:20.468602 kubelet[2970]: E1031 01:12:20.468328 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4m4m9" podUID="65c6a159-851f-4aa0-86f0-ed319b59746c" Oct 31 01:12:21.468057 kubelet[2970]: E1031 01:12:21.468017 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756f8ffdf-n4khn" podUID="6ebd7d48-8bc4-4814-b512-88ba6c138a34" Oct 31 01:12:21.566240 systemd[1]: Started sshd@8-139.178.70.100:22-139.178.89.65:36866.service - OpenSSH per-connection server daemon (139.178.89.65:36866). Oct 31 01:12:21.613756 sshd[5375]: Accepted publickey for core from 139.178.89.65 port 36866 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:12:21.614639 sshd-session[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:12:21.617940 systemd-logind[1650]: New session 11 of user core. Oct 31 01:12:21.623832 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 31 01:12:21.722907 sshd[5378]: Connection closed by 139.178.89.65 port 36866 Oct 31 01:12:21.723349 sshd-session[5375]: pam_unix(sshd:session): session closed for user core Oct 31 01:12:21.725845 systemd-logind[1650]: Session 11 logged out. Waiting for processes to exit. Oct 31 01:12:21.726006 systemd[1]: sshd@8-139.178.70.100:22-139.178.89.65:36866.service: Deactivated successfully. Oct 31 01:12:21.727360 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 01:12:21.728398 systemd-logind[1650]: Removed session 11. Oct 31 01:12:22.468142 kubelet[2970]: E1031 01:12:22.467674 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6559f565b6-9xwpc" podUID="014b66aa-7ac9-43b5-8a19-b4ecf0978b6c" Oct 31 01:12:22.469324 containerd[1680]: time="2025-10-31T01:12:22.469072359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:12:22.850432 containerd[1680]: time="2025-10-31T01:12:22.850267041Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:12:22.851041 containerd[1680]: time="2025-10-31T01:12:22.851015118Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:12:22.851310 containerd[1680]: time="2025-10-31T01:12:22.851067432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 01:12:22.851872 kubelet[2970]: E1031 01:12:22.851531 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:12:22.851872 kubelet[2970]: E1031 01:12:22.851669 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:12:22.851872 kubelet[2970]: E1031 01:12:22.851833 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7fbb589785-6lzvp_calico-system(5d682b35-d05b-4f22-b277-65a604bf6c0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:12:22.854593 containerd[1680]: time="2025-10-31T01:12:22.854546962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:12:23.184946 containerd[1680]: time="2025-10-31T01:12:23.184869469Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:12:23.185974 containerd[1680]: time="2025-10-31T01:12:23.185569189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 01:12:23.185974 containerd[1680]: time="2025-10-31T01:12:23.185628595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 01:12:23.186041 kubelet[2970]: E1031 01:12:23.185752 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:12:23.186041 kubelet[2970]: E1031 01:12:23.185787 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:12:23.186041 kubelet[2970]: E1031 01:12:23.185850 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7fbb589785-6lzvp_calico-system(5d682b35-d05b-4f22-b277-65a604bf6c0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:12:23.186138 kubelet[2970]: E1031 01:12:23.185882 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fbb589785-6lzvp" podUID="5d682b35-d05b-4f22-b277-65a604bf6c0b" Oct 31 01:12:23.469578 containerd[1680]: time="2025-10-31T01:12:23.469071303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:12:23.791402 containerd[1680]: time="2025-10-31T01:12:23.791199274Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:12:23.798016 containerd[1680]: time="2025-10-31T01:12:23.797781602Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:12:23.798016 containerd[1680]: time="2025-10-31T01:12:23.797834429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 01:12:23.798096 kubelet[2970]: E1031 01:12:23.797912 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:12:23.798096 kubelet[2970]: E1031 01:12:23.797941 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:12:23.798096 kubelet[2970]: E1031 01:12:23.797991 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-4j67c_calico-system(c7606c94-65d4-44ae-9466-226a1af8c528): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:12:23.807696 containerd[1680]: time="2025-10-31T01:12:23.799095329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:12:24.126495 containerd[1680]: time="2025-10-31T01:12:24.126449034Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:12:24.126826 containerd[1680]: time="2025-10-31T01:12:24.126801243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:12:24.126867 containerd[1680]: time="2025-10-31T01:12:24.126860853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 01:12:24.127018 kubelet[2970]: E1031 01:12:24.126986 2970 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:12:24.127055 kubelet[2970]: E1031 01:12:24.127022 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:12:24.127233 kubelet[2970]: E1031 01:12:24.127091 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-4j67c_calico-system(c7606c94-65d4-44ae-9466-226a1af8c528): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 01:12:24.127780 kubelet[2970]: E1031 01:12:24.127334 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j67c" podUID="c7606c94-65d4-44ae-9466-226a1af8c528" Oct 31 01:12:26.734989 systemd[1]: Started sshd@9-139.178.70.100:22-139.178.89.65:52774.service - OpenSSH per-connection server daemon (139.178.89.65:52774). Oct 31 01:12:26.783017 sshd[5390]: Accepted publickey for core from 139.178.89.65 port 52774 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:12:26.783848 sshd-session[5390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:12:26.788975 systemd-logind[1650]: New session 12 of user core. Oct 31 01:12:26.794939 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 31 01:12:26.898534 sshd[5393]: Connection closed by 139.178.89.65 port 52774 Oct 31 01:12:26.898968 sshd-session[5390]: pam_unix(sshd:session): session closed for user core Oct 31 01:12:26.905987 systemd[1]: sshd@9-139.178.70.100:22-139.178.89.65:52774.service: Deactivated successfully. Oct 31 01:12:26.906957 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 01:12:26.907864 systemd-logind[1650]: Session 12 logged out. Waiting for processes to exit. Oct 31 01:12:26.910551 systemd-logind[1650]: Removed session 12. Oct 31 01:12:26.911985 systemd[1]: Started sshd@10-139.178.70.100:22-139.178.89.65:52786.service - OpenSSH per-connection server daemon (139.178.89.65:52786). 
Oct 31 01:12:26.946564 sshd[5405]: Accepted publickey for core from 139.178.89.65 port 52786 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:12:26.946855 sshd-session[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:12:26.949896 systemd-logind[1650]: New session 13 of user core. Oct 31 01:12:26.959945 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 31 01:12:27.084343 sshd[5408]: Connection closed by 139.178.89.65 port 52786 Oct 31 01:12:27.084829 sshd-session[5405]: pam_unix(sshd:session): session closed for user core Oct 31 01:12:27.092171 systemd[1]: sshd@10-139.178.70.100:22-139.178.89.65:52786.service: Deactivated successfully. Oct 31 01:12:27.094291 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 01:12:27.097314 systemd-logind[1650]: Session 13 logged out. Waiting for processes to exit. Oct 31 01:12:27.098570 systemd[1]: Started sshd@11-139.178.70.100:22-139.178.89.65:52798.service - OpenSSH per-connection server daemon (139.178.89.65:52798). Oct 31 01:12:27.103285 systemd-logind[1650]: Removed session 13. Oct 31 01:12:27.167861 sshd[5419]: Accepted publickey for core from 139.178.89.65 port 52798 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:12:27.168767 sshd-session[5419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:12:27.174115 systemd-logind[1650]: New session 14 of user core. Oct 31 01:12:27.177807 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 31 01:12:27.288542 sshd[5422]: Connection closed by 139.178.89.65 port 52798 Oct 31 01:12:27.288878 sshd-session[5419]: pam_unix(sshd:session): session closed for user core Oct 31 01:12:27.291625 systemd[1]: sshd@11-139.178.70.100:22-139.178.89.65:52798.service: Deactivated successfully. Oct 31 01:12:27.293323 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 01:12:27.294224 systemd-logind[1650]: Session 14 logged out. Waiting for processes to exit. Oct 31 01:12:27.295301 systemd-logind[1650]: Removed session 14. 
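Every pull failure recorded above follows the same shape: containerd's fetch against ghcr.io returns 404 Not Found, the PullImage RPC fails with NotFound because the v3.30.4 reference cannot be resolved, and the kubelet records ErrImagePull before putting the container into back-off. One way to confirm from any machine that the tag itself is absent from the registry (rather than a node-side credential or network problem) is to query the OCI registry API directly. This is a minimal sketch, assuming the flatcar/calico packages are public so ghcr.io issues anonymous pull tokens and that the requests library is installed; the repository and tag are taken from the log entries above.

    # Minimal sketch: ask ghcr.io's OCI registry API whether the tag containerd
    # failed to pull actually exists. Assumes the flatcar/calico packages are
    # public (so an anonymous pull token is issued) and that "requests" is
    # installed; repository and tag come from the log entries above.
    import requests

    repo = "flatcar/calico/kube-controllers"
    tag = "v3.30.4"

    token = requests.get(
        "https://ghcr.io/token",
        params={"service": "ghcr.io", "scope": f"repository:{repo}:pull"},
        timeout=10,
    ).json()["token"]

    resp = requests.get(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
                "application/vnd.oci.image.manifest.v1+json",
                "application/vnd.docker.distribution.manifest.v2+json",
            ]),
        },
        timeout=10,
    )
    # 404 here corresponds to the "failed to resolve reference ... not found"
    # errors in the journal; 200 would instead point at a problem on the node.
    print(resp.status_code)

Where they are present on the host, crictl pull or ctr -n k8s.io images pull against the same reference would exercise the same resolution path containerd took in these entries.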
Oct 31 01:12:30.469184 containerd[1680]: time="2025-10-31T01:12:30.469155221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:12:30.797202 containerd[1680]: time="2025-10-31T01:12:30.797120890Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:12:30.800219 containerd[1680]: time="2025-10-31T01:12:30.800196718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:12:30.800493 containerd[1680]: time="2025-10-31T01:12:30.800251305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 01:12:30.800636 kubelet[2970]: E1031 01:12:30.800613 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:12:30.801271 kubelet[2970]: E1031 01:12:30.801082 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:12:30.801361 kubelet[2970]: E1031 01:12:30.801312 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-577dc64884-7r78b_calico-system(5298c82b-bc3f-4f82-8aba-c9069839de1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:12:30.801361 kubelet[2970]: E1031 01:12:30.801340 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-577dc64884-7r78b" podUID="5298c82b-bc3f-4f82-8aba-c9069839de1b" Oct 31 01:12:32.306438 systemd[1]: Started sshd@12-139.178.70.100:22-139.178.89.65:52810.service - OpenSSH per-connection server daemon (139.178.89.65:52810). 
Oct 31 01:12:32.471152 containerd[1680]: time="2025-10-31T01:12:32.471104193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:12:32.803696 containerd[1680]: time="2025-10-31T01:12:32.803670882Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:12:32.826641 containerd[1680]: time="2025-10-31T01:12:32.826614910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:12:32.826713 containerd[1680]: time="2025-10-31T01:12:32.826689107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 01:12:32.826832 kubelet[2970]: E1031 01:12:32.826805 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:12:32.826997 kubelet[2970]: E1031 01:12:32.826842 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:12:32.826997 kubelet[2970]: E1031 01:12:32.826894 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6559f565b6-lxd8t_calico-apiserver(4d287e78-d822-498f-92dc-e6aaa22a1cfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:12:32.826997 kubelet[2970]: E1031 01:12:32.826914 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6559f565b6-lxd8t" podUID="4d287e78-d822-498f-92dc-e6aaa22a1cfb" Oct 31 01:12:32.846233 sshd[5438]: Accepted publickey for core from 139.178.89.65 port 52810 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:12:32.851244 sshd-session[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:12:32.855313 systemd-logind[1650]: New session 15 of user core. Oct 31 01:12:32.860103 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 31 01:12:32.975298 sshd[5441]: Connection closed by 139.178.89.65 port 52810 Oct 31 01:12:32.975736 sshd-session[5438]: pam_unix(sshd:session): session closed for user core Oct 31 01:12:32.978359 systemd-logind[1650]: Session 15 logged out. Waiting for processes to exit. 
Oct 31 01:12:32.978811 systemd[1]: sshd@12-139.178.70.100:22-139.178.89.65:52810.service: Deactivated successfully. Oct 31 01:12:32.980060 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 01:12:32.981046 systemd-logind[1650]: Removed session 15. Oct 31 01:12:34.469098 containerd[1680]: time="2025-10-31T01:12:34.468964234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:12:34.810299 containerd[1680]: time="2025-10-31T01:12:34.810065152Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:12:34.810574 containerd[1680]: time="2025-10-31T01:12:34.810553068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:12:34.810687 containerd[1680]: time="2025-10-31T01:12:34.810605542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 01:12:34.810829 kubelet[2970]: E1031 01:12:34.810800 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:12:34.811435 kubelet[2970]: E1031 01:12:34.810835 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:12:34.811435 kubelet[2970]: E1031 01:12:34.810897 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6559f565b6-9xwpc_calico-apiserver(014b66aa-7ac9-43b5-8a19-b4ecf0978b6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:12:34.811435 kubelet[2970]: E1031 01:12:34.810921 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6559f565b6-9xwpc" podUID="014b66aa-7ac9-43b5-8a19-b4ecf0978b6c" Oct 31 01:12:35.468031 containerd[1680]: time="2025-10-31T01:12:35.467996181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:12:35.790226 containerd[1680]: time="2025-10-31T01:12:35.790159882Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:12:35.790744 containerd[1680]: time="2025-10-31T01:12:35.790713794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 01:12:35.790779 containerd[1680]: time="2025-10-31T01:12:35.790770418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 01:12:35.790985 kubelet[2970]: E1031 01:12:35.790952 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:12:35.791146 kubelet[2970]: E1031 01:12:35.791043 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:12:35.791239 kubelet[2970]: E1031 01:12:35.791224 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4m4m9_calico-system(65c6a159-851f-4aa0-86f0-ed319b59746c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:12:35.791268 kubelet[2970]: E1031 01:12:35.791249 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4m4m9" podUID="65c6a159-851f-4aa0-86f0-ed319b59746c" Oct 31 01:12:35.791453 containerd[1680]: time="2025-10-31T01:12:35.791406072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:12:36.143678 containerd[1680]: time="2025-10-31T01:12:36.143529649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 01:12:36.144093 containerd[1680]: time="2025-10-31T01:12:36.143998323Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:12:36.144093 containerd[1680]: time="2025-10-31T01:12:36.144006476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 01:12:36.144276 kubelet[2970]: E1031 01:12:36.144239 2970 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:12:36.144478 
kubelet[2970]: E1031 01:12:36.144284 2970 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:12:36.144478 kubelet[2970]: E1031 01:12:36.144341 2970 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5756f8ffdf-n4khn_calico-apiserver(6ebd7d48-8bc4-4814-b512-88ba6c138a34): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:12:36.144478 kubelet[2970]: E1031 01:12:36.144367 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756f8ffdf-n4khn" podUID="6ebd7d48-8bc4-4814-b512-88ba6c138a34" Oct 31 01:12:36.468111 kubelet[2970]: E1031 01:12:36.467999 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fbb589785-6lzvp" podUID="5d682b35-d05b-4f22-b277-65a604bf6c0b" Oct 31 01:12:37.469023 kubelet[2970]: E1031 01:12:37.468903 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j67c" 
podUID="c7606c94-65d4-44ae-9466-226a1af8c528" Oct 31 01:12:37.987745 systemd[1]: Started sshd@13-139.178.70.100:22-139.178.89.65:42736.service - OpenSSH per-connection server daemon (139.178.89.65:42736). Oct 31 01:12:38.041970 sshd[5461]: Accepted publickey for core from 139.178.89.65 port 42736 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:12:38.042783 sshd-session[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:12:38.046393 systemd-logind[1650]: New session 16 of user core. Oct 31 01:12:38.052983 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 31 01:12:38.168873 sshd[5464]: Connection closed by 139.178.89.65 port 42736 Oct 31 01:12:38.170037 sshd-session[5461]: pam_unix(sshd:session): session closed for user core Oct 31 01:12:38.174482 systemd-logind[1650]: Session 16 logged out. Waiting for processes to exit. Oct 31 01:12:38.175046 systemd[1]: sshd@13-139.178.70.100:22-139.178.89.65:42736.service: Deactivated successfully. Oct 31 01:12:38.176685 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 01:12:38.178506 systemd-logind[1650]: Removed session 16. Oct 31 01:12:42.468197 kubelet[2970]: E1031 01:12:42.468151 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-577dc64884-7r78b" podUID="5298c82b-bc3f-4f82-8aba-c9069839de1b" Oct 31 01:12:43.181918 systemd[1]: Started sshd@14-139.178.70.100:22-139.178.89.65:42752.service - OpenSSH per-connection server daemon (139.178.89.65:42752). Oct 31 01:12:43.223141 sshd[5475]: Accepted publickey for core from 139.178.89.65 port 42752 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:12:43.223999 sshd-session[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:12:43.227570 systemd-logind[1650]: New session 17 of user core. Oct 31 01:12:43.240904 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 31 01:12:43.342118 sshd[5478]: Connection closed by 139.178.89.65 port 42752 Oct 31 01:12:43.342486 sshd-session[5475]: pam_unix(sshd:session): session closed for user core Oct 31 01:12:43.350580 systemd[1]: sshd@14-139.178.70.100:22-139.178.89.65:42752.service: Deactivated successfully. Oct 31 01:12:43.351989 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 01:12:43.352607 systemd-logind[1650]: Session 17 logged out. Waiting for processes to exit. Oct 31 01:12:43.354656 systemd[1]: Started sshd@15-139.178.70.100:22-139.178.89.65:42760.service - OpenSSH per-connection server daemon (139.178.89.65:42760). Oct 31 01:12:43.356619 systemd-logind[1650]: Removed session 17. Oct 31 01:12:43.394669 sshd[5490]: Accepted publickey for core from 139.178.89.65 port 42760 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:12:43.395525 sshd-session[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:12:43.399004 systemd-logind[1650]: New session 18 of user core. 
Oct 31 01:12:43.409829 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 31 01:12:43.469240 kubelet[2970]: E1031 01:12:43.469156 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6559f565b6-lxd8t" podUID="4d287e78-d822-498f-92dc-e6aaa22a1cfb" Oct 31 01:12:43.785940 sshd[5493]: Connection closed by 139.178.89.65 port 42760 Oct 31 01:12:43.786631 sshd-session[5490]: pam_unix(sshd:session): session closed for user core Oct 31 01:12:43.795278 systemd[1]: sshd@15-139.178.70.100:22-139.178.89.65:42760.service: Deactivated successfully. Oct 31 01:12:43.796538 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 01:12:43.797054 systemd-logind[1650]: Session 18 logged out. Waiting for processes to exit. Oct 31 01:12:43.798069 systemd-logind[1650]: Removed session 18. Oct 31 01:12:43.799233 systemd[1]: Started sshd@16-139.178.70.100:22-139.178.89.65:42774.service - OpenSSH per-connection server daemon (139.178.89.65:42774). Oct 31 01:12:43.855138 sshd[5504]: Accepted publickey for core from 139.178.89.65 port 42774 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:12:43.855992 sshd-session[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:12:43.859527 systemd-logind[1650]: New session 19 of user core. Oct 31 01:12:43.867911 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 31 01:12:44.470025 sshd[5507]: Connection closed by 139.178.89.65 port 42774 Oct 31 01:12:44.471216 sshd-session[5504]: pam_unix(sshd:session): session closed for user core Oct 31 01:12:44.476564 systemd[1]: sshd@16-139.178.70.100:22-139.178.89.65:42774.service: Deactivated successfully. Oct 31 01:12:44.478542 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 01:12:44.480052 systemd-logind[1650]: Session 19 logged out. Waiting for processes to exit. Oct 31 01:12:44.482415 systemd[1]: Started sshd@17-139.178.70.100:22-139.178.89.65:42786.service - OpenSSH per-connection server daemon (139.178.89.65:42786). Oct 31 01:12:44.484919 systemd-logind[1650]: Removed session 19. Oct 31 01:12:44.537224 sshd[5525]: Accepted publickey for core from 139.178.89.65 port 42786 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:12:44.538079 sshd-session[5525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:12:44.541109 systemd-logind[1650]: New session 20 of user core. Oct 31 01:12:44.543873 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 31 01:12:44.838078 sshd[5528]: Connection closed by 139.178.89.65 port 42786 Oct 31 01:12:44.838308 sshd-session[5525]: pam_unix(sshd:session): session closed for user core Oct 31 01:12:44.845370 systemd[1]: sshd@17-139.178.70.100:22-139.178.89.65:42786.service: Deactivated successfully. Oct 31 01:12:44.846599 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 01:12:44.847861 systemd-logind[1650]: Session 20 logged out. Waiting for processes to exit. 
Oct 31 01:12:44.851263 systemd[1]: Started sshd@18-139.178.70.100:22-139.178.89.65:42794.service - OpenSSH per-connection server daemon (139.178.89.65:42794). Oct 31 01:12:44.852678 systemd-logind[1650]: Removed session 20. Oct 31 01:12:44.915575 sshd[5537]: Accepted publickey for core from 139.178.89.65 port 42794 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:12:44.916703 sshd-session[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:12:44.922242 systemd-logind[1650]: New session 21 of user core. Oct 31 01:12:44.927842 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 31 01:12:45.045529 sshd[5540]: Connection closed by 139.178.89.65 port 42794 Oct 31 01:12:45.045620 sshd-session[5537]: pam_unix(sshd:session): session closed for user core Oct 31 01:12:45.049011 systemd[1]: sshd@18-139.178.70.100:22-139.178.89.65:42794.service: Deactivated successfully. Oct 31 01:12:45.050456 systemd[1]: session-21.scope: Deactivated successfully. Oct 31 01:12:45.051687 systemd-logind[1650]: Session 21 logged out. Waiting for processes to exit. Oct 31 01:12:45.053421 systemd-logind[1650]: Removed session 21. Oct 31 01:12:45.184268 containerd[1680]: time="2025-10-31T01:12:45.183962491Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa52027747a8cfa4b6de2bedc7f5ec18148e4ea9d3d9f2f043d94fbfd8f38731\" id:\"3a9231f8716671c6f383a5999ee5a54ac6fb83c6083bf05d2c6b0af18c13d559\" pid:5563 exited_at:{seconds:1761873165 nanos:172178988}" Oct 31 01:12:47.468180 kubelet[2970]: E1031 01:12:47.468049 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4m4m9" podUID="65c6a159-851f-4aa0-86f0-ed319b59746c" Oct 31 01:12:47.468706 kubelet[2970]: E1031 01:12:47.468399 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6559f565b6-9xwpc" podUID="014b66aa-7ac9-43b5-8a19-b4ecf0978b6c" Oct 31 01:12:48.468858 kubelet[2970]: E1031 01:12:48.468826 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4j67c" podUID="c7606c94-65d4-44ae-9466-226a1af8c528" Oct 31 01:12:48.469559 kubelet[2970]: E1031 01:12:48.469522 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fbb589785-6lzvp" podUID="5d682b35-d05b-4f22-b277-65a604bf6c0b" Oct 31 01:12:50.056509 systemd[1]: Started sshd@19-139.178.70.100:22-139.178.89.65:50876.service - OpenSSH per-connection server daemon (139.178.89.65:50876). Oct 31 01:12:50.144632 sshd[5584]: Accepted publickey for core from 139.178.89.65 port 50876 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:12:50.146783 sshd-session[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:12:50.150776 systemd-logind[1650]: New session 22 of user core. Oct 31 01:12:50.153846 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 31 01:12:50.314937 sshd[5587]: Connection closed by 139.178.89.65 port 50876 Oct 31 01:12:50.315461 sshd-session[5584]: pam_unix(sshd:session): session closed for user core Oct 31 01:12:50.318760 systemd-logind[1650]: Session 22 logged out. Waiting for processes to exit. Oct 31 01:12:50.318850 systemd[1]: sshd@19-139.178.70.100:22-139.178.89.65:50876.service: Deactivated successfully. Oct 31 01:12:50.320257 systemd[1]: session-22.scope: Deactivated successfully. Oct 31 01:12:50.321186 systemd-logind[1650]: Removed session 22. Oct 31 01:12:50.468687 kubelet[2970]: E1031 01:12:50.468613 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5756f8ffdf-n4khn" podUID="6ebd7d48-8bc4-4814-b512-88ba6c138a34" Oct 31 01:12:55.325602 systemd[1]: Started sshd@20-139.178.70.100:22-139.178.89.65:50890.service - OpenSSH per-connection server daemon (139.178.89.65:50890). 
Oct 31 01:12:55.400175 sshd[5598]: Accepted publickey for core from 139.178.89.65 port 50890 ssh2: RSA SHA256:E9GqLv6yFiNZfazOGFBvIafNGKEBFs5YqxPFoTxd1zU Oct 31 01:12:55.400548 sshd-session[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:12:55.403918 systemd-logind[1650]: New session 23 of user core. Oct 31 01:12:55.414083 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 31 01:12:55.524401 sshd[5601]: Connection closed by 139.178.89.65 port 50890 Oct 31 01:12:55.524947 sshd-session[5598]: pam_unix(sshd:session): session closed for user core Oct 31 01:12:55.527327 systemd[1]: sshd@20-139.178.70.100:22-139.178.89.65:50890.service: Deactivated successfully. Oct 31 01:12:55.529006 systemd[1]: session-23.scope: Deactivated successfully. Oct 31 01:12:55.529664 systemd-logind[1650]: Session 23 logged out. Waiting for processes to exit. Oct 31 01:12:55.530496 systemd-logind[1650]: Removed session 23. Oct 31 01:12:56.468925 kubelet[2970]: E1031 01:12:56.468801 2970 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-577dc64884-7r78b" podUID="5298c82b-bc3f-4f82-8aba-c9069839de1b"
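Two kinds of kubelet entries alternate for each affected pod throughout this stretch of the journal: the pod_workers.go "Error syncing pod" lines with ImagePullBackOff are logged while the kubelet is waiting out the back-off, and the log.go / kuberuntime lines with ErrImagePull appear each time the back-off expires and containerd actually attempts another pull; the delay between attempts grows after each failure, which is why the retries for a given image spread out over several minutes. The same history surfaces as events on the pod object. A small sketch of reading them with the official Kubernetes Python client, assuming it is installed and a kubeconfig for this cluster is reachable, using the controller pod named in the entries above:

    # Small sketch: read the image-pull events for the calico-kube-controllers pod
    # named in the journal. Assumes the "kubernetes" Python client is installed
    # and a kubeconfig is reachable; the namespace and pod name are copied from
    # the log entries above.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    events = core.list_namespaced_event(
        "calico-system",
        field_selector="involvedObject.name=calico-kube-controllers-577dc64884-7r78b",
    )
    for ev in events.items:
        # Expect Pulling / Failed (ErrImagePull) / BackOff reasons mirroring the journal.
        print(ev.last_timestamp, ev.reason, ev.message)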