Nov 5 15:45:39.598330 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025
Nov 5 15:45:39.598349 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:45:39.598356 kernel: Disabled fast string operations
Nov 5 15:45:39.598360 kernel: BIOS-provided physical RAM map:
Nov 5 15:45:39.598364 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Nov 5 15:45:39.598369 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Nov 5 15:45:39.598375 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Nov 5 15:45:39.598380 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Nov 5 15:45:39.598384 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Nov 5 15:45:39.598389 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Nov 5 15:45:39.598393 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Nov 5 15:45:39.598398 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Nov 5 15:45:39.598402 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Nov 5 15:45:39.598407 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Nov 5 15:45:39.598414 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Nov 5 15:45:39.598419 kernel: NX (Execute Disable) protection: active
Nov 5 15:45:39.598424 kernel: APIC: Static calls initialized
Nov 5 15:45:39.598429 kernel: SMBIOS 2.7 present.
Nov 5 15:45:39.598434 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Nov 5 15:45:39.598439 kernel: DMI: Memory slots populated: 1/128
Nov 5 15:45:39.598445 kernel: vmware: hypercall mode: 0x00
Nov 5 15:45:39.598450 kernel: Hypervisor detected: VMware
Nov 5 15:45:39.598455 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Nov 5 15:45:39.598460 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Nov 5 15:45:39.598641 kernel: vmware: using clock offset of 3321147401 ns
Nov 5 15:45:39.598649 kernel: tsc: Detected 3408.000 MHz processor
Nov 5 15:45:39.598655 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 15:45:39.598661 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 15:45:39.598666 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Nov 5 15:45:39.598674 kernel: total RAM covered: 3072M
Nov 5 15:45:39.598698 kernel: Found optimal setting for mtrr clean up
Nov 5 15:45:39.598704 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Nov 5 15:45:39.598710 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Nov 5 15:45:39.598715 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 15:45:39.598721 kernel: Using GB pages for direct mapping
Nov 5 15:45:39.598726 kernel: ACPI: Early table checksum verification disabled
Nov 5 15:45:39.598732 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Nov 5 15:45:39.598739 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Nov 5 15:45:39.598744 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Nov 5 15:45:39.598750 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Nov 5 15:45:39.598774 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Nov 5 15:45:39.598779 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Nov 5 15:45:39.598786 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Nov 5 15:45:39.598792 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Nov 5 15:45:39.598797 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Nov 5 15:45:39.598803 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Nov 5 15:45:39.598809 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Nov 5 15:45:39.598815 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Nov 5 15:45:39.598821 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Nov 5 15:45:39.598827 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Nov 5 15:45:39.598833 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Nov 5 15:45:39.598838 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Nov 5 15:45:39.598844 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Nov 5 15:45:39.598850 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Nov 5 15:45:39.598855 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Nov 5 15:45:39.598880 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Nov 5 15:45:39.598887 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Nov 5 15:45:39.598892 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Nov 5 15:45:39.598898 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 5 15:45:39.598904 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 5 15:45:39.598910 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Nov 5 15:45:39.598916 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff]
Nov 5 15:45:39.598922 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff]
Nov 5 15:45:39.598929 kernel: Zone ranges:
Nov 5 15:45:39.598935 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 15:45:39.598942 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Nov 5 15:45:39.598947 kernel: Normal empty
Nov 5 15:45:39.598953 kernel: Device empty
Nov 5 15:45:39.598959 kernel: Movable zone start for each node
Nov 5 15:45:39.598964 kernel: Early memory node ranges
Nov 5 15:45:39.598970 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Nov 5 15:45:39.598976 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Nov 5 15:45:39.598982 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Nov 5 15:45:39.598988 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Nov 5 15:45:39.598994 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 15:45:39.599000 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Nov 5 15:45:39.599005 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Nov 5 15:45:39.599011 kernel: ACPI: PM-Timer IO Port: 0x1008
Nov 5 15:45:39.599018 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Nov 5 15:45:39.599024 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Nov 5 15:45:39.599030 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Nov 5 15:45:39.599036 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Nov 5 15:45:39.599041 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Nov 5 15:45:39.599047 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Nov 5 15:45:39.599052 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Nov 5 15:45:39.599058 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Nov 5 15:45:39.599064 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Nov 5 15:45:39.599070 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Nov 5 15:45:39.599075 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Nov 5 15:45:39.599081 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Nov 5 15:45:39.599086 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Nov 5 15:45:39.599092 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Nov 5 15:45:39.599097 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Nov 5 15:45:39.599103 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Nov 5 15:45:39.599110 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Nov 5 15:45:39.599115 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Nov 5 15:45:39.599121 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Nov 5 15:45:39.599126 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Nov 5 15:45:39.599132 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Nov 5 15:45:39.599137 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Nov 5 15:45:39.599142 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Nov 5 15:45:39.599148 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Nov 5 15:45:39.599155 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Nov 5 15:45:39.599161 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Nov 5 15:45:39.599166 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Nov 5 15:45:39.599171 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Nov 5 15:45:39.599177 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Nov 5 15:45:39.599183 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Nov 5 15:45:39.599189 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Nov 5 15:45:39.599194 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Nov 5 15:45:39.599201 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Nov 5 15:45:39.599207 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Nov 5 15:45:39.599213 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Nov 5 15:45:39.599218 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Nov 5 15:45:39.599224 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Nov 5 15:45:39.599230 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Nov 5 15:45:39.599236 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Nov 5 15:45:39.599242 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Nov 5 15:45:39.599252 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Nov 5 15:45:39.599261 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Nov 5 15:45:39.599271 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Nov 5 15:45:39.599281 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Nov 5 15:45:39.599290 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Nov 5 15:45:39.599296 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Nov 5 15:45:39.599302 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Nov 5 15:45:39.599310 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Nov 5 15:45:39.599316 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Nov 5 15:45:39.599321 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Nov 5 15:45:39.599327 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Nov 5 15:45:39.599333 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Nov 5 15:45:39.599339 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Nov 5 15:45:39.599345 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Nov 5 15:45:39.599351 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Nov 5 15:45:39.599361 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Nov 5 15:45:39.599370 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Nov 5 15:45:39.599379 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Nov 5 15:45:39.599388 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Nov 5 15:45:39.599397 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Nov 5 15:45:39.599406 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Nov 5 15:45:39.599412 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Nov 5 15:45:39.599418 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Nov 5 15:45:39.599424 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Nov 5 15:45:39.599432 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Nov 5 15:45:39.599438 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Nov 5 15:45:39.599444 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Nov 5 15:45:39.599450 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Nov 5 15:45:39.599456 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Nov 5 15:45:39.599462 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Nov 5 15:45:39.599474 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Nov 5 15:45:39.599481 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Nov 5 15:45:39.599488 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Nov 5 15:45:39.599493 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Nov 5 15:45:39.599499 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Nov 5 15:45:39.599505 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Nov 5 15:45:39.599512 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Nov 5 15:45:39.599518 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Nov 5 15:45:39.599524 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Nov 5 15:45:39.599530 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Nov 5 15:45:39.599537 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Nov 5 15:45:39.599542 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Nov 5 15:45:39.599548 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Nov 5 15:45:39.599554 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Nov 5 15:45:39.599559 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Nov 5 15:45:39.599565 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Nov 5 15:45:39.599571 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Nov 5 15:45:39.599577 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Nov 5 15:45:39.599584 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Nov 5 15:45:39.599590 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Nov 5 15:45:39.599596 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Nov 5 15:45:39.599602 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Nov 5 15:45:39.599608 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Nov 5 15:45:39.599614 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Nov 5 15:45:39.599620 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Nov 5 15:45:39.599626 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Nov 5 15:45:39.599632 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Nov 5 15:45:39.599639 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Nov 5 15:45:39.599645 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Nov 5 15:45:39.599651 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Nov 5 15:45:39.599657 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Nov 5 15:45:39.599663 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Nov 5 15:45:39.599668 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Nov 5 15:45:39.599674 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Nov 5 15:45:39.599680 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Nov 5 15:45:39.599687 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Nov 5 15:45:39.599693 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Nov 5 15:45:39.599699 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Nov 5 15:45:39.599705 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Nov 5 15:45:39.599711 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Nov 5 15:45:39.599717 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Nov 5 15:45:39.599723 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Nov 5 15:45:39.599728 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Nov 5 15:45:39.599735 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Nov 5 15:45:39.599741 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Nov 5 15:45:39.599747 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Nov 5 15:45:39.599753 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Nov 5 15:45:39.599759 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Nov 5 15:45:39.599765 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Nov 5 15:45:39.599770 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Nov 5 15:45:39.599777 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Nov 5 15:45:39.599783 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Nov 5 15:45:39.599790 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Nov 5 15:45:39.599796 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Nov 5 15:45:39.599802 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Nov 5 15:45:39.599807 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Nov 5 15:45:39.599813 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Nov 5 15:45:39.599819 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Nov 5 15:45:39.599825 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Nov 5 15:45:39.599832 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Nov 5 15:45:39.599838 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 15:45:39.599845 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Nov 5 15:45:39.599851 kernel: TSC deadline timer available
Nov 5 15:45:39.599857 kernel: CPU topo: Max. logical packages: 128
Nov 5 15:45:39.599863 kernel: CPU topo: Max. logical dies: 128
Nov 5 15:45:39.599871 kernel: CPU topo: Max. dies per package: 1
Nov 5 15:45:39.599880 kernel: CPU topo: Max. threads per core: 1
Nov 5 15:45:39.599887 kernel: CPU topo: Num. cores per package: 1
Nov 5 15:45:39.599893 kernel: CPU topo: Num. threads per package: 1
Nov 5 15:45:39.599900 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs
Nov 5 15:45:39.599905 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Nov 5 15:45:39.599911 kernel: Booting paravirtualized kernel on VMware hypervisor
Nov 5 15:45:39.599917 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 15:45:39.599924 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Nov 5 15:45:39.599931 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Nov 5 15:45:39.599938 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Nov 5 15:45:39.599944 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Nov 5 15:45:39.599950 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Nov 5 15:45:39.599956 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Nov 5 15:45:39.599962 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Nov 5 15:45:39.599968 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Nov 5 15:45:39.599976 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Nov 5 15:45:39.599982 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Nov 5 15:45:39.599988 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Nov 5 15:45:39.599994 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Nov 5 15:45:39.600000 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Nov 5 15:45:39.600006 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Nov 5 15:45:39.600012 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Nov 5 15:45:39.600019 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Nov 5 15:45:39.600025 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Nov 5 15:45:39.600031 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Nov 5 15:45:39.600037 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Nov 5 15:45:39.600044 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:45:39.600050 kernel: random: crng init done
Nov 5 15:45:39.600056 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Nov 5 15:45:39.600063 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Nov 5 15:45:39.600070 kernel: printk: log_buf_len min size: 262144 bytes
Nov 5 15:45:39.600075 kernel: printk: log_buf_len: 1048576 bytes
Nov 5 15:45:39.600081 kernel: printk: early log buf free: 245688(93%)
Nov 5 15:45:39.600088 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 15:45:39.600094 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 5 15:45:39.600100 kernel: Fallback order for Node 0: 0
Nov 5 15:45:39.600108 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524157
Nov 5 15:45:39.600114 kernel: Policy zone: DMA32
Nov 5 15:45:39.600120 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 15:45:39.600126 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Nov 5 15:45:39.600135 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 15:45:39.600145 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 15:45:39.600154 kernel: Dynamic Preempt: voluntary
Nov 5 15:45:39.600163 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 15:45:39.600175 kernel: rcu: RCU event tracing is enabled.
Nov 5 15:45:39.600185 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Nov 5 15:45:39.600195 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 15:45:39.600205 kernel: Rude variant of Tasks RCU enabled.
Nov 5 15:45:39.600213 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 15:45:39.600219 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 15:45:39.600225 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Nov 5 15:45:39.600231 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 5 15:45:39.600240 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 5 15:45:39.600246 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 5 15:45:39.600252 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Nov 5 15:45:39.600259 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Nov 5 15:45:39.600265 kernel: Console: colour VGA+ 80x25
Nov 5 15:45:39.600271 kernel: printk: legacy console [tty0] enabled
Nov 5 15:45:39.600277 kernel: printk: legacy console [ttyS0] enabled
Nov 5 15:45:39.600284 kernel: ACPI: Core revision 20240827
Nov 5 15:45:39.600291 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Nov 5 15:45:39.600303 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 15:45:39.600310 kernel: x2apic enabled
Nov 5 15:45:39.600317 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 15:45:39.600323 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 5 15:45:39.600332 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Nov 5 15:45:39.600345 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Nov 5 15:45:39.600354 kernel: Disabled fast string operations
Nov 5 15:45:39.600364 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 5 15:45:39.600372 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 5 15:45:39.600379 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 15:45:39.600385 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Nov 5 15:45:39.600391 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 5 15:45:39.600399 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 5 15:45:39.600405 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 5 15:45:39.600414 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 5 15:45:39.600423 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 5 15:45:39.600433 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 5 15:45:39.600443 kernel: SRBDS: Unknown: Dependent on hypervisor status
Nov 5 15:45:39.600453 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 5 15:45:39.602183 kernel: active return thunk: its_return_thunk
Nov 5 15:45:39.602199 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 5 15:45:39.602205 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 15:45:39.602212 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 15:45:39.602218 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 15:45:39.602224 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 15:45:39.602231 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 5 15:45:39.602240 kernel: Freeing SMP alternatives memory: 32K
Nov 5 15:45:39.602246 kernel: pid_max: default: 131072 minimum: 1024
Nov 5 15:45:39.602253 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 15:45:39.602263 kernel: landlock: Up and running.
Nov 5 15:45:39.602272 kernel: SELinux: Initializing.
Nov 5 15:45:39.602283 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 5 15:45:39.602292 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 5 15:45:39.602303 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Nov 5 15:45:39.602311 kernel: Performance Events: Skylake events, core PMU driver.
Nov 5 15:45:39.602317 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Nov 5 15:45:39.602323 kernel: core: CPUID marked event: 'instructions' unavailable
Nov 5 15:45:39.602329 kernel: core: CPUID marked event: 'bus cycles' unavailable
Nov 5 15:45:39.602335 kernel: core: CPUID marked event: 'cache references' unavailable
Nov 5 15:45:39.602341 kernel: core: CPUID marked event: 'cache misses' unavailable
Nov 5 15:45:39.602349 kernel: core: CPUID marked event: 'branch instructions' unavailable
Nov 5 15:45:39.602355 kernel: core: CPUID marked event: 'branch misses' unavailable
Nov 5 15:45:39.602361 kernel: ... version: 1
Nov 5 15:45:39.602367 kernel: ... bit width: 48
Nov 5 15:45:39.602377 kernel: ... generic registers: 4
Nov 5 15:45:39.602387 kernel: ... value mask: 0000ffffffffffff
Nov 5 15:45:39.602395 kernel: ... max period: 000000007fffffff
Nov 5 15:45:39.602403 kernel: ... fixed-purpose events: 0
Nov 5 15:45:39.602409 kernel: ... event mask: 000000000000000f
Nov 5 15:45:39.602415 kernel: signal: max sigframe size: 1776
Nov 5 15:45:39.602421 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 15:45:39.602428 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 15:45:39.602434 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level
Nov 5 15:45:39.602441 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 5 15:45:39.602448 kernel: smp: Bringing up secondary CPUs ...
Nov 5 15:45:39.602455 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 15:45:39.605497 kernel: .... node #0, CPUs: #1
Nov 5 15:45:39.605517 kernel: Disabled fast string operations
Nov 5 15:45:39.605526 kernel: smp: Brought up 1 node, 2 CPUs
Nov 5 15:45:39.605534 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Nov 5 15:45:39.605541 kernel: Memory: 1946736K/2096628K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 138508K reserved, 0K cma-reserved)
Nov 5 15:45:39.605547 kernel: devtmpfs: initialized
Nov 5 15:45:39.605557 kernel: x86/mm: Memory block size: 128MB
Nov 5 15:45:39.605564 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Nov 5 15:45:39.605570 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 15:45:39.605577 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Nov 5 15:45:39.605583 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 15:45:39.605589 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 15:45:39.605596 kernel: audit: initializing netlink subsys (disabled)
Nov 5 15:45:39.605604 kernel: audit: type=2000 audit(1762357536.310:1): state=initialized audit_enabled=0 res=1
Nov 5 15:45:39.605614 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 15:45:39.605623 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 15:45:39.605633 kernel: cpuidle: using governor menu
Nov 5 15:45:39.605643 kernel: Simple Boot Flag at 0x36 set to 0x80
Nov 5 15:45:39.605654 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 15:45:39.605663 kernel: dca service started, version 1.12.1
Nov 5 15:45:39.605672 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f]
Nov 5 15:45:39.605689 kernel: PCI: Using configuration type 1 for base access
Nov 5 15:45:39.605701 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 15:45:39.605711 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 15:45:39.605718 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 15:45:39.605724 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 15:45:39.605733 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 15:45:39.605742 kernel: ACPI: Added _OSI(Module Device)
Nov 5 15:45:39.605751 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 15:45:39.605762 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 15:45:39.605773 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 15:45:39.605782 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Nov 5 15:45:39.605792 kernel: ACPI: Interpreter enabled
Nov 5 15:45:39.605801 kernel: ACPI: PM: (supports S0 S1 S5)
Nov 5 15:45:39.605811 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 5 15:45:39.605821 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 5 15:45:39.605831 kernel: PCI: Using E820 reservations for host bridge windows
Nov 5 15:45:39.605840 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Nov 5 15:45:39.605846 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Nov 5 15:45:39.605984 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 15:45:39.606082 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Nov 5 15:45:39.606159 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Nov 5 15:45:39.606169 kernel: PCI host bridge to bus 0000:00
Nov 5 15:45:39.606235 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 5 15:45:39.606295 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Nov 5 15:45:39.606353 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 5 15:45:39.606413 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 5 15:45:39.607360 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Nov 5 15:45:39.607430 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Nov 5 15:45:39.607855 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint
Nov 5 15:45:39.607932 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge
Nov 5 15:45:39.608005 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Nov 5 15:45:39.608078 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Nov 5 15:45:39.608152 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint
Nov 5 15:45:39.608223 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f]
Nov 5 15:45:39.608289 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Nov 5 15:45:39.608355 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Nov 5 15:45:39.608422 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Nov 5 15:45:39.608540 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk
Nov 5 15:45:39.608615 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 5 15:45:39.608684 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Nov 5 15:45:39.608749 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Nov 5 15:45:39.608819 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint
Nov 5 15:45:39.608885 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf]
Nov 5 15:45:39.608950 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit]
Nov 5 15:45:39.609023 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint
Nov 5 15:45:39.609089 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f]
Nov 5 15:45:39.609153 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref]
Nov 5 15:45:39.609218 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff]
Nov 5 15:45:39.609283 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref]
Nov 5 15:45:39.609347 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 5 15:45:39.609421 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge
Nov 5 15:45:39.609502 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Nov 5 15:45:39.609573 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Nov 5 15:45:39.609638 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Nov 5 15:45:39.609702 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Nov 5 15:45:39.609770 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Nov 5 15:45:39.609838 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Nov 5 15:45:39.609909 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Nov 5 15:45:39.609974 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Nov 5 15:45:39.610040 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Nov 5 15:45:39.610108 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Nov 5 15:45:39.610174 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Nov 5 15:45:39.610241 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Nov 5 15:45:39.610306 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Nov 5 15:45:39.610371 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Nov 5 15:45:39.610435 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Nov 5 15:45:39.610651 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Nov 5 15:45:39.610720 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Nov 5 15:45:39.610789 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Nov 5 15:45:39.610854 kernel: pci 0000:00:15.2: bridge window [mem 
0xfcd00000-0xfcdfffff] Nov 5 15:45:39.610920 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 5 15:45:39.610985 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.611058 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.611126 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 5 15:45:39.611192 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 5 15:45:39.611256 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 5 15:45:39.611321 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.611389 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.611456 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 5 15:45:39.611529 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 5 15:45:39.611594 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 5 15:45:39.611661 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.611731 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.611797 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 5 15:45:39.611864 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 5 15:45:39.611928 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 5 15:45:39.611992 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.612483 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.612572 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 5 15:45:39.612668 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 5 15:45:39.612739 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 5 15:45:39.612806 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Nov 5 
15:45:39.612876 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.612943 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 5 15:45:39.613008 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 5 15:45:39.613074 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 5 15:45:39.613142 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.613213 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.613279 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 5 15:45:39.613344 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 5 15:45:39.613409 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 5 15:45:39.613518 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.613593 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.613660 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 5 15:45:39.613725 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 5 15:45:39.613810 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 5 15:45:39.613876 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 5 15:45:39.613940 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.614013 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.614079 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 5 15:45:39.614143 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 5 15:45:39.614207 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 5 15:45:39.614272 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 5 15:45:39.614336 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.614408 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe 
Root Port Nov 5 15:45:39.614489 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 5 15:45:39.614559 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 5 15:45:39.614624 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 5 15:45:39.614689 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.614759 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.614828 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 5 15:45:39.614896 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 5 15:45:39.614959 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 5 15:45:39.615024 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.615091 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.615156 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 5 15:45:39.615224 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 5 15:45:39.615288 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 5 15:45:39.615353 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.615421 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.615495 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 5 15:45:39.615561 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 5 15:45:39.615629 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 5 15:45:39.615693 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.615762 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.615827 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 5 15:45:39.615892 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 5 15:45:39.615956 kernel: pci 0000:00:16.7: 
bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 5 15:45:39.616022 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.616094 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.616159 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 5 15:45:39.616224 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 5 15:45:39.616288 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 5 15:45:39.616352 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 5 15:45:39.616420 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.616496 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.616563 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 5 15:45:39.616630 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 5 15:45:39.616695 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 5 15:45:39.616759 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 5 15:45:39.616823 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.616897 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.616965 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 5 15:45:39.617030 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 5 15:45:39.617093 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 5 15:45:39.617157 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 5 15:45:39.617221 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.617288 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.617355 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 5 15:45:39.617419 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 5 15:45:39.617490 
kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 5 15:45:39.617556 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.617626 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.617691 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 5 15:45:39.617758 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 5 15:45:39.617823 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 5 15:45:39.617887 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.617955 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.618020 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 5 15:45:39.618084 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 5 15:45:39.618151 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 5 15:45:39.618216 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.618285 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.618352 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 5 15:45:39.618416 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 5 15:45:39.618490 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 5 15:45:39.618560 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.618629 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.618694 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 5 15:45:39.618758 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 5 15:45:39.618822 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 5 15:45:39.618886 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.618958 kernel: pci 0000:00:18.0: [15ad:07a0] 
type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.619023 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 5 15:45:39.619087 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 5 15:45:39.619152 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 5 15:45:39.619217 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 5 15:45:39.619281 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.619355 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.619420 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 5 15:45:39.619497 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 5 15:45:39.619564 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 5 15:45:39.619629 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 5 15:45:39.619694 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.619765 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.619831 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 5 15:45:39.619899 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 5 15:45:39.619963 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 5 15:45:39.620027 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.620095 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.620162 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 5 15:45:39.620227 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 5 15:45:39.620290 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 5 15:45:39.620354 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.620422 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.620530 
kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 5 15:45:39.620600 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 5 15:45:39.620665 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 5 15:45:39.620731 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.620801 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.620867 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 5 15:45:39.620934 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 5 15:45:39.620998 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 5 15:45:39.621062 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.621129 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.621194 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 5 15:45:39.621259 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 5 15:45:39.621325 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 5 15:45:39.621388 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.621456 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:45:39.623284 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 5 15:45:39.623357 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 5 15:45:39.623425 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 5 15:45:39.623510 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.623580 kernel: pci_bus 0000:01: extended config space not accessible Nov 5 15:45:39.623649 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 5 15:45:39.623717 kernel: pci_bus 0000:02: extended config space not accessible Nov 5 15:45:39.623727 kernel: acpiphp: Slot [32] registered Nov 5 15:45:39.623734 kernel: acpiphp: Slot 
[33] registered Nov 5 15:45:39.623743 kernel: acpiphp: Slot [34] registered Nov 5 15:45:39.623750 kernel: acpiphp: Slot [35] registered Nov 5 15:45:39.623756 kernel: acpiphp: Slot [36] registered Nov 5 15:45:39.623763 kernel: acpiphp: Slot [37] registered Nov 5 15:45:39.623769 kernel: acpiphp: Slot [38] registered Nov 5 15:45:39.623776 kernel: acpiphp: Slot [39] registered Nov 5 15:45:39.623782 kernel: acpiphp: Slot [40] registered Nov 5 15:45:39.623789 kernel: acpiphp: Slot [41] registered Nov 5 15:45:39.623796 kernel: acpiphp: Slot [42] registered Nov 5 15:45:39.623803 kernel: acpiphp: Slot [43] registered Nov 5 15:45:39.623809 kernel: acpiphp: Slot [44] registered Nov 5 15:45:39.623815 kernel: acpiphp: Slot [45] registered Nov 5 15:45:39.623822 kernel: acpiphp: Slot [46] registered Nov 5 15:45:39.623828 kernel: acpiphp: Slot [47] registered Nov 5 15:45:39.623834 kernel: acpiphp: Slot [48] registered Nov 5 15:45:39.623932 kernel: acpiphp: Slot [49] registered Nov 5 15:45:39.623939 kernel: acpiphp: Slot [50] registered Nov 5 15:45:39.623946 kernel: acpiphp: Slot [51] registered Nov 5 15:45:39.623952 kernel: acpiphp: Slot [52] registered Nov 5 15:45:39.623959 kernel: acpiphp: Slot [53] registered Nov 5 15:45:39.623965 kernel: acpiphp: Slot [54] registered Nov 5 15:45:39.623972 kernel: acpiphp: Slot [55] registered Nov 5 15:45:39.623978 kernel: acpiphp: Slot [56] registered Nov 5 15:45:39.623986 kernel: acpiphp: Slot [57] registered Nov 5 15:45:39.623992 kernel: acpiphp: Slot [58] registered Nov 5 15:45:39.623998 kernel: acpiphp: Slot [59] registered Nov 5 15:45:39.624005 kernel: acpiphp: Slot [60] registered Nov 5 15:45:39.624011 kernel: acpiphp: Slot [61] registered Nov 5 15:45:39.624018 kernel: acpiphp: Slot [62] registered Nov 5 15:45:39.624024 kernel: acpiphp: Slot [63] registered Nov 5 15:45:39.624094 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Nov 5 15:45:39.624161 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff 
window] (subtractive decode) Nov 5 15:45:39.624227 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Nov 5 15:45:39.624292 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Nov 5 15:45:39.624357 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Nov 5 15:45:39.624422 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Nov 5 15:45:39.624648 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Nov 5 15:45:39.624721 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Nov 5 15:45:39.624789 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Nov 5 15:45:39.624856 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Nov 5 15:45:39.624922 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 5 15:45:39.624989 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Nov 5 15:45:39.625060 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 5 15:45:39.625128 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 5 15:45:39.625194 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 5 15:45:39.625261 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 5 15:45:39.625328 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 5 15:45:39.625395 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 5 15:45:39.625463 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 5 15:45:39.626298 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 5 15:45:39.626375 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Nov 5 15:45:39.626444 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Nov 5 15:45:39.628302 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Nov 5 15:45:39.628377 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Nov 5 15:45:39.628450 kernel: pci 0000:0b:00.0: 
BAR 3 [io 0x5000-0x500f] Nov 5 15:45:39.628533 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Nov 5 15:45:39.628601 kernel: pci 0000:0b:00.0: supports D1 D2 Nov 5 15:45:39.628668 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 5 15:45:39.628735 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Nov 5 15:45:39.628802 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 5 15:45:39.628874 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 5 15:45:39.628942 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 5 15:45:39.629010 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 5 15:45:39.629078 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 5 15:45:39.629147 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 5 15:45:39.629213 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 5 15:45:39.629284 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 5 15:45:39.629351 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 5 15:45:39.629417 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 5 15:45:39.630382 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 5 15:45:39.630463 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 5 15:45:39.630552 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 5 15:45:39.630625 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 5 15:45:39.630694 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 5 15:45:39.630764 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 5 15:45:39.630831 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 5 15:45:39.630898 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 5 15:45:39.630966 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 5 15:45:39.631035 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 5 15:45:39.631104 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 5 15:45:39.631172 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 5 15:45:39.631239 kernel: pci 
0000:00:18.6: PCI bridge to [bus 21] Nov 5 15:45:39.631305 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 5 15:45:39.631315 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Nov 5 15:45:39.631324 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Nov 5 15:45:39.631331 kernel: ACPI: PCI: Interrupt link LNKB disabled Nov 5 15:45:39.631337 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 5 15:45:39.631344 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Nov 5 15:45:39.631351 kernel: iommu: Default domain type: Translated Nov 5 15:45:39.631357 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 5 15:45:39.631364 kernel: PCI: Using ACPI for IRQ routing Nov 5 15:45:39.631371 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 5 15:45:39.631378 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Nov 5 15:45:39.631385 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Nov 5 15:45:39.631451 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Nov 5 15:45:39.631530 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Nov 5 15:45:39.631595 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 5 15:45:39.631605 kernel: vgaarb: loaded Nov 5 15:45:39.631611 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Nov 5 15:45:39.631620 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Nov 5 15:45:39.631627 kernel: clocksource: Switched to clocksource tsc-early Nov 5 15:45:39.631634 kernel: VFS: Disk quotas dquot_6.6.0 Nov 5 15:45:39.631640 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 5 15:45:39.631647 kernel: pnp: PnP ACPI init Nov 5 15:45:39.631720 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Nov 5 15:45:39.631785 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Nov 5 15:45:39.631845 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Nov 5 
15:45:39.631909 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Nov 5 15:45:39.631973 kernel: pnp 00:06: [dma 2] Nov 5 15:45:39.632040 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Nov 5 15:45:39.632100 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Nov 5 15:45:39.632162 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Nov 5 15:45:39.632171 kernel: pnp: PnP ACPI: found 8 devices Nov 5 15:45:39.632178 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 5 15:45:39.632185 kernel: NET: Registered PF_INET protocol family Nov 5 15:45:39.632191 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 5 15:45:39.632198 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 5 15:45:39.632207 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 5 15:45:39.632213 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 5 15:45:39.632220 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 5 15:45:39.632227 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 5 15:45:39.632233 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 5 15:45:39.632240 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 5 15:45:39.632246 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 5 15:45:39.632254 kernel: NET: Registered PF_XDP protocol family Nov 5 15:45:39.632321 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Nov 5 15:45:39.632388 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 5 15:45:39.632456 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 5 15:45:39.632557 kernel: pci 0000:00:15.5: bridge 
window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 5 15:45:39.632625 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 5 15:45:39.632695 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Nov 5 15:45:39.632762 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Nov 5 15:45:39.632829 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Nov 5 15:45:39.632908 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Nov 5 15:45:39.632977 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Nov 5 15:45:39.633044 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Nov 5 15:45:39.633109 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Nov 5 15:45:39.633179 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Nov 5 15:45:39.633244 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Nov 5 15:45:39.633311 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Nov 5 15:45:39.633377 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Nov 5 15:45:39.633442 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Nov 5 15:45:39.633557 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Nov 5 15:45:39.633639 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Nov 5 15:45:39.633707 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Nov 5 15:45:39.633773 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Nov 5 15:45:39.633841 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 
21] add_size 1000 Nov 5 15:45:39.633909 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Nov 5 15:45:39.633976 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Nov 5 15:45:39.634044 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Nov 5 15:45:39.634110 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:45:39.634175 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Nov 5 15:45:39.634240 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:45:39.634305 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Nov 5 15:45:39.634371 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:45:39.634459 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Nov 5 15:45:39.635044 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:45:39.635142 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Nov 5 15:45:39.635251 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:45:39.635361 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Nov 5 15:45:39.635474 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:45:39.635550 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Nov 5 15:45:39.635648 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:45:39.635722 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Nov 5 15:45:39.635820 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:45:39.635895 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Nov 5 15:45:39.635972 kernel: pci 0000:00:16.6: 
bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.636081 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.636177 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.636274 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.636358 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.636426 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.636503 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.636589 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.636668 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.636739 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.636817 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.636916 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.637021 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.637099 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.637195 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.637297 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.637402 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.637511 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.638125 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.638218 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.638299 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.638369 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.638442 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.638594 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.638667 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.638752 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.638823 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.638894 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.638964 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.639049 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.639141 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.639210 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.639276 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.639340 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.639423 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.639529 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.639632 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.639713 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.639782 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.639867 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.639936 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.640003 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.640071 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.640139 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.640207 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.640273 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.640340 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.640406 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.640674 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.640749 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.640821 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.640887 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.640956 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.641023 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.642064 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.642178 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.642251 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.642317 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.642391 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.642457 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.642535 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.642601 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.642672 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.642737 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.642809 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.642888 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.642956 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space
Nov 5 15:45:39.643021 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign
Nov 5 15:45:39.643087 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Nov 5 15:45:39.643153 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Nov 5 15:45:39.643221 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Nov 5 15:45:39.643285 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Nov 5 15:45:39.643349 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Nov 5 15:45:39.643419 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned
Nov 5 15:45:39.644522 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Nov 5 15:45:39.644612 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Nov 5 15:45:39.644686 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Nov 5 15:45:39.644754 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Nov 5 15:45:39.644824 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Nov 5 15:45:39.644891 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Nov 5 15:45:39.644956 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Nov 5 15:45:39.645021 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Nov 5 15:45:39.645087 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Nov 5 15:45:39.645151 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Nov 5 15:45:39.645219 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Nov 5 15:45:39.645284 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Nov 5 15:45:39.645349 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Nov 5 15:45:39.645414 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Nov 5 15:45:39.647509 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Nov 5 15:45:39.647594 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Nov 5 15:45:39.647664 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Nov 5 15:45:39.647735 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Nov 5 15:45:39.647803 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Nov 5 15:45:39.647868 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Nov 5 15:45:39.647933 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Nov 5 15:45:39.648000 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Nov 5 15:45:39.648065 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Nov 5 15:45:39.648132 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Nov 5 15:45:39.648198 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Nov 5 15:45:39.648262 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Nov 5 15:45:39.648326 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Nov 5 15:45:39.648396 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned
Nov 5 15:45:39.648482 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Nov 5 15:45:39.648555 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Nov 5 15:45:39.648620 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Nov 5 15:45:39.648688 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Nov 5 15:45:39.648754 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Nov 5 15:45:39.648819 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Nov 5 15:45:39.648884 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Nov 5 15:45:39.648965 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Nov 5 15:45:39.649033 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Nov 5 15:45:39.649097 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Nov 5 15:45:39.649160 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Nov 5 15:45:39.649224 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Nov 5 15:45:39.649288 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Nov 5 15:45:39.649351 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Nov 5 15:45:39.649414 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Nov 5 15:45:39.651341 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Nov 5 15:45:39.651421 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Nov 5 15:45:39.651499 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Nov 5 15:45:39.651593 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Nov 5 15:45:39.651677 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Nov 5 15:45:39.651744 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Nov 5 15:45:39.651817 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Nov 5 15:45:39.651884 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Nov 5 15:45:39.651949 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Nov 5 15:45:39.652017 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Nov 5 15:45:39.652082 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Nov 5 15:45:39.652147 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Nov 5 15:45:39.652216 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Nov 5 15:45:39.652282 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Nov 5 15:45:39.652346 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Nov 5 15:45:39.652411 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Nov 5 15:45:39.652591 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Nov 5 15:45:39.652662 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Nov 5 15:45:39.652729 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Nov 5 15:45:39.652798 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Nov 5 15:45:39.652875 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Nov 5 15:45:39.652944 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Nov 5 15:45:39.653010 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Nov 5 15:45:39.653075 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Nov 5 15:45:39.653142 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Nov 5 15:45:39.653207 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Nov 5 15:45:39.653271 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Nov 5 15:45:39.653340 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Nov 5 15:45:39.653406 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Nov 5 15:45:39.653483 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Nov 5 15:45:39.653567 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Nov 5 15:45:39.653635 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Nov 5 15:45:39.653700 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Nov 5 15:45:39.653772 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Nov 5 15:45:39.653837 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Nov 5 15:45:39.653903 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Nov 5 15:45:39.653970 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Nov 5 15:45:39.654036 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Nov 5 15:45:39.654101 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Nov 5 15:45:39.654172 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Nov 5 15:45:39.654239 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Nov 5 15:45:39.654304 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Nov 5 15:45:39.654369 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Nov 5 15:45:39.654435 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Nov 5 15:45:39.655871 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Nov 5 15:45:39.655949 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Nov 5 15:45:39.656018 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Nov 5 15:45:39.656087 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Nov 5 15:45:39.656153 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Nov 5 15:45:39.656219 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Nov 5 15:45:39.656286 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Nov 5 15:45:39.656353 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Nov 5 15:45:39.656421 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Nov 5 15:45:39.656498 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Nov 5 15:45:39.656566 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Nov 5 15:45:39.656632 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Nov 5 15:45:39.656699 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Nov 5 15:45:39.656766 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Nov 5 15:45:39.656835 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Nov 5 15:45:39.656914 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Nov 5 15:45:39.656980 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Nov 5 15:45:39.657045 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Nov 5 15:45:39.657112 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Nov 5 15:45:39.657177 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Nov 5 15:45:39.657246 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Nov 5 15:45:39.657309 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window]
Nov 5 15:45:39.657367 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window]
Nov 5 15:45:39.657425 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window]
Nov 5 15:45:39.658543 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window]
Nov 5 15:45:39.658613 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window]
Nov 5 15:45:39.658683 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff]
Nov 5 15:45:39.658744 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]
Nov 5 15:45:39.658810 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref]
Nov 5 15:45:39.658873 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window]
Nov 5 15:45:39.658948 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window]
Nov 5 15:45:39.659009 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window]
Nov 5 15:45:39.659070 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window]
Nov 5 15:45:39.659134 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window]
Nov 5 15:45:39.659198 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff]
Nov 5 15:45:39.659259 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff]
Nov 5 15:45:39.659318 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]
Nov 5 15:45:39.659380 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff]
Nov 5 15:45:39.659444 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff]
Nov 5 15:45:39.659513 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref]
Nov 5 15:45:39.659603 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff]
Nov 5 15:45:39.659665 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff]
Nov 5 15:45:39.659751 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref]
Nov 5 15:45:39.659837 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff]
Nov 5 15:45:39.659923 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref]
Nov 5 15:45:39.659987 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff]
Nov 5 15:45:39.660046 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref]
Nov 5 15:45:39.660115 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff]
Nov 5 15:45:39.660174 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref]
Nov 5 15:45:39.660241 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff]
Nov 5 15:45:39.660300 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref]
Nov 5 15:45:39.660364 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff]
Nov 5 15:45:39.660423 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref]
Nov 5 15:45:39.660993 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff]
Nov 5 15:45:39.661062 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff]
Nov 5 15:45:39.661209 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref]
Nov 5 15:45:39.661278 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff]
Nov 5 15:45:39.661338 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff]
Nov 5 15:45:39.661397 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref]
Nov 5 15:45:39.661479 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff]
Nov 5 15:45:39.661544 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff]
Nov 5 15:45:39.662385 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref]
Nov 5 15:45:39.662460 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff]
Nov 5 15:45:39.662535 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref]
Nov 5 15:45:39.662604 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff]
Nov 5 15:45:39.662664 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref]
Nov 5 15:45:39.662730 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff]
Nov 5 15:45:39.662789 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref]
Nov 5 15:45:39.662872 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff]
Nov 5 15:45:39.662948 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref]
Nov 5 15:45:39.663012 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff]
Nov 5 15:45:39.663070 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref]
Nov 5 15:45:39.663133 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff]
Nov 5 15:45:39.663191 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff]
Nov 5 15:45:39.663249 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref]
Nov 5 15:45:39.663315 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff]
Nov 5 15:45:39.663374 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff]
Nov 5 15:45:39.663433 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref]
Nov 5 15:45:39.663537 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff]
Nov 5 15:45:39.663598 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff]
Nov 5 15:45:39.663660 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref]
Nov 5 15:45:39.663723 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff]
Nov 5 15:45:39.663781 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref]
Nov 5 15:45:39.663853 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff]
Nov 5 15:45:39.663917 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref]
Nov 5 15:45:39.663980 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff]
Nov 5 15:45:39.664043 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref]
Nov 5 15:45:39.664107 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff]
Nov 5 15:45:39.664167 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref]
Nov 5 15:45:39.664229 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff]
Nov 5 15:45:39.664288 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref]
Nov 5 15:45:39.664353 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]
Nov 5 15:45:39.664413 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff]
Nov 5 15:45:39.664493 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref]
Nov 5 15:45:39.664561 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff]
Nov 5 15:45:39.664620 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff]
Nov 5 15:45:39.664680 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref]
Nov 5 15:45:39.664748 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff]
Nov 5 15:45:39.664806 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref]
Nov 5 15:45:39.664869 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff]
Nov 5 15:45:39.664928 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref]
Nov 5 15:45:39.664992 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff]
Nov 5 15:45:39.665053 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref]
Nov 5 15:45:39.665117 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff]
Nov 5 15:45:39.665177 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref]
Nov 5 15:45:39.665242 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff]
Nov 5 15:45:39.665301 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref]
Nov 5 15:45:39.665368 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff]
Nov 5 15:45:39.665427 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref]
Nov 5 15:45:39.665722 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 5 15:45:39.665735 kernel: PCI: CLS 32 bytes, default 64
Nov 5 15:45:39.665742 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 5 15:45:39.665749 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Nov 5 15:45:39.665758 kernel: clocksource: Switched to clocksource tsc
Nov 5 15:45:39.665765 kernel: Initialise system trusted keyrings
Nov 5 15:45:39.665771 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 5 15:45:39.665778 kernel: Key type asymmetric registered
Nov 5 15:45:39.665785 kernel: Asymmetric key parser 'x509' registered
Nov 5 15:45:39.665791 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 5 15:45:39.665797 kernel: io scheduler mq-deadline registered
Nov 5 15:45:39.665805 kernel: io scheduler kyber registered
Nov 5 15:45:39.665811 kernel: io scheduler bfq registered
Nov 5 15:45:39.665881 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24
Nov 5 15:45:39.665949 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.666038 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Nov 5 15:45:39.666275 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.666367 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Nov 5 15:45:39.666438 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.666552 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Nov 5 15:45:39.666619 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.666689 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Nov 5 15:45:39.666755 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.666826 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Nov 5 15:45:39.666942 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.667007 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Nov 5 15:45:39.667072 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.667137 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Nov 5 15:45:39.667201 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.667269 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Nov 5 15:45:39.667333 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.667399 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Nov 5 15:45:39.667463 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.667742 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Nov 5 15:45:39.667863 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.668307 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Nov 5 15:45:39.668384 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.668880 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Nov 5 15:45:39.668958 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.669044 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Nov 5 15:45:39.669116 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.669195 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Nov 5 15:45:39.669268 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.669341 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Nov 5 15:45:39.669408 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.669497 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Nov 5 15:45:39.669591 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.669786 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Nov 5 15:45:39.669876 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.669947 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Nov 5 15:45:39.670018 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.670097 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Nov 5 15:45:39.670164 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.670250 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Nov 5 15:45:39.670336 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.670430 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Nov 5 15:45:39.670517 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.670587 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Nov 5 15:45:39.670665 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.670734 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Nov 5 15:45:39.670806 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.670874 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Nov 5 15:45:39.670942 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.671012 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Nov 5 15:45:39.671083 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.671179 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Nov 5 15:45:39.671258 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.671327 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Nov 5 15:45:39.671394 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.671461 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Nov 5 15:45:39.671687 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.671762 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Nov 5 15:45:39.671850 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.671953 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Nov 5 15:45:39.672031 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.672143 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Nov 5 15:45:39.672255 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 5 15:45:39.672277 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 5 15:45:39.672286 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 15:45:39.672294 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 5 15:45:39.672301 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Nov 5 15:45:39.672309 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 5 15:45:39.672323 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 5 15:45:39.672399 kernel: rtc_cmos 00:01: registered as rtc0
Nov 5 15:45:39.672477 kernel: rtc_cmos 00:01: setting system clock to 2025-11-05T15:45:38 UTC (1762357538)
Nov 5 15:45:39.672488 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 5 15:45:39.672550 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Nov 5 15:45:39.672561 kernel: intel_pstate: CPU model not supported
Nov 5 15:45:39.672568 kernel: NET: Registered PF_INET6 protocol family
Nov 5 15:45:39.672575 kernel: Segment Routing with IPv6
Nov 5 15:45:39.672582 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 15:45:39.672591 kernel: NET: Registered PF_PACKET protocol family
Nov 5 15:45:39.672599 kernel: Key type dns_resolver registered
Nov 5 15:45:39.672605 kernel: IPI shorthand broadcast: enabled
Nov 5 15:45:39.672612 kernel: sched_clock: Marking stable (1501203517, 175185900)->(1689570651, -13181234)
Nov 5 15:45:39.672619 kernel: registered taskstats version 1
Nov 5 15:45:39.672627 kernel: Loading compiled-in X.509 certificates
Nov 5 15:45:39.672634 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382'
Nov 5 15:45:39.672641 kernel: Demotion targets for Node 0: null
Nov 5 15:45:39.672649 kernel: Key type .fscrypt registered
Nov 5 15:45:39.672656 kernel: Key type fscrypt-provisioning registered
Nov 5 15:45:39.672662 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 15:45:39.672669 kernel: ima: Allocated hash algorithm: sha1
Nov 5 15:45:39.672676 kernel: ima: No architecture policies found
Nov 5 15:45:39.672683 kernel: clk: Disabling unused clocks
Nov 5 15:45:39.672691 kernel: Freeing unused kernel image (initmem) memory: 15964K
Nov 5 15:45:39.672698 kernel: Write protecting the kernel read-only data: 40960k
Nov 5 15:45:39.672705 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 5 15:45:39.672712 kernel: Run /init as init process
Nov 5 15:45:39.672718 kernel: with arguments:
Nov 5 15:45:39.672725 kernel: /init
Nov 5 15:45:39.672732 kernel: with environment:
Nov 5 15:45:39.672743 kernel: HOME=/
Nov 5 15:45:39.672749 kernel: TERM=linux
Nov 5 15:45:39.672757 kernel: SCSI subsystem initialized
Nov 5 15:45:39.672763 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Nov 5 15:45:39.672770 kernel: vmw_pvscsi: using 64bit dma
Nov 5 15:45:39.672777 kernel: vmw_pvscsi: max_id: 16
Nov 5 15:45:39.672784 kernel: vmw_pvscsi: setting ring_pages to 8
Nov 5 15:45:39.672791 kernel: vmw_pvscsi: enabling reqCallThreshold
Nov 5 15:45:39.672800 kernel: vmw_pvscsi: driver-based request coalescing enabled
Nov 5 15:45:39.672807 kernel: vmw_pvscsi: using MSI-X
Nov 5 15:45:39.672935 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Nov 5 15:45:39.673180 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Nov 5 15:45:39.673266 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Nov 5 15:45:39.673830 kernel: sd 0:0:0:0: [sda] 25804800 512-byte logical blocks: (13.2 GB/12.3 GiB)
Nov 5 15:45:39.673997 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 5 15:45:39.674070 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Nov 5 15:45:39.674140 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Nov 5 15:45:39.674210 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Nov 5 15:45:39.674220 kernel: libata version 3.00 loaded.
Nov 5 15:45:39.674227 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 5 15:45:39.674299 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 5 15:45:39.674370 kernel: ata_piix 0000:00:07.1: version 2.13
Nov 5 15:45:39.674443 kernel: scsi host1: ata_piix
Nov 5 15:45:39.674522 kernel: scsi host2: ata_piix
Nov 5 15:45:39.674533 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0
Nov 5 15:45:39.674540 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0
Nov 5 15:45:39.674549 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Nov 5 15:45:39.674687 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Nov 5 15:45:39.674763 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Nov 5 15:45:39.674774 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 5 15:45:39.674781 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 15:45:39.674790 kernel: device-mapper: uevent: version 1.0.3
Nov 5 15:45:39.674799 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 15:45:39.674868 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 5 15:45:39.674878 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 5 15:45:39.674885 kernel: raid6: avx2x4 gen() 47504 MB/s
Nov 5 15:45:39.674892 kernel: raid6: avx2x2 gen() 52229 MB/s
Nov 5 15:45:39.674899 kernel: raid6: avx2x1 gen() 44305 MB/s
Nov 5 15:45:39.674906 kernel: raid6: using algorithm avx2x2 gen() 52229 MB/s
Nov 5 15:45:39.674915 kernel: raid6: ....
xor() 32090 MB/s, rmw enabled Nov 5 15:45:39.674923 kernel: raid6: using avx2x2 recovery algorithm Nov 5 15:45:39.674930 kernel: xor: automatically using best checksumming function avx Nov 5 15:45:39.674937 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 5 15:45:39.674944 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (196) Nov 5 15:45:39.674951 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01 Nov 5 15:45:39.674958 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:45:39.674967 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 5 15:45:39.674974 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 5 15:45:39.674981 kernel: BTRFS info (device dm-0): enabling free space tree Nov 5 15:45:39.674988 kernel: loop: module loaded Nov 5 15:45:39.674995 kernel: loop0: detected capacity change from 0 to 100120 Nov 5 15:45:39.675002 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 15:45:39.675010 systemd[1]: Successfully made /usr/ read-only. Nov 5 15:45:39.675020 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:45:39.675028 systemd[1]: Detected virtualization vmware. Nov 5 15:45:39.675035 systemd[1]: Detected architecture x86-64. Nov 5 15:45:39.675042 systemd[1]: Running in initrd. Nov 5 15:45:39.675049 systemd[1]: No hostname configured, using default hostname. Nov 5 15:45:39.675056 systemd[1]: Hostname set to . Nov 5 15:45:39.675064 systemd[1]: Initializing machine ID from random generator. 
Nov 5 15:45:39.675072 systemd[1]: Queued start job for default target initrd.target. Nov 5 15:45:39.675079 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:45:39.675086 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:45:39.675093 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:45:39.675101 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 5 15:45:39.675108 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:45:39.675117 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 15:45:39.675125 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 15:45:39.675132 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:45:39.675139 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:45:39.675146 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:45:39.675155 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:45:39.675162 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:45:39.675169 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:45:39.675176 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:45:39.675183 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:45:39.675190 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:45:39.675198 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 15:45:39.675206 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Nov 5 15:45:39.675213 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:45:39.675221 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:45:39.675228 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:45:39.675235 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:45:39.675243 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Nov 5 15:45:39.675250 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 15:45:39.675258 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:45:39.675265 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 15:45:39.675272 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 15:45:39.675279 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 15:45:39.675286 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:45:39.675294 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:45:39.675301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:45:39.675310 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 15:45:39.675318 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:45:39.675325 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 15:45:39.675333 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 15:45:39.675341 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Nov 5 15:45:39.675348 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:45:39.675370 systemd-journald[332]: Collecting audit messages is disabled. Nov 5 15:45:39.675388 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 5 15:45:39.675396 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 5 15:45:39.675403 kernel: Bridge firewalling registered Nov 5 15:45:39.675410 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:45:39.675417 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:45:39.675425 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 15:45:39.675433 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:45:39.675441 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:45:39.675448 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:45:39.675456 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:45:39.675473 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:45:39.675483 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 5 15:45:39.675492 systemd-journald[332]: Journal started Nov 5 15:45:39.675509 systemd-journald[332]: Runtime Journal (/run/log/journal/2f5b431ac4154f1c867c9f58e4b4d0b9) is 4.8M, max 38.5M, 33.7M free. Nov 5 15:45:39.607492 systemd-modules-load[334]: Inserted module 'br_netfilter' Nov 5 15:45:39.677199 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:45:39.677575 systemd[1]: Started systemd-journald.service - Journal Service. 
Nov 5 15:45:39.678106 dracut-cmdline[365]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 ip=139.178.70.100::139.178.70.97:28::ens192:off:1.1.1.1:1.0.0.1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 15:45:39.693575 systemd-tmpfiles[383]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 15:45:39.695381 systemd-resolved[363]: Positive Trust Anchors: Nov 5 15:45:39.695388 systemd-resolved[363]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:45:39.695390 systemd-resolved[363]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:45:39.695413 systemd-resolved[363]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:45:39.697650 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:45:39.713001 systemd-resolved[363]: Defaulting to hostname 'linux'. Nov 5 15:45:39.714135 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:45:39.714278 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Nov 5 15:45:39.760486 kernel: Loading iSCSI transport class v2.0-870. Nov 5 15:45:39.775486 kernel: iscsi: registered transport (tcp) Nov 5 15:45:39.801489 kernel: iscsi: registered transport (qla4xxx) Nov 5 15:45:39.801544 kernel: QLogic iSCSI HBA Driver Nov 5 15:45:39.818822 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 15:45:39.831778 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:45:39.833021 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 15:45:39.854936 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 15:45:39.856031 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 5 15:45:39.857524 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 15:45:39.876870 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:45:39.878174 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:45:39.895396 systemd-udevd[616]: Using default interface naming scheme 'v257'. Nov 5 15:45:39.902290 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:45:39.903237 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 15:45:39.917340 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:45:39.918540 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:45:39.923056 dracut-pre-trigger[693]: rd.md=0: removing MD RAID activation Nov 5 15:45:39.940570 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:45:39.942543 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 5 15:45:39.951600 systemd-networkd[723]: lo: Link UP Nov 5 15:45:39.951607 systemd-networkd[723]: lo: Gained carrier Nov 5 15:45:39.952575 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:45:39.952841 systemd[1]: Reached target network.target - Network. Nov 5 15:45:40.028173 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:45:40.029662 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 15:45:40.108279 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Nov 5 15:45:40.117190 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Nov 5 15:45:40.128448 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Nov 5 15:45:40.129346 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 5 15:45:40.141995 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 5 15:45:40.190482 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Nov 5 15:45:40.193477 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Nov 5 15:45:40.195478 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Nov 5 15:45:40.211560 kernel: cryptd: max_cpu_qlen set to 1000 Nov 5 15:45:40.221481 (udev-worker)[773]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Nov 5 15:45:40.226487 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Nov 5 15:45:40.229940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:45:40.230026 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:45:40.230205 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:45:40.232009 kernel: AES CTR mode by8 optimization enabled Nov 5 15:45:40.232129 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 5 15:45:40.232832 systemd-networkd[723]: eth0: Interface name change detected, renamed to ens192. Nov 5 15:45:40.237478 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Nov 5 15:45:40.272548 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:45:40.272924 systemd-networkd[723]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Nov 5 15:45:40.275996 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 5 15:45:40.276117 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 5 15:45:40.277647 systemd-networkd[723]: ens192: Link UP Nov 5 15:45:40.277654 systemd-networkd[723]: ens192: Gained carrier Nov 5 15:45:40.314978 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 15:45:40.315671 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:45:40.315925 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:45:40.316155 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:45:40.316968 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 15:45:40.332729 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:45:41.194127 disk-uuid[793]: Warning: The kernel is still using the old partition table. Nov 5 15:45:41.194127 disk-uuid[793]: The new table will be used at the next reboot or after you Nov 5 15:45:41.194127 disk-uuid[793]: run partprobe(8) or kpartx(8) Nov 5 15:45:41.194127 disk-uuid[793]: The operation has completed successfully. Nov 5 15:45:41.201059 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 5 15:45:41.201118 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 5 15:45:41.201751 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Nov 5 15:45:41.220424 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (884) Nov 5 15:45:41.220449 kernel: BTRFS info (device sda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:45:41.220462 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:45:41.224853 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 5 15:45:41.224881 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 15:45:41.228478 kernel: BTRFS info (device sda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:45:41.229071 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 15:45:41.229862 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 5 15:45:41.348256 ignition[903]: Ignition 2.22.0 Nov 5 15:45:41.348267 ignition[903]: Stage: fetch-offline Nov 5 15:45:41.348294 ignition[903]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:45:41.348301 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 5 15:45:41.348353 ignition[903]: parsed url from cmdline: "" Nov 5 15:45:41.348355 ignition[903]: no config URL provided Nov 5 15:45:41.348358 ignition[903]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:45:41.348363 ignition[903]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:45:41.348790 ignition[903]: config successfully fetched Nov 5 15:45:41.348808 ignition[903]: parsing config with SHA512: 2811c5940018b87493533c989730dc6c6e450b494d4fc1cb3ac41bb2155843ccf276a76918488edbbba1a5c70b904b154f3c0d8d96bf17756bfea023a817a7cf Nov 5 15:45:41.353258 unknown[903]: fetched base config from "system" Nov 5 15:45:41.353264 unknown[903]: fetched user config from "vmware" Nov 5 15:45:41.353477 ignition[903]: fetch-offline: fetch-offline passed Nov 5 15:45:41.353513 ignition[903]: Ignition finished successfully Nov 5 15:45:41.354615 systemd[1]: Finished 
ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:45:41.354998 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 5 15:45:41.355479 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 5 15:45:41.373628 ignition[909]: Ignition 2.22.0 Nov 5 15:45:41.373636 ignition[909]: Stage: kargs Nov 5 15:45:41.373713 ignition[909]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:45:41.373717 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 5 15:45:41.374263 ignition[909]: kargs: kargs passed Nov 5 15:45:41.374294 ignition[909]: Ignition finished successfully Nov 5 15:45:41.375506 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 15:45:41.376353 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 5 15:45:41.391429 ignition[915]: Ignition 2.22.0 Nov 5 15:45:41.391441 ignition[915]: Stage: disks Nov 5 15:45:41.391534 ignition[915]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:45:41.391539 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 5 15:45:41.392122 ignition[915]: disks: disks passed Nov 5 15:45:41.392151 ignition[915]: Ignition finished successfully Nov 5 15:45:41.393030 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 15:45:41.393366 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 15:45:41.393497 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 15:45:41.393670 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:45:41.393844 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:45:41.394029 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:45:41.394703 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Nov 5 15:45:41.422933 systemd-fsck[923]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Nov 5 15:45:41.424348 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 5 15:45:41.425442 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 5 15:45:41.510119 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 15:45:41.510478 kernel: EXT4-fs (sda9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none. Nov 5 15:45:41.510609 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 15:45:41.511865 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:45:41.513503 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 15:45:41.513877 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 5 15:45:41.514065 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 15:45:41.514280 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:45:41.522359 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 15:45:41.523250 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 5 15:45:41.528483 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (931) Nov 5 15:45:41.531476 kernel: BTRFS info (device sda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:45:41.531496 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:45:41.535931 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 5 15:45:41.535953 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 15:45:41.537249 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 5 15:45:41.561602 initrd-setup-root[955]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 15:45:41.564996 initrd-setup-root[962]: cut: /sysroot/etc/group: No such file or directory Nov 5 15:45:41.566888 initrd-setup-root[969]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 15:45:41.569118 initrd-setup-root[976]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 15:45:41.627732 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 5 15:45:41.628619 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 15:45:41.629532 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 15:45:41.639763 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 15:45:41.640491 kernel: BTRFS info (device sda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:45:41.654108 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 5 15:45:41.664109 ignition[1044]: INFO : Ignition 2.22.0 Nov 5 15:45:41.664109 ignition[1044]: INFO : Stage: mount Nov 5 15:45:41.664592 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:45:41.664592 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 5 15:45:41.664802 ignition[1044]: INFO : mount: mount passed Nov 5 15:45:41.664802 ignition[1044]: INFO : Ignition finished successfully Nov 5 15:45:41.665695 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 15:45:41.666630 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 15:45:41.679461 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 5 15:45:41.694477 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1055) Nov 5 15:45:41.696850 kernel: BTRFS info (device sda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:45:41.696870 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:45:41.701069 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 5 15:45:41.701098 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 15:45:41.702046 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 15:45:41.725202 ignition[1072]: INFO : Ignition 2.22.0 Nov 5 15:45:41.725202 ignition[1072]: INFO : Stage: files Nov 5 15:45:41.725603 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:45:41.725603 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 5 15:45:41.725878 ignition[1072]: DEBUG : files: compiled without relabeling support, skipping Nov 5 15:45:41.726462 ignition[1072]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 15:45:41.726462 ignition[1072]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 15:45:41.729723 ignition[1072]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 15:45:41.729884 ignition[1072]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 15:45:41.730117 unknown[1072]: wrote ssh authorized keys file for user: core Nov 5 15:45:41.730302 ignition[1072]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 15:45:41.732090 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 5 15:45:41.732303 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 5 15:45:41.844195 
ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 5 15:45:41.898650 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 5 15:45:41.898968 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 15:45:41.898968 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 15:45:41.898968 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 15:45:41.898968 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 15:45:41.898968 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 15:45:41.900020 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 15:45:41.900020 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 15:45:41.900020 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 15:45:41.900608 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 15:45:41.900608 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 15:45:41.900608 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 5 15:45:41.902784 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 5 15:45:41.902784 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 5 15:45:41.903246 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 5 15:45:42.061655 systemd-networkd[723]: ens192: Gained IPv6LL
Nov 5 15:45:44.094178 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 5 15:45:44.387325 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 5 15:45:44.387325 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Nov 5 15:45:44.388153 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Nov 5 15:45:44.388324 ignition[1072]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 5 15:45:44.388459 ignition[1072]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 15:45:44.388767 ignition[1072]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 15:45:44.388767 ignition[1072]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 5 15:45:44.388767 ignition[1072]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 5 15:45:44.389438 ignition[1072]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 15:45:44.389614 ignition[1072]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 15:45:44.389614 ignition[1072]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 5 15:45:44.389614 ignition[1072]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Nov 5 15:45:44.413653 ignition[1072]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 15:45:44.415685 ignition[1072]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 15:45:44.415856 ignition[1072]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 5 15:45:44.415856 ignition[1072]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 5 15:45:44.415856 ignition[1072]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 5 15:45:44.416851 ignition[1072]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 15:45:44.416851 ignition[1072]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 15:45:44.416851 ignition[1072]: INFO : files: files passed
Nov 5 15:45:44.416851 ignition[1072]: INFO : Ignition finished successfully
Nov 5 15:45:44.416792 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 5 15:45:44.418028 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 5 15:45:44.420533 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 5 15:45:44.431851 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 5 15:45:44.432055 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 15:45:44.435130 initrd-setup-root-after-ignition[1106]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:45:44.435130 initrd-setup-root-after-ignition[1106]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:45:44.436297 initrd-setup-root-after-ignition[1110]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:45:44.436985 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 15:45:44.437371 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 5 15:45:44.438053 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 15:45:44.470273 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 15:45:44.470352 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 15:45:44.470673 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 15:45:44.470800 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 15:45:44.471166 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 15:45:44.471680 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 15:45:44.501409 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 15:45:44.502214 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 15:45:44.524802 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 15:45:44.524928 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:45:44.525126 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:45:44.525345 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 15:45:44.525548 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 15:45:44.525615 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 15:45:44.526072 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 15:45:44.526233 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 15:45:44.526431 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 15:45:44.526656 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 15:45:44.526891 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 15:45:44.527126 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 15:45:44.527373 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 15:45:44.527603 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 15:45:44.527848 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 15:45:44.528083 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 15:45:44.528296 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 15:45:44.528493 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 15:45:44.528558 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 15:45:44.528831 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:45:44.529026 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:45:44.529227 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 15:45:44.529271 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:45:44.529463 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 15:45:44.529534 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 15:45:44.529821 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 15:45:44.529882 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 15:45:44.530169 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 15:45:44.530336 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 15:45:44.530375 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:45:44.530580 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 15:45:44.530793 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 15:45:44.531012 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 15:45:44.531057 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 15:45:44.531207 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 15:45:44.531250 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 15:45:44.531458 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 15:45:44.531537 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 15:45:44.531716 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 15:45:44.531774 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 15:45:44.533559 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 15:45:44.533684 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 15:45:44.533751 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:45:44.534339 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 15:45:44.534444 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 15:45:44.534525 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:45:44.534685 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 15:45:44.534741 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:45:44.535753 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 15:45:44.535814 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 15:45:44.538424 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 15:45:44.547616 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 15:45:44.559486 ignition[1132]: INFO : Ignition 2.22.0
Nov 5 15:45:44.559486 ignition[1132]: INFO : Stage: umount
Nov 5 15:45:44.559486 ignition[1132]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:45:44.559486 ignition[1132]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 5 15:45:44.559486 ignition[1132]: INFO : umount: umount passed
Nov 5 15:45:44.559486 ignition[1132]: INFO : Ignition finished successfully
Nov 5 15:45:44.559230 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 15:45:44.560679 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 15:45:44.562646 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 15:45:44.563218 systemd[1]: Stopped target network.target - Network.
Nov 5 15:45:44.563532 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 15:45:44.563724 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 15:45:44.564014 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 15:45:44.564163 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 15:45:44.564431 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 15:45:44.564632 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 15:45:44.564897 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 15:45:44.564930 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 15:45:44.565390 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 15:45:44.565737 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 15:45:44.572579 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 15:45:44.572659 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 15:45:44.574213 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 15:45:44.574297 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 15:45:44.575305 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 15:45:44.575440 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 15:45:44.575470 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:45:44.576142 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 15:45:44.576237 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 15:45:44.576265 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 15:45:44.576395 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Nov 5 15:45:44.576421 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Nov 5 15:45:44.576597 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 15:45:44.576622 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:45:44.576729 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 15:45:44.576755 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:45:44.576879 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:45:44.592432 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 15:45:44.592736 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:45:44.593193 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 15:45:44.593343 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:45:44.593651 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 15:45:44.593818 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:45:44.593934 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 5 15:45:44.593964 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 15:45:44.594129 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 5 15:45:44.594158 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 5 15:45:44.594309 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 5 15:45:44.594334 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 15:45:44.595889 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 5 15:45:44.596165 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 5 15:45:44.596321 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:45:44.596698 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 5 15:45:44.596862 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:45:44.597188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:45:44.597338 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:45:44.607591 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 5 15:45:44.607658 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 5 15:45:44.621254 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 5 15:45:44.621328 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 5 15:45:44.628349 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 15:45:44.628407 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 15:45:44.628703 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 5 15:45:44.628820 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 15:45:44.628850 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 15:45:44.629434 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 5 15:45:44.649049 systemd[1]: Switching root.
Nov 5 15:45:44.680641 systemd-journald[332]: Journal stopped
Nov 5 15:45:45.851078 systemd-journald[332]: Received SIGTERM from PID 1 (systemd).
Nov 5 15:45:45.851101 kernel: SELinux: policy capability network_peer_controls=1
Nov 5 15:45:45.851109 kernel: SELinux: policy capability open_perms=1
Nov 5 15:45:45.851116 kernel: SELinux: policy capability extended_socket_class=1
Nov 5 15:45:45.851122 kernel: SELinux: policy capability always_check_network=0
Nov 5 15:45:45.851128 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 5 15:45:45.851136 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 5 15:45:45.851142 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 5 15:45:45.851148 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 5 15:45:45.851154 kernel: SELinux: policy capability userspace_initial_context=0
Nov 5 15:45:45.851161 systemd[1]: Successfully loaded SELinux policy in 46.698ms.
Nov 5 15:45:45.851169 kernel: audit: type=1403 audit(1762357545.283:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 5 15:45:45.851176 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.945ms.
Nov 5 15:45:45.851183 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 15:45:45.851191 systemd[1]: Detected virtualization vmware.
Nov 5 15:45:45.851199 systemd[1]: Detected architecture x86-64.
Nov 5 15:45:45.851206 systemd[1]: Detected first boot.
Nov 5 15:45:45.851214 systemd[1]: Initializing machine ID from random generator.
Nov 5 15:45:45.851221 zram_generator::config[1176]: No configuration found.
Nov 5 15:45:45.851325 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Nov 5 15:45:45.851339 kernel: Guest personality initialized and is active
Nov 5 15:45:45.851345 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 5 15:45:45.851352 kernel: Initialized host personality
Nov 5 15:45:45.851359 kernel: NET: Registered PF_VSOCK protocol family
Nov 5 15:45:45.851366 systemd[1]: Populated /etc with preset unit settings.
Nov 5 15:45:45.851374 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Nov 5 15:45:45.851383 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Nov 5 15:45:45.851390 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 5 15:45:45.851397 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 5 15:45:45.851404 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 5 15:45:45.851411 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 5 15:45:45.851419 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 5 15:45:45.851427 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 5 15:45:45.851435 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 5 15:45:45.851442 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 5 15:45:45.851449 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 5 15:45:45.851457 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 5 15:45:45.851634 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 5 15:45:45.851649 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:45:45.851657 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:45:45.851667 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 5 15:45:45.851675 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 5 15:45:45.851683 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 5 15:45:45.851691 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 15:45:45.851698 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 5 15:45:45.851707 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:45:45.851715 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:45:45.851723 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 5 15:45:45.851730 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 5 15:45:45.851737 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 5 15:45:45.851745 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 5 15:45:45.851753 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:45:45.851761 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 15:45:45.851768 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 15:45:45.851776 systemd[1]: Reached target swap.target - Swaps.
Nov 5 15:45:45.851783 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 5 15:45:45.851791 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 5 15:45:45.851799 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 5 15:45:45.851807 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:45:45.851815 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:45:45.851822 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:45:45.851831 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 5 15:45:45.851839 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 5 15:45:45.851846 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 5 15:45:45.851854 systemd[1]: Mounting media.mount - External Media Directory...
Nov 5 15:45:45.851861 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:45:45.851869 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 5 15:45:45.851876 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 5 15:45:45.851885 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 5 15:45:45.851893 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 5 15:45:45.851900 systemd[1]: Reached target machines.target - Containers.
Nov 5 15:45:45.851908 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 5 15:45:45.851916 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Nov 5 15:45:45.851923 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 15:45:45.851931 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 5 15:45:45.851939 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:45:45.851947 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 15:45:45.851954 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:45:45.851962 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 5 15:45:45.851969 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:45:45.851977 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 5 15:45:45.851986 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 5 15:45:45.851993 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 5 15:45:45.852001 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 5 15:45:45.852008 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 5 15:45:45.852016 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:45:45.852024 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 15:45:45.852031 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 15:45:45.852040 kernel: fuse: init (API version 7.41)
Nov 5 15:45:45.852047 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 15:45:45.852055 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 5 15:45:45.852062 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 5 15:45:45.852070 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 15:45:45.852078 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:45:45.852085 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 5 15:45:45.852094 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 5 15:45:45.852102 systemd[1]: Mounted media.mount - External Media Directory.
Nov 5 15:45:45.852109 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 5 15:45:45.852117 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 5 15:45:45.852124 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 5 15:45:45.852132 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:45:45.852141 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 5 15:45:45.852148 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 5 15:45:45.852156 kernel: ACPI: bus type drm_connector registered
Nov 5 15:45:45.852163 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:45:45.852170 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:45:45.852178 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 15:45:45.852185 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 15:45:45.852194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:45:45.852215 systemd-journald[1266]: Collecting audit messages is disabled.
Nov 5 15:45:45.852233 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:45:45.852241 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 5 15:45:45.852250 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 5 15:45:45.852258 systemd-journald[1266]: Journal started
Nov 5 15:45:45.852273 systemd-journald[1266]: Runtime Journal (/run/log/journal/f7a3ceefe0104276bc6fc264e437f42a) is 4.8M, max 38.5M, 33.7M free.
Nov 5 15:45:45.674076 systemd[1]: Queued start job for default target multi-user.target.
Nov 5 15:45:45.681911 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 5 15:45:45.682177 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 5 15:45:45.852758 jq[1246]: true
Nov 5 15:45:45.853489 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 15:45:45.855657 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:45:45.855788 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:45:45.856058 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:45:45.856352 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:45:45.857027 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 5 15:45:45.857705 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 5 15:45:45.859150 jq[1283]: true
Nov 5 15:45:45.867400 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 15:45:45.870732 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 5 15:45:45.873550 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 5 15:45:45.874542 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 5 15:45:45.874651 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 5 15:45:45.874668 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 15:45:45.875329 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 5 15:45:45.875526 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:45:45.883661 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 5 15:45:45.885561 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 5 15:45:45.885689 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 15:45:45.889310 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 5 15:45:45.889570 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 15:45:45.892516 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 15:45:45.896540 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 5 15:45:45.900418 ignition[1294]: Ignition 2.22.0
Nov 5 15:45:45.900797 ignition[1294]: deleting config from guestinfo properties
Nov 5 15:45:45.901615 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 5 15:45:45.902268 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 5 15:45:45.903559 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 5 15:45:45.906010 ignition[1294]: Successfully deleted config
Nov 5 15:45:45.908053 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 5 15:45:45.908448 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Nov 5 15:45:45.912782 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 5 15:45:45.913282 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 5 15:45:45.914210 systemd-journald[1266]: Time spent on flushing to /var/log/journal/f7a3ceefe0104276bc6fc264e437f42a is 36.452ms for 1751 entries.
Nov 5 15:45:45.914210 systemd-journald[1266]: System Journal (/var/log/journal/f7a3ceefe0104276bc6fc264e437f42a) is 8M, max 588.1M, 580.1M free.
Nov 5 15:45:45.971402 systemd-journald[1266]: Received client request to flush runtime journal.
Nov 5 15:45:45.971441 kernel: loop1: detected capacity change from 0 to 110984
Nov 5 15:45:45.971458 kernel: loop2: detected capacity change from 0 to 2960
Nov 5 15:45:45.917861 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 5 15:45:45.932520 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:45:45.971607 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 5 15:45:45.974594 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 15:45:45.976071 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 15:45:45.976689 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 5 15:45:45.977501 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 5 15:45:45.996604 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 15:45:46.000478 kernel: loop3: detected capacity change from 0 to 224512
Nov 5 15:45:46.014225 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Nov 5 15:45:46.014236 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Nov 5 15:45:46.017769 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:45:46.025382 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:45:46.028480 kernel: loop4: detected capacity change from 0 to 128048
Nov 5 15:45:46.044323 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 5 15:45:46.051528 kernel: loop5: detected capacity change from 0 to 110984
Nov 5 15:45:46.063487 kernel: loop6: detected capacity change from 0 to 2960
Nov 5 15:45:46.076509 kernel: loop7: detected capacity change from 0 to 224512
Nov 5 15:45:46.088942 kernel: loop1: detected capacity change from 0 to 128048
Nov 5 15:45:46.095698 systemd-resolved[1339]: Positive Trust Anchors:
Nov 5 15:45:46.095949 systemd-resolved[1339]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 15:45:46.095990 systemd-resolved[1339]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 15:45:46.096114 systemd-resolved[1339]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 15:45:46.100092 systemd-resolved[1339]: Defaulting to hostname 'linux'.
Nov 5 15:45:46.100771 (sd-merge)[1355]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-vmware.raw'.
Nov 5 15:45:46.102319 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 15:45:46.102494 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:45:46.103301 (sd-merge)[1355]: Merged extensions into '/usr'.
Nov 5 15:45:46.107589 systemd[1]: Reload requested from client PID 1325 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 5 15:45:46.107646 systemd[1]: Reloading...
Nov 5 15:45:46.165487 zram_generator::config[1386]: No configuration found.
Nov 5 15:45:46.250216 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Nov 5 15:45:46.299750 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 5 15:45:46.300041 systemd[1]: Reloading finished in 192 ms.
Nov 5 15:45:46.311617 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 5 15:45:46.323420 systemd[1]: Starting ensure-sysext.service...
Nov 5 15:45:46.326197 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 15:45:46.343072 systemd[1]: Reload requested from client PID 1440 ('systemctl') (unit ensure-sysext.service)...
Nov 5 15:45:46.343082 systemd[1]: Reloading...
Nov 5 15:45:46.349396 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 5 15:45:46.349423 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 5 15:45:46.349941 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 5 15:45:46.350165 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 5 15:45:46.351068 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 5 15:45:46.351234 systemd-tmpfiles[1441]: ACLs are not supported, ignoring.
Nov 5 15:45:46.351267 systemd-tmpfiles[1441]: ACLs are not supported, ignoring.
Nov 5 15:45:46.378797 systemd-tmpfiles[1441]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 15:45:46.378804 systemd-tmpfiles[1441]: Skipping /boot
Nov 5 15:45:46.390830 zram_generator::config[1468]: No configuration found.
Nov 5 15:45:46.393442 systemd-tmpfiles[1441]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 15:45:46.393509 systemd-tmpfiles[1441]: Skipping /boot
Nov 5 15:45:46.469907 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Nov 5 15:45:46.516009 systemd[1]: Reloading finished in 172 ms.
Nov 5 15:45:46.537528 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 5 15:45:46.546928 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:45:46.551312 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 15:45:46.552104 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 5 15:45:46.554652 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 5 15:45:46.557761 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 5 15:45:46.559592 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:45:46.560573 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 5 15:45:46.564677 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:45:46.565773 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:45:46.569865 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:45:46.570169 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:45:46.570237 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:45:46.571877 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:45:46.571935 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:45:46.579689 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 15:45:46.579936 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:45:46.579998 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:45:46.583686 systemd[1]: Finished ensure-sysext.service.
Nov 5 15:45:46.592612 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 5 15:45:46.595798 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:45:46.595906 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:45:46.597579 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:45:46.597685 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:45:46.598682 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 15:45:46.598785 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 15:45:46.606237 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 15:45:46.610834 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 5 15:45:46.611300 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:45:46.611530 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:45:46.611996 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 15:45:46.620047 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 5 15:45:46.629325 systemd-udevd[1533]: Using default interface naming scheme 'v257'.
Nov 5 15:45:46.632604 augenrules[1568]: No rules
Nov 5 15:45:46.633213 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 15:45:46.633363 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 15:45:46.668646 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 5 15:45:46.668863 systemd[1]: Reached target time-set.target - System Time Set.
Nov 5 15:45:46.670824 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:45:46.673006 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 15:45:46.677100 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 5 15:45:46.677256 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 5 15:45:46.683178 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:45:46.683190 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:45:46.725344 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 5 15:45:46.741733 systemd-networkd[1579]: lo: Link UP
Nov 5 15:45:46.741737 systemd-networkd[1579]: lo: Gained carrier
Nov 5 15:45:46.742647 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 15:45:46.742791 systemd[1]: Reached target network.target - Network.
Nov 5 15:45:46.744146 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 5 15:45:46.747454 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 5 15:45:46.765996 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 5 15:45:46.803313 kernel: mousedev: PS/2 mouse device common for all mice
Nov 5 15:45:46.803365 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Nov 5 15:45:46.803508 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Nov 5 15:45:46.800846 systemd-networkd[1579]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Nov 5 15:45:46.805973 systemd-networkd[1579]: ens192: Link UP
Nov 5 15:45:46.806198 systemd-networkd[1579]: ens192: Gained carrier
Nov 5 15:45:46.811533 systemd-timesyncd[1546]: Network configuration changed, trying to establish connection.
Nov 5 15:45:46.816380 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 5 15:45:46.823597 kernel: ACPI: button: Power Button [PWRF]
Nov 5 15:45:46.912487 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Nov 5 15:45:46.918620 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Nov 5 15:45:46.920092 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 5 15:45:46.945005 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 5 15:45:47.015526 (udev-worker)[1589]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Nov 5 15:45:47.021323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:45:47.078801 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:45:47.087924 ldconfig[1531]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 5 15:45:47.089454 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 5 15:45:47.090606 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 5 15:45:47.103591 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 5 15:45:47.103851 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 15:45:47.104038 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 5 15:45:47.104193 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 5 15:45:47.104333 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 5 15:45:47.104571 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 5 15:45:47.104747 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 5 15:45:47.104891 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 5 15:45:47.105028 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 5 15:45:47.105051 systemd[1]: Reached target paths.target - Path Units.
Nov 5 15:45:47.105163 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 15:45:47.105794 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 5 15:45:47.106836 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 5 15:45:47.108338 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 5 15:45:47.108613 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 5 15:45:47.108752 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 5 15:45:47.110595 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 5 15:45:47.110870 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 5 15:45:47.111355 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 5 15:45:47.111904 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 15:45:47.112017 systemd[1]: Reached target basic.target - Basic System.
Nov 5 15:45:47.112155 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 5 15:45:47.112174 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 5 15:45:47.112835 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 5 15:45:47.115534 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 5 15:45:47.116265 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 5 15:45:47.117603 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 5 15:45:47.126532 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 5 15:45:47.126647 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 5 15:45:47.127323 jq[1642]: false
Nov 5 15:45:47.127462 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 5 15:45:47.130107 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 5 15:45:47.132510 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 5 15:45:47.133573 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 5 15:45:47.134773 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 5 15:45:47.138022 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 5 15:45:47.138144 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 5 15:45:47.138563 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 5 15:45:47.143405 google_oslogin_nss_cache[1644]: oslogin_cache_refresh[1644]: Refreshing passwd entry cache
Nov 5 15:45:47.141360 oslogin_cache_refresh[1644]: Refreshing passwd entry cache
Nov 5 15:45:47.141563 systemd[1]: Starting update-engine.service - Update Engine...
Nov 5 15:45:47.144582 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 5 15:45:47.149769 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Nov 5 15:45:47.155110 extend-filesystems[1643]: Found /dev/sda6
Nov 5 15:45:47.154814 oslogin_cache_refresh[1644]: Failure getting users, quitting
Nov 5 15:45:47.155356 google_oslogin_nss_cache[1644]: oslogin_cache_refresh[1644]: Failure getting users, quitting
Nov 5 15:45:47.155356 google_oslogin_nss_cache[1644]: oslogin_cache_refresh[1644]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 15:45:47.155356 google_oslogin_nss_cache[1644]: oslogin_cache_refresh[1644]: Refreshing group entry cache
Nov 5 15:45:47.154825 oslogin_cache_refresh[1644]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 15:45:47.154852 oslogin_cache_refresh[1644]: Refreshing group entry cache
Nov 5 15:45:47.156827 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 5 15:45:47.157397 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 5 15:45:47.157529 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 5 15:45:47.157898 systemd[1]: motdgen.service: Deactivated successfully.
Nov 5 15:45:47.158020 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 5 15:45:47.159316 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 5 15:45:47.159433 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 5 15:45:47.162828 google_oslogin_nss_cache[1644]: oslogin_cache_refresh[1644]: Failure getting groups, quitting
Nov 5 15:45:47.162828 google_oslogin_nss_cache[1644]: oslogin_cache_refresh[1644]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 15:45:47.162822 oslogin_cache_refresh[1644]: Failure getting groups, quitting
Nov 5 15:45:47.162829 oslogin_cache_refresh[1644]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 15:45:47.164368 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 5 15:45:47.164876 extend-filesystems[1643]: Found /dev/sda9
Nov 5 15:45:47.165237 jq[1654]: true
Nov 5 15:45:47.166549 extend-filesystems[1643]: Checking size of /dev/sda9
Nov 5 15:45:47.169502 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 5 15:45:47.181975 (ntainerd)[1686]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 5 15:45:47.185536 update_engine[1652]: I20251105 15:45:47.184772 1652 main.cc:92] Flatcar Update Engine starting
Nov 5 15:45:47.188417 extend-filesystems[1643]: Resized partition /dev/sda9
Nov 5 15:45:47.188969 jq[1681]: true
Nov 5 15:45:47.189174 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Nov 5 15:45:47.203615 extend-filesystems[1696]: resize2fs 1.47.3 (8-Jul-2025)
Nov 5 15:45:47.205442 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Nov 5 15:45:47.209479 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 1635323 blocks
Nov 5 15:45:47.215477 kernel: EXT4-fs (sda9): resized filesystem to 1635323
Nov 5 15:45:47.219273 tar[1670]: linux-amd64/LICENSE
Nov 5 15:45:47.223423 unknown[1691]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Nov 5 15:45:47.224709 systemd-logind[1651]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 5 15:45:47.224723 systemd-logind[1651]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 5 15:45:47.225290 systemd-logind[1651]: New seat seat0.
Nov 5 15:45:47.230216 extend-filesystems[1696]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Nov 5 15:45:47.230216 extend-filesystems[1696]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 5 15:45:47.230216 extend-filesystems[1696]: The filesystem on /dev/sda9 is now 1635323 (4k) blocks long.
Nov 5 15:45:47.232517 tar[1670]: linux-amd64/helm
Nov 5 15:45:47.227070 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 5 15:45:47.232574 extend-filesystems[1643]: Resized filesystem in /dev/sda9
Nov 5 15:45:47.227393 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 5 15:45:47.227644 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 5 15:45:47.243071 dbus-daemon[1640]: [system] SELinux support is enabled
Nov 5 15:45:47.243358 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 5 15:45:47.244078 unknown[1691]: Core dump limit set to -1
Nov 5 15:45:47.244958 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 5 15:45:47.244974 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 5 15:45:47.245540 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 5 15:45:47.245552 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 5 15:45:47.256219 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Nov 5 15:45:47.257415 dbus-daemon[1640]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 5 15:45:47.269159 systemd[1]: Started update-engine.service - Update Engine.
Nov 5 15:45:47.271837 update_engine[1652]: I20251105 15:45:47.271545 1652 update_check_scheduler.cc:74] Next update check in 10m58s
Nov 5 15:45:47.279416 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 5 15:45:47.304506 bash[1717]: Updated "/home/core/.ssh/authorized_keys"
Nov 5 15:45:47.303453 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 5 15:45:47.304276 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 5 15:45:47.455504 locksmithd[1718]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 5 15:45:47.471225 sshd_keygen[1687]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 5 15:45:47.477156 containerd[1686]: time="2025-11-05T15:45:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 5 15:45:47.478462 containerd[1686]: time="2025-11-05T15:45:47.478444135Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 5 15:45:47.492165 containerd[1686]: time="2025-11-05T15:45:47.492131739Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.098µs"
Nov 5 15:45:47.492165 containerd[1686]: time="2025-11-05T15:45:47.492157296Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 5 15:45:47.492165 containerd[1686]: time="2025-11-05T15:45:47.492168777Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 5 15:45:47.492566 containerd[1686]: time="2025-11-05T15:45:47.492257996Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 5 15:45:47.492566 containerd[1686]: time="2025-11-05T15:45:47.492279245Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 5 15:45:47.492566 containerd[1686]: time="2025-11-05T15:45:47.492297455Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 15:45:47.492566 containerd[1686]: time="2025-11-05T15:45:47.492334327Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 15:45:47.492566 containerd[1686]: time="2025-11-05T15:45:47.492349479Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 15:45:47.492566 containerd[1686]: time="2025-11-05T15:45:47.492528345Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 15:45:47.492566 containerd[1686]: time="2025-11-05T15:45:47.492540955Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 15:45:47.492566 containerd[1686]: time="2025-11-05T15:45:47.492553500Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 15:45:47.492566 containerd[1686]: time="2025-11-05T15:45:47.492563642Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 5 15:45:47.492698 containerd[1686]: time="2025-11-05T15:45:47.492603019Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 5 15:45:47.492714 containerd[1686]: time="2025-11-05T15:45:47.492708424Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 15:45:47.492729 containerd[1686]: time="2025-11-05T15:45:47.492724383Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 15:45:47.492749 containerd[1686]: time="2025-11-05T15:45:47.492729847Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 5 15:45:47.492749 containerd[1686]: time="2025-11-05T15:45:47.492745786Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 5 15:45:47.492893 containerd[1686]: time="2025-11-05T15:45:47.492879761Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 5 15:45:47.492962 containerd[1686]: time="2025-11-05T15:45:47.492913954Z" level=info msg="metadata content store policy set" policy=shared
Nov 5 15:45:47.502775 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 5 15:45:47.504554 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 5 15:45:47.520684 systemd[1]: issuegen.service: Deactivated successfully.
Nov 5 15:45:47.520822 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 5 15:45:47.522392 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 5 15:45:47.533803 containerd[1686]: time="2025-11-05T15:45:47.531951349Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 5 15:45:47.533803 containerd[1686]: time="2025-11-05T15:45:47.531993897Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 5 15:45:47.533803 containerd[1686]: time="2025-11-05T15:45:47.532003563Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 5 15:45:47.533803 containerd[1686]: time="2025-11-05T15:45:47.532014729Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 5 15:45:47.533803 containerd[1686]: time="2025-11-05T15:45:47.532023629Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 5 15:45:47.533803 containerd[1686]: time="2025-11-05T15:45:47.532029929Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 5 15:45:47.533803 containerd[1686]: time="2025-11-05T15:45:47.532038085Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 5 15:45:47.533803 containerd[1686]: time="2025-11-05T15:45:47.532072183Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 5 15:45:47.533803 containerd[1686]: time="2025-11-05T15:45:47.532078588Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 5 15:45:47.533803 containerd[1686]: time="2025-11-05T15:45:47.532087581Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 5 15:45:47.533803 containerd[1686]: time="2025-11-05T15:45:47.532093227Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 5 15:45:47.533803 containerd[1686]: time="2025-11-05T15:45:47.532169868Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 5 15:45:47.533803 containerd[1686]: time="2025-11-05T15:45:47.532238061Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 5 15:45:47.533803 containerd[1686]: time="2025-11-05T15:45:47.532250762Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 5 15:45:47.534033 containerd[1686]: time="2025-11-05T15:45:47.532259386Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 5 15:45:47.534033 containerd[1686]: time="2025-11-05T15:45:47.532265565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 5 15:45:47.534033 containerd[1686]: time="2025-11-05T15:45:47.532271298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 5 15:45:47.534033 containerd[1686]: time="2025-11-05T15:45:47.532282669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 5 15:45:47.534033 containerd[1686]: time="2025-11-05T15:45:47.532289755Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 5 15:45:47.534033 containerd[1686]: time="2025-11-05T15:45:47.532295219Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 5 15:45:47.534033 containerd[1686]: time="2025-11-05T15:45:47.532300922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 5 15:45:47.534033 containerd[1686]: time="2025-11-05T15:45:47.532306709Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 5 15:45:47.534033 containerd[1686]: time="2025-11-05T15:45:47.532312087Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 5 15:45:47.534033 containerd[1686]: time="2025-11-05T15:45:47.532353430Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 5 15:45:47.534033 containerd[1686]: time="2025-11-05T15:45:47.532365337Z" level=info msg="Start snapshots syncer"
Nov 5 15:45:47.534033 containerd[1686]: time="2025-11-05T15:45:47.532386618Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 5 15:45:47.534191 containerd[1686]: time="2025-11-05T15:45:47.532600962Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 5 15:45:47.534191 containerd[1686]: time="2025-11-05T15:45:47.532631058Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 5 15:45:47.534267 containerd[1686]: time="2025-11-05T15:45:47.532680737Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 5 15:45:47.534267 containerd[1686]: time="2025-11-05T15:45:47.532763310Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 5 15:45:47.534267 containerd[1686]: time="2025-11-05T15:45:47.532780510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 5 15:45:47.534267 containerd[1686]: time="2025-11-05T15:45:47.532787439Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 5 15:45:47.534267 containerd[1686]: time="2025-11-05T15:45:47.532794391Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 5 15:45:47.534267 containerd[1686]: time="2025-11-05T15:45:47.532801270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 5 15:45:47.534267 containerd[1686]: time="2025-11-05T15:45:47.532807165Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 5 15:45:47.534267 containerd[1686]: time="2025-11-05T15:45:47.532812793Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 5 15:45:47.534267 containerd[1686]: time="2025-11-05T15:45:47.532825891Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 5 15:45:47.534267 containerd[1686]: time="2025-11-05T15:45:47.532833977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 5 15:45:47.534267 containerd[1686]: time="2025-11-05T15:45:47.532841151Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 5 15:45:47.534267 containerd[1686]: time="2025-11-05T15:45:47.532869967Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 15:45:47.534267 containerd[1686]: time="2025-11-05T15:45:47.532880108Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 15:45:47.534267 containerd[1686]: time="2025-11-05T15:45:47.532885215Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 15:45:47.534445 containerd[1686]: time="2025-11-05T15:45:47.532890298Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 15:45:47.534445 containerd[1686]: time="2025-11-05T15:45:47.532894406Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 5 15:45:47.534445 containerd[1686]: time="2025-11-05T15:45:47.532900165Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 5 15:45:47.534445 containerd[1686]: time="2025-11-05T15:45:47.532907889Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 5 15:45:47.534445 containerd[1686]: time="2025-11-05T15:45:47.532940113Z" level=info msg="runtime interface created"
Nov 5 15:45:47.534445 containerd[1686]: time="2025-11-05T15:45:47.532944643Z" level=info msg="created NRI interface"
Nov 5 15:45:47.534445 containerd[1686]: time="2025-11-05T15:45:47.532949621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 5 15:45:47.534445 containerd[1686]: time="2025-11-05T15:45:47.532956177Z" level=info msg="Connect containerd service"
Nov 5 15:45:47.534445 containerd[1686]: time="2025-11-05T15:45:47.532971942Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 5 15:45:47.534445 containerd[1686]:
time="2025-11-05T15:45:47.533426668Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:45:47.543387 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 15:45:47.544970 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 15:45:47.551675 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 15:45:47.551897 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 15:45:47.643509 containerd[1686]: time="2025-11-05T15:45:47.643485529Z" level=info msg="Start subscribing containerd event" Nov 5 15:45:47.643621 containerd[1686]: time="2025-11-05T15:45:47.643605207Z" level=info msg="Start recovering state" Nov 5 15:45:47.643710 containerd[1686]: time="2025-11-05T15:45:47.643703263Z" level=info msg="Start event monitor" Nov 5 15:45:47.643750 containerd[1686]: time="2025-11-05T15:45:47.643744945Z" level=info msg="Start cni network conf syncer for default" Nov 5 15:45:47.643787 containerd[1686]: time="2025-11-05T15:45:47.643780870Z" level=info msg="Start streaming server" Nov 5 15:45:47.643820 containerd[1686]: time="2025-11-05T15:45:47.643810474Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 15:45:47.643867 containerd[1686]: time="2025-11-05T15:45:47.643861848Z" level=info msg="runtime interface starting up..." Nov 5 15:45:47.643918 containerd[1686]: time="2025-11-05T15:45:47.643908505Z" level=info msg="starting plugins..." Nov 5 15:45:47.643951 containerd[1686]: time="2025-11-05T15:45:47.643946076Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 15:45:47.644116 containerd[1686]: time="2025-11-05T15:45:47.644095629Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Nov 5 15:45:47.644138 containerd[1686]: time="2025-11-05T15:45:47.644133234Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 15:45:47.644385 containerd[1686]: time="2025-11-05T15:45:47.644351245Z" level=info msg="containerd successfully booted in 0.167967s" Nov 5 15:45:47.644443 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 15:45:47.659815 tar[1670]: linux-amd64/README.md Nov 5 15:45:47.673509 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 15:45:48.525693 systemd-networkd[1579]: ens192: Gained IPv6LL Nov 5 15:45:48.526070 systemd-timesyncd[1546]: Network configuration changed, trying to establish connection. Nov 5 15:45:48.527091 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 15:45:48.527865 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 15:45:48.529150 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Nov 5 15:45:48.532845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:45:48.537150 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 15:45:48.559623 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 15:45:48.572819 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 5 15:45:48.572969 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Nov 5 15:45:48.573253 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 15:45:49.451541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:45:49.451868 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 15:45:49.452415 systemd[1]: Startup finished in 2.310s (kernel) + 5.920s (initrd) + 4.214s (userspace) = 12.445s. 
Nov 5 15:45:49.458776 (kubelet)[1844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:45:49.523074 login[1798]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 5 15:45:49.527460 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 5 15:45:49.529614 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 5 15:45:49.535230 systemd-logind[1651]: New session 1 of user core.
Nov 5 15:45:49.548950 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 5 15:45:49.551193 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 5 15:45:49.563844 (systemd)[1849]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 5 15:45:49.565355 systemd-logind[1651]: New session c1 of user core.
Nov 5 15:45:49.650452 systemd[1849]: Queued start job for default target default.target.
Nov 5 15:45:49.657747 systemd[1849]: Created slice app.slice - User Application Slice.
Nov 5 15:45:49.657856 systemd[1849]: Reached target paths.target - Paths.
Nov 5 15:45:49.657885 systemd[1849]: Reached target timers.target - Timers.
Nov 5 15:45:49.660524 systemd[1849]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 5 15:45:49.666635 systemd[1849]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 5 15:45:49.666815 systemd[1849]: Reached target sockets.target - Sockets.
Nov 5 15:45:49.666852 systemd[1849]: Reached target basic.target - Basic System.
Nov 5 15:45:49.666877 systemd[1849]: Reached target default.target - Main User Target.
Nov 5 15:45:49.666893 systemd[1849]: Startup finished in 97ms.
Nov 5 15:45:49.667099 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 5 15:45:49.668191 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 5 15:45:49.809820 login[1799]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 5 15:45:49.813439 systemd-logind[1651]: New session 2 of user core.
Nov 5 15:45:49.820556 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 5 15:45:50.047520 kubelet[1844]: E1105 15:45:50.047483 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:45:50.049003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:45:50.049146 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:45:50.049497 systemd[1]: kubelet.service: Consumed 619ms CPU time, 263.8M memory peak.
Nov 5 15:45:50.051611 systemd-timesyncd[1546]: Network configuration changed, trying to establish connection.
Nov 5 15:46:00.203525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 5 15:46:00.205460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:46:00.660872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:46:00.663578 (kubelet)[1894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:46:00.787713 kubelet[1894]: E1105 15:46:00.787681 1894 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:46:00.789999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:46:00.790082 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:46:00.790390 systemd[1]: kubelet.service: Consumed 127ms CPU time, 109.7M memory peak.
Nov 5 15:46:10.953335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 5 15:46:10.954414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:46:11.237109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:46:11.240185 (kubelet)[1909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:46:11.268484 kubelet[1909]: E1105 15:46:11.268429 1909 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:46:11.270008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:46:11.270149 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:46:11.270555 systemd[1]: kubelet.service: Consumed 101ms CPU time, 110.3M memory peak.
Nov 5 15:46:17.349343 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 5 15:46:17.350459 systemd[1]: Started sshd@0-139.178.70.100:22-139.178.89.65:39098.service - OpenSSH per-connection server daemon (139.178.89.65:39098).
Nov 5 15:46:17.425730 sshd[1916]: Accepted publickey for core from 139.178.89.65 port 39098 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:46:17.426529 sshd-session[1916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:46:17.429562 systemd-logind[1651]: New session 3 of user core.
Nov 5 15:46:17.437849 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 5 15:46:17.493635 systemd[1]: Started sshd@1-139.178.70.100:22-139.178.89.65:39108.service - OpenSSH per-connection server daemon (139.178.89.65:39108).
Nov 5 15:46:17.528978 sshd[1922]: Accepted publickey for core from 139.178.89.65 port 39108 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:46:17.529813 sshd-session[1922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:46:17.533127 systemd-logind[1651]: New session 4 of user core.
Nov 5 15:46:17.541592 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 5 15:46:17.589591 sshd[1925]: Connection closed by 139.178.89.65 port 39108
Nov 5 15:46:17.589889 sshd-session[1922]: pam_unix(sshd:session): session closed for user core
Nov 5 15:46:17.597607 systemd[1]: sshd@1-139.178.70.100:22-139.178.89.65:39108.service: Deactivated successfully.
Nov 5 15:46:17.598377 systemd[1]: session-4.scope: Deactivated successfully.
Nov 5 15:46:17.599048 systemd-logind[1651]: Session 4 logged out. Waiting for processes to exit.
Nov 5 15:46:17.599942 systemd[1]: Started sshd@2-139.178.70.100:22-139.178.89.65:39116.service - OpenSSH per-connection server daemon (139.178.89.65:39116).
Nov 5 15:46:17.601916 systemd-logind[1651]: Removed session 4.
Nov 5 15:46:17.640868 sshd[1931]: Accepted publickey for core from 139.178.89.65 port 39116 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:46:17.641715 sshd-session[1931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:46:17.644757 systemd-logind[1651]: New session 5 of user core.
Nov 5 15:46:17.654593 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 5 15:46:17.700899 sshd[1934]: Connection closed by 139.178.89.65 port 39116
Nov 5 15:46:17.701790 sshd-session[1931]: pam_unix(sshd:session): session closed for user core
Nov 5 15:46:17.711314 systemd[1]: sshd@2-139.178.70.100:22-139.178.89.65:39116.service: Deactivated successfully.
Nov 5 15:46:17.712461 systemd[1]: session-5.scope: Deactivated successfully.
Nov 5 15:46:17.713128 systemd-logind[1651]: Session 5 logged out. Waiting for processes to exit.
Nov 5 15:46:17.715068 systemd[1]: Started sshd@3-139.178.70.100:22-139.178.89.65:39126.service - OpenSSH per-connection server daemon (139.178.89.65:39126).
Nov 5 15:46:17.715826 systemd-logind[1651]: Removed session 5.
Nov 5 15:46:17.753864 sshd[1940]: Accepted publickey for core from 139.178.89.65 port 39126 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:46:17.754425 sshd-session[1940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:46:17.757993 systemd-logind[1651]: New session 6 of user core.
Nov 5 15:46:17.764583 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 5 15:46:17.812502 sshd[1943]: Connection closed by 139.178.89.65 port 39126
Nov 5 15:46:17.812762 sshd-session[1940]: pam_unix(sshd:session): session closed for user core
Nov 5 15:46:17.826015 systemd[1]: sshd@3-139.178.70.100:22-139.178.89.65:39126.service: Deactivated successfully.
Nov 5 15:46:17.827081 systemd[1]: session-6.scope: Deactivated successfully.
Nov 5 15:46:17.827727 systemd-logind[1651]: Session 6 logged out. Waiting for processes to exit.
Nov 5 15:46:17.829218 systemd[1]: Started sshd@4-139.178.70.100:22-139.178.89.65:39140.service - OpenSSH per-connection server daemon (139.178.89.65:39140).
Nov 5 15:46:17.830091 systemd-logind[1651]: Removed session 6.
Nov 5 15:46:17.872481 sshd[1949]: Accepted publickey for core from 139.178.89.65 port 39140 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:46:17.873027 sshd-session[1949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:46:17.876381 systemd-logind[1651]: New session 7 of user core.
Nov 5 15:46:17.883568 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 5 15:46:17.973946 sudo[1953]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 5 15:46:17.974175 sudo[1953]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:46:17.988255 sudo[1953]: pam_unix(sudo:session): session closed for user root
Nov 5 15:46:17.989401 sshd[1952]: Connection closed by 139.178.89.65 port 39140
Nov 5 15:46:17.990399 sshd-session[1949]: pam_unix(sshd:session): session closed for user core
Nov 5 15:46:17.995639 systemd[1]: sshd@4-139.178.70.100:22-139.178.89.65:39140.service: Deactivated successfully.
Nov 5 15:46:17.997166 systemd[1]: session-7.scope: Deactivated successfully.
Nov 5 15:46:17.997913 systemd-logind[1651]: Session 7 logged out. Waiting for processes to exit.
Nov 5 15:46:18.000195 systemd[1]: Started sshd@5-139.178.70.100:22-139.178.89.65:39156.service - OpenSSH per-connection server daemon (139.178.89.65:39156).
Nov 5 15:46:18.001065 systemd-logind[1651]: Removed session 7.
Nov 5 15:46:18.036899 sshd[1959]: Accepted publickey for core from 139.178.89.65 port 39156 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:46:18.037716 sshd-session[1959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:46:18.040636 systemd-logind[1651]: New session 8 of user core.
Nov 5 15:46:18.050808 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 5 15:46:18.101265 sudo[1964]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 5 15:46:18.101461 sudo[1964]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:46:18.104591 sudo[1964]: pam_unix(sudo:session): session closed for user root
Nov 5 15:46:18.109536 sudo[1963]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 5 15:46:18.109742 sudo[1963]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:46:18.119369 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 15:46:18.150103 augenrules[1986]: No rules
Nov 5 15:46:18.150782 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 15:46:18.151016 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 15:46:18.151903 sudo[1963]: pam_unix(sudo:session): session closed for user root
Nov 5 15:46:18.154016 sshd[1962]: Connection closed by 139.178.89.65 port 39156
Nov 5 15:46:18.153113 sshd-session[1959]: pam_unix(sshd:session): session closed for user core
Nov 5 15:46:18.158712 systemd[1]: sshd@5-139.178.70.100:22-139.178.89.65:39156.service: Deactivated successfully.
Nov 5 15:46:18.159691 systemd[1]: session-8.scope: Deactivated successfully.
Nov 5 15:46:18.160187 systemd-logind[1651]: Session 8 logged out. Waiting for processes to exit.
Nov 5 15:46:18.161645 systemd[1]: Started sshd@6-139.178.70.100:22-139.178.89.65:39170.service - OpenSSH per-connection server daemon (139.178.89.65:39170).
Nov 5 15:46:18.162915 systemd-logind[1651]: Removed session 8.
Nov 5 15:46:18.202097 sshd[1995]: Accepted publickey for core from 139.178.89.65 port 39170 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:46:18.202664 sshd-session[1995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:46:18.205024 systemd-logind[1651]: New session 9 of user core.
Nov 5 15:46:18.214646 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 5 15:46:18.262956 sudo[1999]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 5 15:46:18.263354 sudo[1999]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:46:18.718185 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 5 15:46:18.728731 (dockerd)[2017]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 5 15:46:18.965222 dockerd[2017]: time="2025-11-05T15:46:18.965188012Z" level=info msg="Starting up"
Nov 5 15:46:18.967616 dockerd[2017]: time="2025-11-05T15:46:18.967596322Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 5 15:46:18.974074 dockerd[2017]: time="2025-11-05T15:46:18.973860185Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 5 15:46:19.045023 dockerd[2017]: time="2025-11-05T15:46:19.044997009Z" level=info msg="Loading containers: start."
Nov 5 15:46:19.053486 kernel: Initializing XFRM netlink socket
Nov 5 15:46:19.219820 systemd-timesyncd[1546]: Network configuration changed, trying to establish connection.
Nov 5 15:46:19.247523 systemd-networkd[1579]: docker0: Link UP
Nov 5 15:46:19.249141 dockerd[2017]: time="2025-11-05T15:46:19.249125500Z" level=info msg="Loading containers: done."
Nov 5 15:46:19.257430 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3580237038-merged.mount: Deactivated successfully.
Nov 5 15:47:57.981422 systemd-resolved[1339]: Clock change detected. Flushing caches.
Nov 5 15:47:57.981802 systemd-timesyncd[1546]: Contacted time server 69.48.203.162:123 (2.flatcar.pool.ntp.org).
Nov 5 15:47:57.981962 systemd-timesyncd[1546]: Initial clock synchronization to Wed 2025-11-05 15:47:57.981381 UTC.
Nov 5 15:47:57.984286 dockerd[2017]: time="2025-11-05T15:47:57.984198373Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 5 15:47:57.984286 dockerd[2017]: time="2025-11-05T15:47:57.984262603Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 5 15:47:57.984397 dockerd[2017]: time="2025-11-05T15:47:57.984362993Z" level=info msg="Initializing buildkit"
Nov 5 15:47:58.020186 dockerd[2017]: time="2025-11-05T15:47:58.020155510Z" level=info msg="Completed buildkit initialization"
Nov 5 15:47:58.027306 dockerd[2017]: time="2025-11-05T15:47:58.027282694Z" level=info msg="Daemon has completed initialization"
Nov 5 15:47:58.027492 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 5 15:47:58.027601 dockerd[2017]: time="2025-11-05T15:47:58.027393155Z" level=info msg="API listen on /run/docker.sock"
Nov 5 15:47:58.847239 containerd[1686]: time="2025-11-05T15:47:58.847198241Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 5 15:47:59.322706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657189926.mount: Deactivated successfully.
Nov 5 15:48:00.165592 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 5 15:48:00.167493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:48:00.425933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:48:00.428811 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:48:00.451278 kubelet[2289]: E1105 15:48:00.451247 2289 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:48:00.452611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:48:00.452747 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:48:00.453114 systemd[1]: kubelet.service: Consumed 97ms CPU time, 110.2M memory peak.
Nov 5 15:48:00.952406 containerd[1686]: time="2025-11-05T15:48:00.952377600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:48:00.953464 containerd[1686]: time="2025-11-05T15:48:00.953430260Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Nov 5 15:48:00.956237 containerd[1686]: time="2025-11-05T15:48:00.956225195Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:48:00.958335 containerd[1686]: time="2025-11-05T15:48:00.958322342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:48:00.958762 containerd[1686]: time="2025-11-05T15:48:00.958644415Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.111417709s"
Nov 5 15:48:00.958762 containerd[1686]: time="2025-11-05T15:48:00.958665137Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Nov 5 15:48:00.959326 containerd[1686]: time="2025-11-05T15:48:00.959313130Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 5 15:48:02.329562 containerd[1686]: time="2025-11-05T15:48:02.329529259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:48:02.337053 containerd[1686]: time="2025-11-05T15:48:02.336966700Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Nov 5 15:48:02.340894 containerd[1686]: time="2025-11-05T15:48:02.340862566Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:48:02.345931 containerd[1686]: time="2025-11-05T15:48:02.345918229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:48:02.346658 containerd[1686]: time="2025-11-05T15:48:02.346420275Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.387092127s"
Nov 5 15:48:02.346658 containerd[1686]: time="2025-11-05T15:48:02.346437554Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 5 15:48:02.346787 containerd[1686]: time="2025-11-05T15:48:02.346776182Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 5 15:48:03.411402 containerd[1686]: time="2025-11-05T15:48:03.411372909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:48:03.412120 containerd[1686]: time="2025-11-05T15:48:03.412106540Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Nov 5 15:48:03.412367 containerd[1686]: time="2025-11-05T15:48:03.412354001Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:48:03.414117 containerd[1686]: time="2025-11-05T15:48:03.414079813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:48:03.414620 containerd[1686]: time="2025-11-05T15:48:03.414604427Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.067552147s"
Nov 5 15:48:03.414648 containerd[1686]: time="2025-11-05T15:48:03.414622063Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Nov 5 15:48:03.415055 containerd[1686]: time="2025-11-05T15:48:03.415040296Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 5 15:48:04.226616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount688449230.mount: Deactivated successfully.
Nov 5 15:48:04.636625 containerd[1686]: time="2025-11-05T15:48:04.636551585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:48:04.641601 containerd[1686]: time="2025-11-05T15:48:04.641575471Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Nov 5 15:48:04.647042 containerd[1686]: time="2025-11-05T15:48:04.646989275Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:48:04.657844 containerd[1686]: time="2025-11-05T15:48:04.657794117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:48:04.658178 containerd[1686]: time="2025-11-05T15:48:04.658061560Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.243004669s"
Nov 5 15:48:04.658178 containerd[1686]: time="2025-11-05T15:48:04.658084393Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Nov 5 15:48:04.658416 containerd[1686]: time="2025-11-05T15:48:04.658399046Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 5 15:48:05.260291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1712142489.mount: Deactivated successfully.
Nov 5 15:48:05.974826 containerd[1686]: time="2025-11-05T15:48:05.974794321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:05.980596 containerd[1686]: time="2025-11-05T15:48:05.980578995Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 5 15:48:05.990143 containerd[1686]: time="2025-11-05T15:48:05.990122511Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:05.997967 containerd[1686]: time="2025-11-05T15:48:05.997949667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:05.998566 containerd[1686]: time="2025-11-05T15:48:05.998288287Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.339871462s" Nov 5 15:48:05.998566 containerd[1686]: time="2025-11-05T15:48:05.998305555Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 5 15:48:05.998698 containerd[1686]: time="2025-11-05T15:48:05.998684829Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 5 15:48:06.948717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3436577909.mount: Deactivated successfully. 
Nov 5 15:48:06.972559 containerd[1686]: time="2025-11-05T15:48:06.972522717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:48:06.977168 containerd[1686]: time="2025-11-05T15:48:06.977146890Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 5 15:48:06.979572 containerd[1686]: time="2025-11-05T15:48:06.979550159Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:48:06.985050 containerd[1686]: time="2025-11-05T15:48:06.985021529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:48:06.985372 containerd[1686]: time="2025-11-05T15:48:06.985274021Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 986.572901ms" Nov 5 15:48:06.985372 containerd[1686]: time="2025-11-05T15:48:06.985290535Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 5 15:48:06.985573 containerd[1686]: time="2025-11-05T15:48:06.985558403Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 5 15:48:07.515139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4238936507.mount: Deactivated 
successfully. Nov 5 15:48:10.665179 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 5 15:48:10.666635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:48:11.534390 containerd[1686]: time="2025-11-05T15:48:11.534348700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:11.572003 containerd[1686]: time="2025-11-05T15:48:11.571949219Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 5 15:48:11.620910 containerd[1686]: time="2025-11-05T15:48:11.620847264Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:11.681348 containerd[1686]: time="2025-11-05T15:48:11.681293183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:11.682545 containerd[1686]: time="2025-11-05T15:48:11.682427462Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.696653624s" Nov 5 15:48:11.682545 containerd[1686]: time="2025-11-05T15:48:11.682452978Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 5 15:48:11.686258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 5 15:48:11.695269 (kubelet)[2437]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:48:11.723033 update_engine[1652]: I20251105 15:48:11.722687 1652 update_attempter.cc:509] Updating boot flags... Nov 5 15:48:11.823140 kubelet[2437]: E1105 15:48:11.821569 2437 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:48:11.823341 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:48:11.823425 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:48:11.823856 systemd[1]: kubelet.service: Consumed 111ms CPU time, 108M memory peak. Nov 5 15:48:13.797056 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:48:13.797220 systemd[1]: kubelet.service: Consumed 111ms CPU time, 108M memory peak. Nov 5 15:48:13.798710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:48:13.815170 systemd[1]: Reload requested from client PID 2488 ('systemctl') (unit session-9.scope)... Nov 5 15:48:13.815179 systemd[1]: Reloading... Nov 5 15:48:13.898119 zram_generator::config[2548]: No configuration found. Nov 5 15:48:13.957532 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 5 15:48:14.025856 systemd[1]: Reloading finished in 210 ms. Nov 5 15:48:14.071785 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 15:48:14.071869 systemd[1]: kubelet.service: Failed with result 'signal'. 
Nov 5 15:48:14.072207 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:48:14.073825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:48:14.588907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:48:14.596292 (kubelet)[2600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:48:14.631067 kubelet[2600]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:48:14.631067 kubelet[2600]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:48:14.631067 kubelet[2600]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 5 15:48:14.632894 kubelet[2600]: I1105 15:48:14.632392 2600 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:48:14.881373 kubelet[2600]: I1105 15:48:14.881317 2600 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 5 15:48:14.881373 kubelet[2600]: I1105 15:48:14.881335 2600 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:48:14.881664 kubelet[2600]: I1105 15:48:14.881488 2600 server.go:954] "Client rotation is on, will bootstrap in background" Nov 5 15:48:14.907255 kubelet[2600]: E1105 15:48:14.907227 2600 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:48:14.908394 kubelet[2600]: I1105 15:48:14.908331 2600 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:48:14.921399 kubelet[2600]: I1105 15:48:14.921390 2600 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:48:14.925175 kubelet[2600]: I1105 15:48:14.925164 2600 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 15:48:14.927385 kubelet[2600]: I1105 15:48:14.927363 2600 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:48:14.927482 kubelet[2600]: I1105 15:48:14.927385 2600 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:48:14.928828 kubelet[2600]: I1105 15:48:14.928816 2600 topology_manager.go:138] "Creating topology manager with none policy" Nov 
5 15:48:14.928828 kubelet[2600]: I1105 15:48:14.928828 2600 container_manager_linux.go:304] "Creating device plugin manager" Nov 5 15:48:14.931013 kubelet[2600]: I1105 15:48:14.931000 2600 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:48:14.933762 kubelet[2600]: I1105 15:48:14.933753 2600 kubelet.go:446] "Attempting to sync node with API server" Nov 5 15:48:14.935059 kubelet[2600]: I1105 15:48:14.934773 2600 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:48:14.935647 kubelet[2600]: I1105 15:48:14.935633 2600 kubelet.go:352] "Adding apiserver pod source" Nov 5 15:48:14.935647 kubelet[2600]: I1105 15:48:14.935645 2600 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:48:14.939338 kubelet[2600]: W1105 15:48:14.939074 2600 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Nov 5 15:48:14.939338 kubelet[2600]: E1105 15:48:14.939123 2600 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:48:14.939338 kubelet[2600]: W1105 15:48:14.939297 2600 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Nov 5 15:48:14.939338 kubelet[2600]: E1105 15:48:14.939316 2600 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list 
*v1.Service: Get \"https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:48:14.942277 kubelet[2600]: I1105 15:48:14.942266 2600 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:48:14.944575 kubelet[2600]: I1105 15:48:14.944565 2600 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 5 15:48:14.946876 kubelet[2600]: W1105 15:48:14.946869 2600 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 5 15:48:14.947212 kubelet[2600]: I1105 15:48:14.947205 2600 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 15:48:14.947384 kubelet[2600]: I1105 15:48:14.947377 2600 server.go:1287] "Started kubelet" Nov 5 15:48:14.949340 kubelet[2600]: I1105 15:48:14.949332 2600 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:48:14.954266 kubelet[2600]: I1105 15:48:14.954253 2600 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:48:14.955726 kubelet[2600]: I1105 15:48:14.955717 2600 server.go:479] "Adding debug handlers to kubelet server" Nov 5 15:48:14.956389 kubelet[2600]: I1105 15:48:14.956364 2600 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:48:14.956551 kubelet[2600]: I1105 15:48:14.956544 2600 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:48:14.957555 kubelet[2600]: I1105 15:48:14.957546 2600 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:48:14.958391 kubelet[2600]: E1105 15:48:14.954825 2600 event.go:368] "Unable to 
write event (may retry after sleeping)" err="Post \"https://139.178.70.100:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187526fc76839a9b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 15:48:14.947367579 +0000 UTC m=+0.347326829,LastTimestamp:2025-11-05 15:48:14.947367579 +0000 UTC m=+0.347326829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 5 15:48:14.959085 kubelet[2600]: I1105 15:48:14.959077 2600 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 15:48:14.959224 kubelet[2600]: E1105 15:48:14.959215 2600 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 15:48:14.959706 kubelet[2600]: E1105 15:48:14.959690 2600 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="200ms" Nov 5 15:48:14.959734 kubelet[2600]: I1105 15:48:14.959728 2600 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 15:48:14.959987 kubelet[2600]: W1105 15:48:14.959964 2600 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Nov 5 15:48:14.960014 kubelet[2600]: E1105 15:48:14.959988 2600 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:48:14.960032 kubelet[2600]: I1105 15:48:14.960015 2600 reconciler.go:26] "Reconciler: start to sync state" Nov 5 15:48:14.962935 kubelet[2600]: I1105 15:48:14.962928 2600 factory.go:221] Registration of the systemd container factory successfully Nov 5 15:48:14.963012 kubelet[2600]: I1105 15:48:14.963003 2600 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:48:14.966115 kubelet[2600]: E1105 15:48:14.965530 2600 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:48:14.966115 kubelet[2600]: I1105 15:48:14.965594 2600 factory.go:221] Registration of the containerd container factory successfully Nov 5 15:48:14.976358 kubelet[2600]: I1105 15:48:14.976295 2600 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 5 15:48:14.978678 kubelet[2600]: I1105 15:48:14.978670 2600 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 5 15:48:14.978859 kubelet[2600]: I1105 15:48:14.978717 2600 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 5 15:48:14.978859 kubelet[2600]: I1105 15:48:14.978730 2600 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 5 15:48:14.978859 kubelet[2600]: I1105 15:48:14.978734 2600 kubelet.go:2382] "Starting kubelet main sync loop" Nov 5 15:48:14.978859 kubelet[2600]: E1105 15:48:14.978755 2600 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:48:14.979861 kubelet[2600]: I1105 15:48:14.979847 2600 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:48:14.979861 kubelet[2600]: I1105 15:48:14.979857 2600 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:48:14.979906 kubelet[2600]: I1105 15:48:14.979865 2600 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:48:14.981130 kubelet[2600]: I1105 15:48:14.980836 2600 policy_none.go:49] "None policy: Start" Nov 5 15:48:14.981130 kubelet[2600]: I1105 15:48:14.980845 2600 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 15:48:14.981130 kubelet[2600]: I1105 15:48:14.980851 2600 state_mem.go:35] "Initializing new in-memory state store" Nov 5 15:48:14.982336 kubelet[2600]: W1105 15:48:14.982296 2600 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Nov 5 15:48:14.982336 kubelet[2600]: E1105 15:48:14.982314 2600 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:48:14.986083 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 15:48:14.998637 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 5 15:48:15.001522 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 15:48:15.025792 kubelet[2600]: I1105 15:48:15.025778 2600 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 5 15:48:15.026267 kubelet[2600]: I1105 15:48:15.026239 2600 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:48:15.026339 kubelet[2600]: I1105 15:48:15.026251 2600 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:48:15.028961 kubelet[2600]: E1105 15:48:15.028888 2600 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 15:48:15.028961 kubelet[2600]: E1105 15:48:15.028921 2600 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 5 15:48:15.029320 kubelet[2600]: I1105 15:48:15.029312 2600 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:48:15.097704 systemd[1]: Created slice kubepods-burstable-pod7db97b10fdb36e474f18a7fefec08e1c.slice - libcontainer container kubepods-burstable-pod7db97b10fdb36e474f18a7fefec08e1c.slice. Nov 5 15:48:15.111996 kubelet[2600]: E1105 15:48:15.111976 2600 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:48:15.114179 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. 
Nov 5 15:48:15.119044 kubelet[2600]: E1105 15:48:15.118995 2600 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:48:15.121486 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. Nov 5 15:48:15.122791 kubelet[2600]: E1105 15:48:15.122687 2600 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:48:15.127865 kubelet[2600]: I1105 15:48:15.127856 2600 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:48:15.128199 kubelet[2600]: E1105 15:48:15.128166 2600 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Nov 5 15:48:15.160721 kubelet[2600]: E1105 15:48:15.160672 2600 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="400ms" Nov 5 15:48:15.242540 systemd[1]: Started sshd@7-139.178.70.100:22-87.236.176.219:45039.service - OpenSSH per-connection server daemon (87.236.176.219:45039). 
Nov 5 15:48:15.261344 kubelet[2600]: I1105 15:48:15.261325 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7db97b10fdb36e474f18a7fefec08e1c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7db97b10fdb36e474f18a7fefec08e1c\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:48:15.261450 kubelet[2600]: I1105 15:48:15.261440 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7db97b10fdb36e474f18a7fefec08e1c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7db97b10fdb36e474f18a7fefec08e1c\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:48:15.261516 kubelet[2600]: I1105 15:48:15.261506 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7db97b10fdb36e474f18a7fefec08e1c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7db97b10fdb36e474f18a7fefec08e1c\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:48:15.261651 kubelet[2600]: I1105 15:48:15.261564 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:48:15.261651 kubelet[2600]: I1105 15:48:15.261580 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 5 15:48:15.261651 kubelet[2600]: I1105 15:48:15.261592 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:48:15.261651 kubelet[2600]: I1105 15:48:15.261604 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:48:15.261651 kubelet[2600]: I1105 15:48:15.261619 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:48:15.261766 kubelet[2600]: I1105 15:48:15.261630 2600 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 5 15:48:15.329311 kubelet[2600]: I1105 15:48:15.329290 2600 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:48:15.329534 kubelet[2600]: E1105 15:48:15.329512 2600 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: 
connection refused" node="localhost" Nov 5 15:48:15.414295 containerd[1686]: time="2025-11-05T15:48:15.414234587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7db97b10fdb36e474f18a7fefec08e1c,Namespace:kube-system,Attempt:0,}" Nov 5 15:48:15.420008 containerd[1686]: time="2025-11-05T15:48:15.419991021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 5 15:48:15.423621 containerd[1686]: time="2025-11-05T15:48:15.423516600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 5 15:48:15.562236 kubelet[2600]: E1105 15:48:15.561514 2600 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="800ms" Nov 5 15:48:15.613106 containerd[1686]: time="2025-11-05T15:48:15.613042317Z" level=info msg="connecting to shim 6f0df1972158ab1b664e11370e2749142ec803f982ab5ff94469c331007c6d5b" address="unix:///run/containerd/s/8c5701e3f38d4776f29669ff72dad9bb0084e4f61326394bc7781526f397c303" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:15.616649 containerd[1686]: time="2025-11-05T15:48:15.615758940Z" level=info msg="connecting to shim 2981475f4d80e95f55676ae4220251c20adfa909036f97edd859df5842fc10d6" address="unix:///run/containerd/s/4d32c36ad9165a2892f14407025f76bfdf5172c116860e74c0ce8876ce1c1398" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:15.622042 containerd[1686]: time="2025-11-05T15:48:15.622018134Z" level=info msg="connecting to shim 3084b41e6b4cc4520642a23e30260faefaa22768266996a1f4e6e3d17449d908" address="unix:///run/containerd/s/4e7dfc51dc153ab617beec9553db5d015b64774227483891a58e19ab0ee7a6e8" 
namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:15.731074 kubelet[2600]: I1105 15:48:15.731040 2600 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:48:15.731382 kubelet[2600]: E1105 15:48:15.731256 2600 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Nov 5 15:48:15.734281 systemd[1]: Started cri-containerd-6f0df1972158ab1b664e11370e2749142ec803f982ab5ff94469c331007c6d5b.scope - libcontainer container 6f0df1972158ab1b664e11370e2749142ec803f982ab5ff94469c331007c6d5b. Nov 5 15:48:15.740080 systemd[1]: Started cri-containerd-2981475f4d80e95f55676ae4220251c20adfa909036f97edd859df5842fc10d6.scope - libcontainer container 2981475f4d80e95f55676ae4220251c20adfa909036f97edd859df5842fc10d6. Nov 5 15:48:15.741794 systemd[1]: Started cri-containerd-3084b41e6b4cc4520642a23e30260faefaa22768266996a1f4e6e3d17449d908.scope - libcontainer container 3084b41e6b4cc4520642a23e30260faefaa22768266996a1f4e6e3d17449d908. 
Nov 5 15:48:15.830743 containerd[1686]: time="2025-11-05T15:48:15.830719306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7db97b10fdb36e474f18a7fefec08e1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2981475f4d80e95f55676ae4220251c20adfa909036f97edd859df5842fc10d6\""
Nov 5 15:48:15.832274 containerd[1686]: time="2025-11-05T15:48:15.832260232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"3084b41e6b4cc4520642a23e30260faefaa22768266996a1f4e6e3d17449d908\""
Nov 5 15:48:15.833013 containerd[1686]: time="2025-11-05T15:48:15.832746079Z" level=info msg="CreateContainer within sandbox \"2981475f4d80e95f55676ae4220251c20adfa909036f97edd859df5842fc10d6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 5 15:48:15.844280 containerd[1686]: time="2025-11-05T15:48:15.844255909Z" level=info msg="CreateContainer within sandbox \"3084b41e6b4cc4520642a23e30260faefaa22768266996a1f4e6e3d17449d908\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 5 15:48:15.854894 containerd[1686]: time="2025-11-05T15:48:15.854877268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f0df1972158ab1b664e11370e2749142ec803f982ab5ff94469c331007c6d5b\""
Nov 5 15:48:15.859199 containerd[1686]: time="2025-11-05T15:48:15.859177609Z" level=info msg="CreateContainer within sandbox \"6f0df1972158ab1b664e11370e2749142ec803f982ab5ff94469c331007c6d5b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 5 15:48:15.876820 containerd[1686]: time="2025-11-05T15:48:15.876767985Z" level=info msg="Container 9c5a068a4ba9f09add5619f371c3a1197d426ffdde0b6fc908d26bd29e47c8e0: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:48:15.877798 containerd[1686]: time="2025-11-05T15:48:15.877766750Z" level=info msg="Container a575dd984672ea77286d303bcf65bdb4635cfc8ab45f1fb83440d154f182d5e6: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:48:15.880302 containerd[1686]: time="2025-11-05T15:48:15.880278661Z" level=info msg="Container 44cfa03c97f185259379f909b6d8075822adcf730fc1e850aafaaf49c386fb10: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:48:15.891132 containerd[1686]: time="2025-11-05T15:48:15.891112998Z" level=info msg="CreateContainer within sandbox \"3084b41e6b4cc4520642a23e30260faefaa22768266996a1f4e6e3d17449d908\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a575dd984672ea77286d303bcf65bdb4635cfc8ab45f1fb83440d154f182d5e6\""
Nov 5 15:48:15.891587 containerd[1686]: time="2025-11-05T15:48:15.891563692Z" level=info msg="CreateContainer within sandbox \"6f0df1972158ab1b664e11370e2749142ec803f982ab5ff94469c331007c6d5b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"44cfa03c97f185259379f909b6d8075822adcf730fc1e850aafaaf49c386fb10\""
Nov 5 15:48:15.891886 containerd[1686]: time="2025-11-05T15:48:15.891869079Z" level=info msg="StartContainer for \"a575dd984672ea77286d303bcf65bdb4635cfc8ab45f1fb83440d154f182d5e6\""
Nov 5 15:48:15.892548 containerd[1686]: time="2025-11-05T15:48:15.892529073Z" level=info msg="connecting to shim a575dd984672ea77286d303bcf65bdb4635cfc8ab45f1fb83440d154f182d5e6" address="unix:///run/containerd/s/4e7dfc51dc153ab617beec9553db5d015b64774227483891a58e19ab0ee7a6e8" protocol=ttrpc version=3
Nov 5 15:48:15.893294 containerd[1686]: time="2025-11-05T15:48:15.893276725Z" level=info msg="CreateContainer within sandbox \"2981475f4d80e95f55676ae4220251c20adfa909036f97edd859df5842fc10d6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9c5a068a4ba9f09add5619f371c3a1197d426ffdde0b6fc908d26bd29e47c8e0\""
Nov 5 15:48:15.893367 containerd[1686]: time="2025-11-05T15:48:15.893352108Z" level=info msg="StartContainer for \"44cfa03c97f185259379f909b6d8075822adcf730fc1e850aafaaf49c386fb10\""
Nov 5 15:48:15.894165 containerd[1686]: time="2025-11-05T15:48:15.894053080Z" level=info msg="connecting to shim 44cfa03c97f185259379f909b6d8075822adcf730fc1e850aafaaf49c386fb10" address="unix:///run/containerd/s/8c5701e3f38d4776f29669ff72dad9bb0084e4f61326394bc7781526f397c303" protocol=ttrpc version=3
Nov 5 15:48:15.894228 containerd[1686]: time="2025-11-05T15:48:15.894126059Z" level=info msg="StartContainer for \"9c5a068a4ba9f09add5619f371c3a1197d426ffdde0b6fc908d26bd29e47c8e0\""
Nov 5 15:48:15.894756 containerd[1686]: time="2025-11-05T15:48:15.894739575Z" level=info msg="connecting to shim 9c5a068a4ba9f09add5619f371c3a1197d426ffdde0b6fc908d26bd29e47c8e0" address="unix:///run/containerd/s/4d32c36ad9165a2892f14407025f76bfdf5172c116860e74c0ce8876ce1c1398" protocol=ttrpc version=3
Nov 5 15:48:15.907530 kubelet[2600]: W1105 15:48:15.907496 2600 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused
Nov 5 15:48:15.907602 kubelet[2600]: E1105 15:48:15.907535 2600 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError"
Nov 5 15:48:15.909171 systemd[1]: Started cri-containerd-a575dd984672ea77286d303bcf65bdb4635cfc8ab45f1fb83440d154f182d5e6.scope - libcontainer container a575dd984672ea77286d303bcf65bdb4635cfc8ab45f1fb83440d154f182d5e6.
Nov 5 15:48:15.912822 systemd[1]: Started cri-containerd-44cfa03c97f185259379f909b6d8075822adcf730fc1e850aafaaf49c386fb10.scope - libcontainer container 44cfa03c97f185259379f909b6d8075822adcf730fc1e850aafaaf49c386fb10.
Nov 5 15:48:15.914290 systemd[1]: Started cri-containerd-9c5a068a4ba9f09add5619f371c3a1197d426ffdde0b6fc908d26bd29e47c8e0.scope - libcontainer container 9c5a068a4ba9f09add5619f371c3a1197d426ffdde0b6fc908d26bd29e47c8e0.
Nov 5 15:48:15.919215 kubelet[2600]: W1105 15:48:15.919070 2600 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused
Nov 5 15:48:15.919215 kubelet[2600]: E1105 15:48:15.919205 2600 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError"
Nov 5 15:48:15.961853 containerd[1686]: time="2025-11-05T15:48:15.961675247Z" level=info msg="StartContainer for \"a575dd984672ea77286d303bcf65bdb4635cfc8ab45f1fb83440d154f182d5e6\" returns successfully"
Nov 5 15:48:15.963362 containerd[1686]: time="2025-11-05T15:48:15.963325469Z" level=info msg="StartContainer for \"44cfa03c97f185259379f909b6d8075822adcf730fc1e850aafaaf49c386fb10\" returns successfully"
Nov 5 15:48:15.979523 containerd[1686]: time="2025-11-05T15:48:15.979475970Z" level=info msg="StartContainer for \"9c5a068a4ba9f09add5619f371c3a1197d426ffdde0b6fc908d26bd29e47c8e0\" returns successfully"
Nov 5 15:48:15.985740 kubelet[2600]: E1105 15:48:15.985690 2600 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 15:48:15.987334 kubelet[2600]: E1105 15:48:15.987210 2600 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 15:48:15.990069 kubelet[2600]: E1105 15:48:15.990054 2600 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 15:48:16.262793 kubelet[2600]: W1105 15:48:16.262702 2600 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused
Nov 5 15:48:16.262793 kubelet[2600]: E1105 15:48:16.262742 2600 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError"
Nov 5 15:48:16.363188 kubelet[2600]: E1105 15:48:16.363161 2600 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="1.6s"
Nov 5 15:48:16.533191 kubelet[2600]: I1105 15:48:16.532778 2600 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 15:48:16.990099 kubelet[2600]: E1105 15:48:16.990052 2600 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 15:48:16.990415 kubelet[2600]: E1105 15:48:16.990315 2600 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 15:48:17.232170 sshd[2631]: Connection closed by 87.236.176.219 port 45039
Nov 5 15:48:17.232676 systemd[1]: sshd@7-139.178.70.100:22-87.236.176.219:45039.service: Deactivated successfully.
Nov 5 15:48:17.368830 kubelet[2600]: E1105 15:48:17.368706 2600 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.187526fc76839a9b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 15:48:14.947367579 +0000 UTC m=+0.347326829,LastTimestamp:2025-11-05 15:48:14.947367579 +0000 UTC m=+0.347326829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 5 15:48:17.461185 kubelet[2600]: E1105 15:48:17.461042 2600 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.187526fc7798a661 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 15:48:14.965524065 +0000 UTC m=+0.365483325,LastTimestamp:2025-11-05 15:48:14.965524065 +0000 UTC m=+0.365483325,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 5 15:48:17.500562 kubelet[2600]: E1105 15:48:17.500405 2600 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 15:48:17.513931 kubelet[2600]: I1105 15:48:17.513899 2600 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 5 15:48:17.513931 kubelet[2600]: E1105 15:48:17.513930 2600 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Nov 5 15:48:17.557479 kubelet[2600]: E1105 15:48:17.557358 2600 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 15:48:17.658348 kubelet[2600]: E1105 15:48:17.658271 2600 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 15:48:17.759293 kubelet[2600]: E1105 15:48:17.759267 2600 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 15:48:17.859777 kubelet[2600]: E1105 15:48:17.859731 2600 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 15:48:17.941276 kubelet[2600]: I1105 15:48:17.941158 2600 apiserver.go:52] "Watching apiserver"
Nov 5 15:48:17.959745 kubelet[2600]: I1105 15:48:17.959733 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 5 15:48:17.960003 kubelet[2600]: I1105 15:48:17.959971 2600 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 15:48:17.965672 kubelet[2600]: E1105 15:48:17.965655 2600 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Nov 5 15:48:17.965858 kubelet[2600]: I1105 15:48:17.965751 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:48:17.967282 kubelet[2600]: E1105 15:48:17.967269 2600 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:48:17.967446 kubelet[2600]: I1105 15:48:17.967342 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 15:48:17.968172 kubelet[2600]: E1105 15:48:17.968163 2600 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Nov 5 15:48:17.990812 kubelet[2600]: I1105 15:48:17.990795 2600 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 15:48:17.992120 kubelet[2600]: E1105 15:48:17.992081 2600 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Nov 5 15:48:18.400696 systemd[1]: Started sshd@8-139.178.70.100:22-87.236.176.219:42897.service - OpenSSH per-connection server daemon (87.236.176.219:42897).
Nov 5 15:48:18.897921 sshd[2878]: Connection closed by 87.236.176.219 port 42897 [preauth]
Nov 5 15:48:18.899169 systemd[1]: sshd@8-139.178.70.100:22-87.236.176.219:42897.service: Deactivated successfully.
Nov 5 15:48:19.035084 systemd[1]: Reload requested from client PID 2884 ('systemctl') (unit session-9.scope)...
Nov 5 15:48:19.035141 systemd[1]: Reloading...
Nov 5 15:48:19.093177 zram_generator::config[2928]: No configuration found.
Nov 5 15:48:19.177953 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Nov 5 15:48:19.253943 systemd[1]: Reloading finished in 218 ms.
Nov 5 15:48:19.284243 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:48:19.298296 systemd[1]: kubelet.service: Deactivated successfully.
Nov 5 15:48:19.298477 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:48:19.298510 systemd[1]: kubelet.service: Consumed 563ms CPU time, 129.7M memory peak.
Nov 5 15:48:19.300631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:48:19.668757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:48:19.678294 (kubelet)[2996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 15:48:19.719733 kubelet[2996]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 15:48:19.720066 kubelet[2996]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 15:48:19.720066 kubelet[2996]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 15:48:19.723522 kubelet[2996]: I1105 15:48:19.723497 2996 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 15:48:19.727527 kubelet[2996]: I1105 15:48:19.727511 2996 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 5 15:48:19.727527 kubelet[2996]: I1105 15:48:19.727523 2996 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 15:48:19.727654 kubelet[2996]: I1105 15:48:19.727644 2996 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 5 15:48:19.730506 kubelet[2996]: I1105 15:48:19.730493 2996 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 5 15:48:19.737929 kubelet[2996]: I1105 15:48:19.737861 2996 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 15:48:19.744914 kubelet[2996]: I1105 15:48:19.744900 2996 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 15:48:19.747870 kubelet[2996]: I1105 15:48:19.747860 2996 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 5 15:48:19.750464 kubelet[2996]: I1105 15:48:19.750248 2996 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 15:48:19.750464 kubelet[2996]: I1105 15:48:19.750339 2996 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 15:48:19.750464 kubelet[2996]: I1105 15:48:19.750440 2996 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 15:48:19.750464 kubelet[2996]: I1105 15:48:19.750447 2996 container_manager_linux.go:304] "Creating device plugin manager"
Nov 5 15:48:19.750588 kubelet[2996]: I1105 15:48:19.750475 2996 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 15:48:19.751486 kubelet[2996]: I1105 15:48:19.751477 2996 kubelet.go:446] "Attempting to sync node with API server"
Nov 5 15:48:19.751515 kubelet[2996]: I1105 15:48:19.751493 2996 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 15:48:19.751515 kubelet[2996]: I1105 15:48:19.751509 2996 kubelet.go:352] "Adding apiserver pod source"
Nov 5 15:48:19.752182 kubelet[2996]: I1105 15:48:19.751516 2996 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 15:48:19.754380 kubelet[2996]: I1105 15:48:19.754359 2996 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 5 15:48:19.754692 kubelet[2996]: I1105 15:48:19.754685 2996 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 5 15:48:19.757672 kubelet[2996]: I1105 15:48:19.757659 2996 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 5 15:48:19.757705 kubelet[2996]: I1105 15:48:19.757680 2996 server.go:1287] "Started kubelet"
Nov 5 15:48:19.759290 kubelet[2996]: I1105 15:48:19.759279 2996 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 15:48:19.764076 kubelet[2996]: I1105 15:48:19.764056 2996 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 15:48:19.765488 kubelet[2996]: I1105 15:48:19.765480 2996 server.go:479] "Adding debug handlers to kubelet server"
Nov 5 15:48:19.765722 kubelet[2996]: I1105 15:48:19.765712 2996 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 5 15:48:19.766068 kubelet[2996]: I1105 15:48:19.766044 2996 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 15:48:19.766760 kubelet[2996]: I1105 15:48:19.766747 2996 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 5 15:48:19.766842 kubelet[2996]: I1105 15:48:19.766832 2996 reconciler.go:26] "Reconciler: start to sync state"
Nov 5 15:48:19.766917 kubelet[2996]: I1105 15:48:19.766909 2996 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 15:48:19.767223 kubelet[2996]: I1105 15:48:19.767215 2996 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 15:48:19.767668 kubelet[2996]: I1105 15:48:19.767659 2996 factory.go:221] Registration of the systemd container factory successfully
Nov 5 15:48:19.767755 kubelet[2996]: I1105 15:48:19.767745 2996 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 15:48:19.769692 kubelet[2996]: I1105 15:48:19.769674 2996 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 5 15:48:19.770156 kubelet[2996]: I1105 15:48:19.770149 2996 factory.go:221] Registration of the containerd container factory successfully
Nov 5 15:48:19.770874 kubelet[2996]: I1105 15:48:19.770861 2996 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 5 15:48:19.771745 kubelet[2996]: I1105 15:48:19.771734 2996 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 5 15:48:19.771781 kubelet[2996]: I1105 15:48:19.771751 2996 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 15:48:19.771781 kubelet[2996]: I1105 15:48:19.771757 2996 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 5 15:48:19.772719 kubelet[2996]: E1105 15:48:19.772702 2996 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 15:48:19.798248 kubelet[2996]: I1105 15:48:19.798215 2996 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 15:48:19.798248 kubelet[2996]: I1105 15:48:19.798227 2996 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 15:48:19.798248 kubelet[2996]: I1105 15:48:19.798267 2996 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 15:48:19.798376 kubelet[2996]: I1105 15:48:19.798365 2996 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 5 15:48:19.798392 kubelet[2996]: I1105 15:48:19.798371 2996 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 5 15:48:19.798392 kubelet[2996]: I1105 15:48:19.798383 2996 policy_none.go:49] "None policy: Start"
Nov 5 15:48:19.798392 kubelet[2996]: I1105 15:48:19.798388 2996 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 15:48:19.798440 kubelet[2996]: I1105 15:48:19.798393 2996 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 15:48:19.798457 kubelet[2996]: I1105 15:48:19.798447 2996 state_mem.go:75] "Updated machine memory state"
Nov 5 15:48:19.800761 kubelet[2996]: I1105 15:48:19.800748 2996 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 5 15:48:19.800840 kubelet[2996]: I1105 15:48:19.800830 2996 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 15:48:19.800861 kubelet[2996]: I1105 15:48:19.800839 2996 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 15:48:19.802052 kubelet[2996]: I1105 15:48:19.802032 2996 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 15:48:19.805907 kubelet[2996]: E1105 15:48:19.805892 2996 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 15:48:19.873579 kubelet[2996]: I1105 15:48:19.873546 2996 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 5 15:48:19.876904 kubelet[2996]: I1105 15:48:19.876242 2996 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 15:48:19.876904 kubelet[2996]: I1105 15:48:19.876431 2996 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:48:19.907318 kubelet[2996]: I1105 15:48:19.907165 2996 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 15:48:19.911348 kubelet[2996]: I1105 15:48:19.910972 2996 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 5 15:48:19.911477 kubelet[2996]: I1105 15:48:19.911466 2996 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 5 15:48:20.068668 kubelet[2996]: I1105 15:48:20.068485 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:48:20.068668 kubelet[2996]: I1105 15:48:20.068507 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:48:20.068668 kubelet[2996]: I1105 15:48:20.068520 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7db97b10fdb36e474f18a7fefec08e1c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7db97b10fdb36e474f18a7fefec08e1c\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 15:48:20.068668 kubelet[2996]: I1105 15:48:20.068531 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:48:20.068668 kubelet[2996]: I1105 15:48:20.068540 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:48:20.068802 kubelet[2996]: I1105 15:48:20.068551 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 15:48:20.068802 kubelet[2996]: I1105 15:48:20.068559 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost"
Nov 5 15:48:20.068802 kubelet[2996]: I1105 15:48:20.068567 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7db97b10fdb36e474f18a7fefec08e1c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7db97b10fdb36e474f18a7fefec08e1c\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 15:48:20.068802 kubelet[2996]: I1105 15:48:20.068578 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7db97b10fdb36e474f18a7fefec08e1c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7db97b10fdb36e474f18a7fefec08e1c\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 15:48:20.752283 kubelet[2996]: I1105 15:48:20.752256 2996 apiserver.go:52] "Watching apiserver"
Nov 5 15:48:20.767380 kubelet[2996]: I1105 15:48:20.767354 2996 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 15:48:20.793560 kubelet[2996]: I1105 15:48:20.793541 2996 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 5 15:48:20.800284 kubelet[2996]: E1105 15:48:20.800222 2996 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 5 15:48:20.818297 kubelet[2996]: I1105 15:48:20.818258 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8182453600000001 podStartE2EDuration="1.81824536s" podCreationTimestamp="2025-11-05 15:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:48:20.813307681 +0000 UTC m=+1.121611711" watchObservedRunningTime="2025-11-05 15:48:20.81824536 +0000 UTC m=+1.126549381"
Nov 5 15:48:20.818425 kubelet[2996]: I1105 15:48:20.818331 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.818326857 podStartE2EDuration="1.818326857s" podCreationTimestamp="2025-11-05 15:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:48:20.818154844 +0000 UTC m=+1.126458873" watchObservedRunningTime="2025-11-05 15:48:20.818326857 +0000 UTC m=+1.126630880"
Nov 5 15:48:20.822816 kubelet[2996]: I1105 15:48:20.822789 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.822782873 podStartE2EDuration="1.822782873s" podCreationTimestamp="2025-11-05 15:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:48:20.822667893 +0000 UTC m=+1.130971919" watchObservedRunningTime="2025-11-05 15:48:20.822782873 +0000 UTC m=+1.131086895"
Nov 5 15:48:23.994762 kubelet[2996]: I1105 15:48:23.994726 2996 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 5 15:48:23.995476 containerd[1686]: time="2025-11-05T15:48:23.995442174Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 5 15:48:23.995828 kubelet[2996]: I1105 15:48:23.995782 2996 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 5 15:48:24.725591 systemd[1]: Created slice kubepods-besteffort-pod155f48e7_acc7_435f_87aa_b332d3fe1874.slice - libcontainer container kubepods-besteffort-pod155f48e7_acc7_435f_87aa_b332d3fe1874.slice.
Nov 5 15:48:24.801503 kubelet[2996]: I1105 15:48:24.801482 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/155f48e7-acc7-435f-87aa-b332d3fe1874-kube-proxy\") pod \"kube-proxy-t429n\" (UID: \"155f48e7-acc7-435f-87aa-b332d3fe1874\") " pod="kube-system/kube-proxy-t429n" Nov 5 15:48:24.801692 kubelet[2996]: I1105 15:48:24.801508 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/155f48e7-acc7-435f-87aa-b332d3fe1874-xtables-lock\") pod \"kube-proxy-t429n\" (UID: \"155f48e7-acc7-435f-87aa-b332d3fe1874\") " pod="kube-system/kube-proxy-t429n" Nov 5 15:48:24.801692 kubelet[2996]: I1105 15:48:24.801520 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/155f48e7-acc7-435f-87aa-b332d3fe1874-lib-modules\") pod \"kube-proxy-t429n\" (UID: \"155f48e7-acc7-435f-87aa-b332d3fe1874\") " pod="kube-system/kube-proxy-t429n" Nov 5 15:48:24.801692 kubelet[2996]: I1105 15:48:24.801530 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgpl6\" (UniqueName: \"kubernetes.io/projected/155f48e7-acc7-435f-87aa-b332d3fe1874-kube-api-access-xgpl6\") pod \"kube-proxy-t429n\" (UID: \"155f48e7-acc7-435f-87aa-b332d3fe1874\") " pod="kube-system/kube-proxy-t429n" Nov 5 15:48:25.033209 containerd[1686]: time="2025-11-05T15:48:25.033129459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t429n,Uid:155f48e7-acc7-435f-87aa-b332d3fe1874,Namespace:kube-system,Attempt:0,}" Nov 5 15:48:25.055402 containerd[1686]: time="2025-11-05T15:48:25.055318474Z" level=info msg="connecting to shim 7f60ca5431a0ff6547735b0a74d217a72f1329204dfcc50c30459fd78a474c21" 
address="unix:///run/containerd/s/2fbfe8502683b0490afc6d90e08a25b3384b888c7ba149b0b58ff6e50c7958c6" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:25.076257 systemd[1]: Started cri-containerd-7f60ca5431a0ff6547735b0a74d217a72f1329204dfcc50c30459fd78a474c21.scope - libcontainer container 7f60ca5431a0ff6547735b0a74d217a72f1329204dfcc50c30459fd78a474c21. Nov 5 15:48:25.095395 containerd[1686]: time="2025-11-05T15:48:25.095368198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t429n,Uid:155f48e7-acc7-435f-87aa-b332d3fe1874,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f60ca5431a0ff6547735b0a74d217a72f1329204dfcc50c30459fd78a474c21\"" Nov 5 15:48:25.097431 containerd[1686]: time="2025-11-05T15:48:25.097245203Z" level=info msg="CreateContainer within sandbox \"7f60ca5431a0ff6547735b0a74d217a72f1329204dfcc50c30459fd78a474c21\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 15:48:25.128457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4172590503.mount: Deactivated successfully. 
Nov 5 15:48:25.131068 containerd[1686]: time="2025-11-05T15:48:25.130788584Z" level=info msg="Container d54f709345f30c71451bc7c53a1c0208682be1af53b9d7c8c57a895ced0063be: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:48:25.150582 containerd[1686]: time="2025-11-05T15:48:25.150549668Z" level=info msg="CreateContainer within sandbox \"7f60ca5431a0ff6547735b0a74d217a72f1329204dfcc50c30459fd78a474c21\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d54f709345f30c71451bc7c53a1c0208682be1af53b9d7c8c57a895ced0063be\"" Nov 5 15:48:25.153426 containerd[1686]: time="2025-11-05T15:48:25.153397730Z" level=info msg="StartContainer for \"d54f709345f30c71451bc7c53a1c0208682be1af53b9d7c8c57a895ced0063be\"" Nov 5 15:48:25.158349 containerd[1686]: time="2025-11-05T15:48:25.158271374Z" level=info msg="connecting to shim d54f709345f30c71451bc7c53a1c0208682be1af53b9d7c8c57a895ced0063be" address="unix:///run/containerd/s/2fbfe8502683b0490afc6d90e08a25b3384b888c7ba149b0b58ff6e50c7958c6" protocol=ttrpc version=3 Nov 5 15:48:25.161202 systemd[1]: Created slice kubepods-besteffort-pod6dda7b9e_3a19_42bc_b541_70a57f255d7a.slice - libcontainer container kubepods-besteffort-pod6dda7b9e_3a19_42bc_b541_70a57f255d7a.slice. Nov 5 15:48:25.183266 systemd[1]: Started cri-containerd-d54f709345f30c71451bc7c53a1c0208682be1af53b9d7c8c57a895ced0063be.scope - libcontainer container d54f709345f30c71451bc7c53a1c0208682be1af53b9d7c8c57a895ced0063be. 
Nov 5 15:48:25.203610 kubelet[2996]: I1105 15:48:25.203586 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6dda7b9e-3a19-42bc-b541-70a57f255d7a-var-lib-calico\") pod \"tigera-operator-7dcd859c48-tnzzp\" (UID: \"6dda7b9e-3a19-42bc-b541-70a57f255d7a\") " pod="tigera-operator/tigera-operator-7dcd859c48-tnzzp" Nov 5 15:48:25.203610 kubelet[2996]: I1105 15:48:25.203611 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67hf9\" (UniqueName: \"kubernetes.io/projected/6dda7b9e-3a19-42bc-b541-70a57f255d7a-kube-api-access-67hf9\") pod \"tigera-operator-7dcd859c48-tnzzp\" (UID: \"6dda7b9e-3a19-42bc-b541-70a57f255d7a\") " pod="tigera-operator/tigera-operator-7dcd859c48-tnzzp" Nov 5 15:48:25.214779 containerd[1686]: time="2025-11-05T15:48:25.214741285Z" level=info msg="StartContainer for \"d54f709345f30c71451bc7c53a1c0208682be1af53b9d7c8c57a895ced0063be\" returns successfully" Nov 5 15:48:25.466372 containerd[1686]: time="2025-11-05T15:48:25.466337170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-tnzzp,Uid:6dda7b9e-3a19-42bc-b541-70a57f255d7a,Namespace:tigera-operator,Attempt:0,}" Nov 5 15:48:25.478852 containerd[1686]: time="2025-11-05T15:48:25.478807801Z" level=info msg="connecting to shim 9dc25d203caf6e8b6f21f33cbf226009cd2164cd8a0b2f65774c5a84b4221fa3" address="unix:///run/containerd/s/30f9f7817f959f72e666a7c57bd23ec5e6260861de107431d5d5af55f9d439cb" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:25.501247 systemd[1]: Started cri-containerd-9dc25d203caf6e8b6f21f33cbf226009cd2164cd8a0b2f65774c5a84b4221fa3.scope - libcontainer container 9dc25d203caf6e8b6f21f33cbf226009cd2164cd8a0b2f65774c5a84b4221fa3. 
Nov 5 15:48:25.539575 containerd[1686]: time="2025-11-05T15:48:25.539552130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-tnzzp,Uid:6dda7b9e-3a19-42bc-b541-70a57f255d7a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9dc25d203caf6e8b6f21f33cbf226009cd2164cd8a0b2f65774c5a84b4221fa3\"" Nov 5 15:48:25.540629 containerd[1686]: time="2025-11-05T15:48:25.540612057Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 15:48:25.810996 kubelet[2996]: I1105 15:48:25.810856 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t429n" podStartSLOduration=1.810843947 podStartE2EDuration="1.810843947s" podCreationTimestamp="2025-11-05 15:48:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:48:25.810589938 +0000 UTC m=+6.118893963" watchObservedRunningTime="2025-11-05 15:48:25.810843947 +0000 UTC m=+6.119147966" Nov 5 15:48:25.914003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003663604.mount: Deactivated successfully. Nov 5 15:48:26.650303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2385860911.mount: Deactivated successfully. 
Nov 5 15:48:27.325994 containerd[1686]: time="2025-11-05T15:48:27.325966227Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:27.330587 containerd[1686]: time="2025-11-05T15:48:27.330570888Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 5 15:48:27.335556 containerd[1686]: time="2025-11-05T15:48:27.335529041Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:27.340314 containerd[1686]: time="2025-11-05T15:48:27.340287745Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:27.340837 containerd[1686]: time="2025-11-05T15:48:27.340617087Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.799987063s" Nov 5 15:48:27.340837 containerd[1686]: time="2025-11-05T15:48:27.340637898Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 5 15:48:27.351000 containerd[1686]: time="2025-11-05T15:48:27.350973090Z" level=info msg="CreateContainer within sandbox \"9dc25d203caf6e8b6f21f33cbf226009cd2164cd8a0b2f65774c5a84b4221fa3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 15:48:27.371072 containerd[1686]: time="2025-11-05T15:48:27.369371542Z" level=info msg="Container 
a4986a339bc843c299b2c43aaffb7cb067fea0c6ee8f7bc9776192b815272bfd: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:48:27.370060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800608652.mount: Deactivated successfully. Nov 5 15:48:27.383390 containerd[1686]: time="2025-11-05T15:48:27.383367388Z" level=info msg="CreateContainer within sandbox \"9dc25d203caf6e8b6f21f33cbf226009cd2164cd8a0b2f65774c5a84b4221fa3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a4986a339bc843c299b2c43aaffb7cb067fea0c6ee8f7bc9776192b815272bfd\"" Nov 5 15:48:27.384399 containerd[1686]: time="2025-11-05T15:48:27.383784085Z" level=info msg="StartContainer for \"a4986a339bc843c299b2c43aaffb7cb067fea0c6ee8f7bc9776192b815272bfd\"" Nov 5 15:48:27.384458 containerd[1686]: time="2025-11-05T15:48:27.384389035Z" level=info msg="connecting to shim a4986a339bc843c299b2c43aaffb7cb067fea0c6ee8f7bc9776192b815272bfd" address="unix:///run/containerd/s/30f9f7817f959f72e666a7c57bd23ec5e6260861de107431d5d5af55f9d439cb" protocol=ttrpc version=3 Nov 5 15:48:27.400186 systemd[1]: Started cri-containerd-a4986a339bc843c299b2c43aaffb7cb067fea0c6ee8f7bc9776192b815272bfd.scope - libcontainer container a4986a339bc843c299b2c43aaffb7cb067fea0c6ee8f7bc9776192b815272bfd. 
Nov 5 15:48:27.421430 containerd[1686]: time="2025-11-05T15:48:27.421404287Z" level=info msg="StartContainer for \"a4986a339bc843c299b2c43aaffb7cb067fea0c6ee8f7bc9776192b815272bfd\" returns successfully" Nov 5 15:48:27.812794 kubelet[2996]: I1105 15:48:27.812069 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-tnzzp" podStartSLOduration=1.011250044 podStartE2EDuration="2.812056773s" podCreationTimestamp="2025-11-05 15:48:25 +0000 UTC" firstStartedPulling="2025-11-05 15:48:25.540245549 +0000 UTC m=+5.848549564" lastFinishedPulling="2025-11-05 15:48:27.34105228 +0000 UTC m=+7.649356293" observedRunningTime="2025-11-05 15:48:27.811985334 +0000 UTC m=+8.120289358" watchObservedRunningTime="2025-11-05 15:48:27.812056773 +0000 UTC m=+8.120360798" Nov 5 15:48:33.264539 sudo[1999]: pam_unix(sudo:session): session closed for user root Nov 5 15:48:33.265457 sshd[1998]: Connection closed by 139.178.89.65 port 39170 Nov 5 15:48:33.266201 sshd-session[1995]: pam_unix(sshd:session): session closed for user core Nov 5 15:48:33.271490 systemd[1]: sshd@6-139.178.70.100:22-139.178.89.65:39170.service: Deactivated successfully. Nov 5 15:48:33.272796 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 15:48:33.272922 systemd[1]: session-9.scope: Consumed 3.157s CPU time, 150M memory peak. Nov 5 15:48:33.275450 systemd-logind[1651]: Session 9 logged out. Waiting for processes to exit. Nov 5 15:48:33.276770 systemd-logind[1651]: Removed session 9. Nov 5 15:48:37.378910 systemd[1]: Created slice kubepods-besteffort-pod3c6aae37_203b_493a_b925_904ca6bff87f.slice - libcontainer container kubepods-besteffort-pod3c6aae37_203b_493a_b925_904ca6bff87f.slice. 
Nov 5 15:48:37.481680 kubelet[2996]: I1105 15:48:37.481641 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3c6aae37-203b-493a-b925-904ca6bff87f-typha-certs\") pod \"calico-typha-58c88c569d-p97ds\" (UID: \"3c6aae37-203b-493a-b925-904ca6bff87f\") " pod="calico-system/calico-typha-58c88c569d-p97ds" Nov 5 15:48:37.482028 kubelet[2996]: I1105 15:48:37.481971 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kkzn\" (UniqueName: \"kubernetes.io/projected/3c6aae37-203b-493a-b925-904ca6bff87f-kube-api-access-9kkzn\") pod \"calico-typha-58c88c569d-p97ds\" (UID: \"3c6aae37-203b-493a-b925-904ca6bff87f\") " pod="calico-system/calico-typha-58c88c569d-p97ds" Nov 5 15:48:37.482028 kubelet[2996]: I1105 15:48:37.481991 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c6aae37-203b-493a-b925-904ca6bff87f-tigera-ca-bundle\") pod \"calico-typha-58c88c569d-p97ds\" (UID: \"3c6aae37-203b-493a-b925-904ca6bff87f\") " pod="calico-system/calico-typha-58c88c569d-p97ds" Nov 5 15:48:37.692900 systemd[1]: Created slice kubepods-besteffort-pod71955859_4b19_4c1d_8cc2_cba443e9408b.slice - libcontainer container kubepods-besteffort-pod71955859_4b19_4c1d_8cc2_cba443e9408b.slice. 
Nov 5 15:48:37.694844 containerd[1686]: time="2025-11-05T15:48:37.694755646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58c88c569d-p97ds,Uid:3c6aae37-203b-493a-b925-904ca6bff87f,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:37.782905 kubelet[2996]: I1105 15:48:37.782876 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/71955859-4b19-4c1d-8cc2-cba443e9408b-cni-net-dir\") pod \"calico-node-wmrwf\" (UID: \"71955859-4b19-4c1d-8cc2-cba443e9408b\") " pod="calico-system/calico-node-wmrwf" Nov 5 15:48:37.782905 kubelet[2996]: I1105 15:48:37.782908 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/71955859-4b19-4c1d-8cc2-cba443e9408b-policysync\") pod \"calico-node-wmrwf\" (UID: \"71955859-4b19-4c1d-8cc2-cba443e9408b\") " pod="calico-system/calico-node-wmrwf" Nov 5 15:48:37.783032 kubelet[2996]: I1105 15:48:37.782950 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71955859-4b19-4c1d-8cc2-cba443e9408b-lib-modules\") pod \"calico-node-wmrwf\" (UID: \"71955859-4b19-4c1d-8cc2-cba443e9408b\") " pod="calico-system/calico-node-wmrwf" Nov 5 15:48:37.783032 kubelet[2996]: I1105 15:48:37.782962 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/71955859-4b19-4c1d-8cc2-cba443e9408b-var-lib-calico\") pod \"calico-node-wmrwf\" (UID: \"71955859-4b19-4c1d-8cc2-cba443e9408b\") " pod="calico-system/calico-node-wmrwf" Nov 5 15:48:37.783032 kubelet[2996]: I1105 15:48:37.782971 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/71955859-4b19-4c1d-8cc2-cba443e9408b-var-run-calico\") pod \"calico-node-wmrwf\" (UID: \"71955859-4b19-4c1d-8cc2-cba443e9408b\") " pod="calico-system/calico-node-wmrwf" Nov 5 15:48:37.783032 kubelet[2996]: I1105 15:48:37.782983 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/71955859-4b19-4c1d-8cc2-cba443e9408b-flexvol-driver-host\") pod \"calico-node-wmrwf\" (UID: \"71955859-4b19-4c1d-8cc2-cba443e9408b\") " pod="calico-system/calico-node-wmrwf" Nov 5 15:48:37.783179 kubelet[2996]: I1105 15:48:37.783043 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjvnb\" (UniqueName: \"kubernetes.io/projected/71955859-4b19-4c1d-8cc2-cba443e9408b-kube-api-access-kjvnb\") pod \"calico-node-wmrwf\" (UID: \"71955859-4b19-4c1d-8cc2-cba443e9408b\") " pod="calico-system/calico-node-wmrwf" Nov 5 15:48:37.783179 kubelet[2996]: I1105 15:48:37.783058 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71955859-4b19-4c1d-8cc2-cba443e9408b-xtables-lock\") pod \"calico-node-wmrwf\" (UID: \"71955859-4b19-4c1d-8cc2-cba443e9408b\") " pod="calico-system/calico-node-wmrwf" Nov 5 15:48:37.783179 kubelet[2996]: I1105 15:48:37.783072 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/71955859-4b19-4c1d-8cc2-cba443e9408b-cni-log-dir\") pod \"calico-node-wmrwf\" (UID: \"71955859-4b19-4c1d-8cc2-cba443e9408b\") " pod="calico-system/calico-node-wmrwf" Nov 5 15:48:37.783179 kubelet[2996]: I1105 15:48:37.783111 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/71955859-4b19-4c1d-8cc2-cba443e9408b-node-certs\") pod \"calico-node-wmrwf\" (UID: \"71955859-4b19-4c1d-8cc2-cba443e9408b\") " pod="calico-system/calico-node-wmrwf" Nov 5 15:48:37.783179 kubelet[2996]: I1105 15:48:37.783124 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/71955859-4b19-4c1d-8cc2-cba443e9408b-cni-bin-dir\") pod \"calico-node-wmrwf\" (UID: \"71955859-4b19-4c1d-8cc2-cba443e9408b\") " pod="calico-system/calico-node-wmrwf" Nov 5 15:48:37.783312 kubelet[2996]: I1105 15:48:37.783134 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71955859-4b19-4c1d-8cc2-cba443e9408b-tigera-ca-bundle\") pod \"calico-node-wmrwf\" (UID: \"71955859-4b19-4c1d-8cc2-cba443e9408b\") " pod="calico-system/calico-node-wmrwf" Nov 5 15:48:37.911344 kubelet[2996]: E1105 15:48:37.910484 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.911344 kubelet[2996]: W1105 15:48:37.910496 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.911344 kubelet[2996]: E1105 15:48:37.910511 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.914127 kubelet[2996]: E1105 15:48:37.912819 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hj2w" podUID="5c5e9d05-7eb8-471e-907b-c18b5992bb51" Nov 5 15:48:37.921853 containerd[1686]: time="2025-11-05T15:48:37.921829164Z" level=info msg="connecting to shim 5556d2b8d355359722a479be68fc29804d5075b79ce1589c59673f7f012d4542" address="unix:///run/containerd/s/926b158278d0ca69c29bcbfc1bfd9116196a3fd14dd368bf37e6ccf7298ad170" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:37.943382 systemd[1]: Started cri-containerd-5556d2b8d355359722a479be68fc29804d5075b79ce1589c59673f7f012d4542.scope - libcontainer container 5556d2b8d355359722a479be68fc29804d5075b79ce1589c59673f7f012d4542. Nov 5 15:48:37.970363 kubelet[2996]: E1105 15:48:37.970343 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.970460 kubelet[2996]: W1105 15:48:37.970450 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.970501 kubelet[2996]: E1105 15:48:37.970494 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.970663 kubelet[2996]: E1105 15:48:37.970612 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.970663 kubelet[2996]: W1105 15:48:37.970618 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.970663 kubelet[2996]: E1105 15:48:37.970624 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:37.970786 kubelet[2996]: E1105 15:48:37.970781 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.970880 kubelet[2996]: W1105 15:48:37.970814 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.970880 kubelet[2996]: E1105 15:48:37.970820 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.971525 kubelet[2996]: E1105 15:48:37.971268 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.971848 kubelet[2996]: W1105 15:48:37.971828 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.971848 kubelet[2996]: E1105 15:48:37.971841 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:37.972317 kubelet[2996]: E1105 15:48:37.972148 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.972317 kubelet[2996]: W1105 15:48:37.972159 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.972317 kubelet[2996]: E1105 15:48:37.972170 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.972502 kubelet[2996]: E1105 15:48:37.972417 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.972502 kubelet[2996]: W1105 15:48:37.972425 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.972502 kubelet[2996]: E1105 15:48:37.972433 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:37.972976 kubelet[2996]: E1105 15:48:37.972966 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.972976 kubelet[2996]: W1105 15:48:37.972974 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.973127 kubelet[2996]: E1105 15:48:37.972981 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.973331 kubelet[2996]: E1105 15:48:37.973244 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.973331 kubelet[2996]: W1105 15:48:37.973255 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.973331 kubelet[2996]: E1105 15:48:37.973262 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:37.973750 kubelet[2996]: E1105 15:48:37.973739 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.973781 kubelet[2996]: W1105 15:48:37.973751 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.973781 kubelet[2996]: E1105 15:48:37.973760 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.973987 kubelet[2996]: E1105 15:48:37.973937 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.973987 kubelet[2996]: W1105 15:48:37.973945 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.973987 kubelet[2996]: E1105 15:48:37.973954 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:37.974210 kubelet[2996]: E1105 15:48:37.974200 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.974210 kubelet[2996]: W1105 15:48:37.974207 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.974287 kubelet[2996]: E1105 15:48:37.974213 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.975192 kubelet[2996]: E1105 15:48:37.975181 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.975192 kubelet[2996]: W1105 15:48:37.975191 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.975246 kubelet[2996]: E1105 15:48:37.975201 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:37.975329 kubelet[2996]: E1105 15:48:37.975318 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.975329 kubelet[2996]: W1105 15:48:37.975328 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.975396 kubelet[2996]: E1105 15:48:37.975336 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.976115 kubelet[2996]: E1105 15:48:37.976102 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.976159 kubelet[2996]: W1105 15:48:37.976115 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.976159 kubelet[2996]: E1105 15:48:37.976128 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:37.976242 kubelet[2996]: E1105 15:48:37.976233 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.976270 kubelet[2996]: W1105 15:48:37.976242 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.976270 kubelet[2996]: E1105 15:48:37.976250 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.976404 kubelet[2996]: E1105 15:48:37.976343 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.976404 kubelet[2996]: W1105 15:48:37.976349 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.976404 kubelet[2996]: E1105 15:48:37.976356 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:37.976591 kubelet[2996]: E1105 15:48:37.976453 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.976591 kubelet[2996]: W1105 15:48:37.976462 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.976591 kubelet[2996]: E1105 15:48:37.976469 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.977142 kubelet[2996]: E1105 15:48:37.977130 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.977142 kubelet[2996]: W1105 15:48:37.977142 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.977195 kubelet[2996]: E1105 15:48:37.977152 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:37.977264 kubelet[2996]: E1105 15:48:37.977253 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.977264 kubelet[2996]: W1105 15:48:37.977261 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.977323 kubelet[2996]: E1105 15:48:37.977268 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.977554 kubelet[2996]: E1105 15:48:37.977363 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.977554 kubelet[2996]: W1105 15:48:37.977369 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.977554 kubelet[2996]: E1105 15:48:37.977377 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:37.984736 kubelet[2996]: E1105 15:48:37.984713 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.984736 kubelet[2996]: W1105 15:48:37.984731 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.985137 kubelet[2996]: E1105 15:48:37.984750 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.985137 kubelet[2996]: I1105 15:48:37.984775 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5c5e9d05-7eb8-471e-907b-c18b5992bb51-socket-dir\") pod \"csi-node-driver-9hj2w\" (UID: \"5c5e9d05-7eb8-471e-907b-c18b5992bb51\") " pod="calico-system/csi-node-driver-9hj2w" Nov 5 15:48:37.985137 kubelet[2996]: E1105 15:48:37.985034 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.985137 kubelet[2996]: W1105 15:48:37.985126 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.985250 kubelet[2996]: E1105 15:48:37.985147 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.985250 kubelet[2996]: I1105 15:48:37.985165 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5c5e9d05-7eb8-471e-907b-c18b5992bb51-varrun\") pod \"csi-node-driver-9hj2w\" (UID: \"5c5e9d05-7eb8-471e-907b-c18b5992bb51\") " pod="calico-system/csi-node-driver-9hj2w" Nov 5 15:48:37.985767 kubelet[2996]: E1105 15:48:37.985752 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.985803 kubelet[2996]: W1105 15:48:37.985766 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.985803 kubelet[2996]: E1105 15:48:37.985783 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.985803 kubelet[2996]: I1105 15:48:37.985800 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c5e9d05-7eb8-471e-907b-c18b5992bb51-kubelet-dir\") pod \"csi-node-driver-9hj2w\" (UID: \"5c5e9d05-7eb8-471e-907b-c18b5992bb51\") " pod="calico-system/csi-node-driver-9hj2w" Nov 5 15:48:37.986258 kubelet[2996]: E1105 15:48:37.986015 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.986258 kubelet[2996]: W1105 15:48:37.986023 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.986258 kubelet[2996]: E1105 15:48:37.986032 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.986258 kubelet[2996]: I1105 15:48:37.986050 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpvz7\" (UniqueName: \"kubernetes.io/projected/5c5e9d05-7eb8-471e-907b-c18b5992bb51-kube-api-access-kpvz7\") pod \"csi-node-driver-9hj2w\" (UID: \"5c5e9d05-7eb8-471e-907b-c18b5992bb51\") " pod="calico-system/csi-node-driver-9hj2w" Nov 5 15:48:37.986752 kubelet[2996]: E1105 15:48:37.986731 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.986752 kubelet[2996]: W1105 15:48:37.986741 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.986874 kubelet[2996]: E1105 15:48:37.986820 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:37.987063 kubelet[2996]: E1105 15:48:37.987019 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.987063 kubelet[2996]: W1105 15:48:37.987026 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.987063 kubelet[2996]: E1105 15:48:37.987058 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.987269 kubelet[2996]: E1105 15:48:37.987241 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.987269 kubelet[2996]: W1105 15:48:37.987248 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.987317 kubelet[2996]: E1105 15:48:37.987281 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:37.987516 kubelet[2996]: E1105 15:48:37.987482 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.987516 kubelet[2996]: W1105 15:48:37.987489 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.987590 kubelet[2996]: E1105 15:48:37.987565 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.987612 kubelet[2996]: I1105 15:48:37.987587 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5c5e9d05-7eb8-471e-907b-c18b5992bb51-registration-dir\") pod \"csi-node-driver-9hj2w\" (UID: \"5c5e9d05-7eb8-471e-907b-c18b5992bb51\") " pod="calico-system/csi-node-driver-9hj2w" Nov 5 15:48:37.988116 kubelet[2996]: E1105 15:48:37.987712 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.988116 kubelet[2996]: W1105 15:48:37.988105 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.988256 kubelet[2996]: E1105 15:48:37.988243 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:37.988313 kubelet[2996]: E1105 15:48:37.988307 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.988382 kubelet[2996]: W1105 15:48:37.988339 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.988382 kubelet[2996]: E1105 15:48:37.988369 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.988457 kubelet[2996]: E1105 15:48:37.988443 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.988521 kubelet[2996]: W1105 15:48:37.988505 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.988521 kubelet[2996]: E1105 15:48:37.988514 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:37.988716 kubelet[2996]: E1105 15:48:37.988709 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.988837 kubelet[2996]: W1105 15:48:37.988768 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.988837 kubelet[2996]: E1105 15:48:37.988778 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.989162 kubelet[2996]: E1105 15:48:37.989033 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.989162 kubelet[2996]: W1105 15:48:37.989039 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.989162 kubelet[2996]: E1105 15:48:37.989045 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:37.989580 kubelet[2996]: E1105 15:48:37.989300 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.989580 kubelet[2996]: W1105 15:48:37.989538 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.989580 kubelet[2996]: E1105 15:48:37.989551 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:37.989752 kubelet[2996]: E1105 15:48:37.989730 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:37.989752 kubelet[2996]: W1105 15:48:37.989736 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:37.989752 kubelet[2996]: E1105 15:48:37.989743 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:37.997077 containerd[1686]: time="2025-11-05T15:48:37.996304380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wmrwf,Uid:71955859-4b19-4c1d-8cc2-cba443e9408b,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:38.028777 containerd[1686]: time="2025-11-05T15:48:38.028754469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58c88c569d-p97ds,Uid:3c6aae37-203b-493a-b925-904ca6bff87f,Namespace:calico-system,Attempt:0,} returns sandbox id \"5556d2b8d355359722a479be68fc29804d5075b79ce1589c59673f7f012d4542\"" Nov 5 15:48:38.029789 containerd[1686]: time="2025-11-05T15:48:38.029772122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 15:48:38.042214 containerd[1686]: time="2025-11-05T15:48:38.042132055Z" level=info msg="connecting to shim 2c4ec6c392f30000a7dadbb9609967884e0bf00b204abec0406886e48c9596c2" address="unix:///run/containerd/s/84471699ff411cebc35a193927506daacaa8fef1e11a217b4d5689dfe6388d02" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:38.060194 systemd[1]: Started cri-containerd-2c4ec6c392f30000a7dadbb9609967884e0bf00b204abec0406886e48c9596c2.scope - libcontainer container 2c4ec6c392f30000a7dadbb9609967884e0bf00b204abec0406886e48c9596c2. 
Nov 5 15:48:38.080611 containerd[1686]: time="2025-11-05T15:48:38.080582273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wmrwf,Uid:71955859-4b19-4c1d-8cc2-cba443e9408b,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c4ec6c392f30000a7dadbb9609967884e0bf00b204abec0406886e48c9596c2\"" Nov 5 15:48:38.089210 kubelet[2996]: E1105 15:48:38.089175 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.089210 kubelet[2996]: W1105 15:48:38.089188 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.089342 kubelet[2996]: E1105 15:48:38.089200 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:38.089479 kubelet[2996]: E1105 15:48:38.089461 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.089479 kubelet[2996]: W1105 15:48:38.089469 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.089624 kubelet[2996]: E1105 15:48:38.089544 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:38.089732 kubelet[2996]: E1105 15:48:38.089727 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.089765 kubelet[2996]: W1105 15:48:38.089760 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.089841 kubelet[2996]: E1105 15:48:38.089801 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:38.089958 kubelet[2996]: E1105 15:48:38.089953 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.089996 kubelet[2996]: W1105 15:48:38.089990 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.090044 kubelet[2996]: E1105 15:48:38.090038 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:38.097169 kubelet[2996]: E1105 15:48:38.090317 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.097169 kubelet[2996]: W1105 15:48:38.090323 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.097169 kubelet[2996]: E1105 15:48:38.090333 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:38.097169 kubelet[2996]: E1105 15:48:38.090436 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.097169 kubelet[2996]: W1105 15:48:38.090441 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.097169 kubelet[2996]: E1105 15:48:38.090457 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:38.097169 kubelet[2996]: E1105 15:48:38.090551 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.097169 kubelet[2996]: W1105 15:48:38.090556 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.097169 kubelet[2996]: E1105 15:48:38.090566 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:38.097169 kubelet[2996]: E1105 15:48:38.090709 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.097372 kubelet[2996]: W1105 15:48:38.090715 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.097372 kubelet[2996]: E1105 15:48:38.090724 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:38.097372 kubelet[2996]: E1105 15:48:38.090803 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.097372 kubelet[2996]: W1105 15:48:38.090808 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.097372 kubelet[2996]: E1105 15:48:38.090816 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:38.097372 kubelet[2996]: E1105 15:48:38.090972 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.097372 kubelet[2996]: W1105 15:48:38.090979 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.097372 kubelet[2996]: E1105 15:48:38.090991 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:38.097372 kubelet[2996]: E1105 15:48:38.092412 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.097372 kubelet[2996]: W1105 15:48:38.092418 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.097896 kubelet[2996]: E1105 15:48:38.092434 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:38.097896 kubelet[2996]: E1105 15:48:38.092584 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.097896 kubelet[2996]: W1105 15:48:38.092589 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.097896 kubelet[2996]: E1105 15:48:38.092617 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:38.097896 kubelet[2996]: E1105 15:48:38.092786 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.097896 kubelet[2996]: W1105 15:48:38.092792 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.097896 kubelet[2996]: E1105 15:48:38.092809 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:38.097896 kubelet[2996]: E1105 15:48:38.092899 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.097896 kubelet[2996]: W1105 15:48:38.092906 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.097896 kubelet[2996]: E1105 15:48:38.093225 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.098321 kubelet[2996]: W1105 15:48:38.093230 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.098321 kubelet[2996]: E1105 15:48:38.093085 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:38.098321 kubelet[2996]: E1105 15:48:38.093446 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:38.098321 kubelet[2996]: E1105 15:48:38.093504 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.098321 kubelet[2996]: W1105 15:48:38.093510 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.098321 kubelet[2996]: E1105 15:48:38.093519 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:38.098321 kubelet[2996]: E1105 15:48:38.093649 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.098321 kubelet[2996]: W1105 15:48:38.093654 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.098321 kubelet[2996]: E1105 15:48:38.093661 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:38.098321 kubelet[2996]: E1105 15:48:38.095208 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.098484 kubelet[2996]: W1105 15:48:38.095216 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.098484 kubelet[2996]: E1105 15:48:38.095256 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:38.098484 kubelet[2996]: E1105 15:48:38.096073 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.098484 kubelet[2996]: W1105 15:48:38.096113 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.098484 kubelet[2996]: E1105 15:48:38.096166 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:38.098484 kubelet[2996]: E1105 15:48:38.096257 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.098484 kubelet[2996]: W1105 15:48:38.096262 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.098484 kubelet[2996]: E1105 15:48:38.096290 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:38.098484 kubelet[2996]: E1105 15:48:38.096393 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:38.098484 kubelet[2996]: W1105 15:48:38.096402 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:38.098638 kubelet[2996]: E1105 15:48:38.096449 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:39.797125 kubelet[2996]: E1105 15:48:39.796447 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hj2w" podUID="5c5e9d05-7eb8-471e-907b-c18b5992bb51" Nov 5 15:48:39.896111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4155560909.mount: Deactivated successfully. 
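The repeated kubelet errors above come from FlexVolume plugin probing: kubelet execs the driver binary at `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` with the argument `init` and expects a JSON status object on stdout, but the executable is not found, so the empty output fails to unmarshal ("unexpected end of JSON input"). As a rough illustration of the contract kubelet's `driver-call.go` checks, here is a minimal sketch of an `init` handler; the script itself is hypothetical, only the driver path and call convention are taken from the log:

```python
import json
import sys

def flexvolume_init() -> str:
    """Build the JSON status that a FlexVolume driver's 'init' call
    must print to stdout for kubelet to parse it."""
    return json.dumps({
        "status": "Success",
        "capabilities": {"attach": False},  # node-local driver: no attach/detach
    })

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        print(flexvolume_init())
    else:
        # Even unimplemented calls must emit parseable JSON,
        # otherwise kubelet logs the unmarshal error seen above.
        print(json.dumps({"status": "Not supported"}))
```

An empty stdout, as in the log, is what turns a missing binary into both the `$PATH` warning and the JSON unmarshal error.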
Nov 5 15:48:40.436856 containerd[1686]: time="2025-11-05T15:48:40.436815392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:40.437454 containerd[1686]: time="2025-11-05T15:48:40.437433338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 5 15:48:40.437621 containerd[1686]: time="2025-11-05T15:48:40.437603254Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:40.439158 containerd[1686]: time="2025-11-05T15:48:40.439139573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:40.439921 containerd[1686]: time="2025-11-05T15:48:40.439822604Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.409958448s" Nov 5 15:48:40.439921 containerd[1686]: time="2025-11-05T15:48:40.439888556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 5 15:48:40.441001 containerd[1686]: time="2025-11-05T15:48:40.440852903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 15:48:40.455067 containerd[1686]: time="2025-11-05T15:48:40.455040335Z" level=info msg="CreateContainer within sandbox \"5556d2b8d355359722a479be68fc29804d5075b79ce1589c59673f7f012d4542\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 15:48:40.460445 containerd[1686]: time="2025-11-05T15:48:40.458227150Z" level=info msg="Container b314750f436e15582b1e12fe5e0ed25e50d787a3b444ef7a2e9381220eeb7bb0: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:48:40.462603 containerd[1686]: time="2025-11-05T15:48:40.462577833Z" level=info msg="CreateContainer within sandbox \"5556d2b8d355359722a479be68fc29804d5075b79ce1589c59673f7f012d4542\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b314750f436e15582b1e12fe5e0ed25e50d787a3b444ef7a2e9381220eeb7bb0\"" Nov 5 15:48:40.463231 containerd[1686]: time="2025-11-05T15:48:40.463078254Z" level=info msg="StartContainer for \"b314750f436e15582b1e12fe5e0ed25e50d787a3b444ef7a2e9381220eeb7bb0\"" Nov 5 15:48:40.483689 containerd[1686]: time="2025-11-05T15:48:40.483597110Z" level=info msg="connecting to shim b314750f436e15582b1e12fe5e0ed25e50d787a3b444ef7a2e9381220eeb7bb0" address="unix:///run/containerd/s/926b158278d0ca69c29bcbfc1bfd9116196a3fd14dd368bf37e6ccf7298ad170" protocol=ttrpc version=3 Nov 5 15:48:40.519184 systemd[1]: Started cri-containerd-b314750f436e15582b1e12fe5e0ed25e50d787a3b444ef7a2e9381220eeb7bb0.scope - libcontainer container b314750f436e15582b1e12fe5e0ed25e50d787a3b444ef7a2e9381220eeb7bb0. 
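The typha pull entry above reports a size of "35234482" bytes fetched in 2.409958448s; dividing the two gives the average pull throughput. A small sketch of that arithmetic (the helper name is ours, the figures come from the log):

```python
def pull_throughput(size_bytes: int, seconds: float) -> float:
    """Average image-pull throughput in MiB/s from a containerd PullImage entry."""
    return size_bytes / seconds / (1024 * 1024)

# Figures from the calico/typha:v3.30.4 pull: size "35234482" in 2.409958448s,
# i.e. roughly 14 MiB/s from ghcr.io on this node.
typha_mibps = pull_throughput(35234482, 2.409958448)
```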
Nov 5 15:48:40.558029 containerd[1686]: time="2025-11-05T15:48:40.557964933Z" level=info msg="StartContainer for \"b314750f436e15582b1e12fe5e0ed25e50d787a3b444ef7a2e9381220eeb7bb0\" returns successfully" Nov 5 15:48:40.846586 kubelet[2996]: I1105 15:48:40.846504 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58c88c569d-p97ds" podStartSLOduration=1.435305681 podStartE2EDuration="3.846488483s" podCreationTimestamp="2025-11-05 15:48:37 +0000 UTC" firstStartedPulling="2025-11-05 15:48:38.029502685 +0000 UTC m=+18.337806699" lastFinishedPulling="2025-11-05 15:48:40.440685489 +0000 UTC m=+20.748989501" observedRunningTime="2025-11-05 15:48:40.846205436 +0000 UTC m=+21.154509459" watchObservedRunningTime="2025-11-05 15:48:40.846488483 +0000 UTC m=+21.154792502" Nov 5 15:48:40.904453 kubelet[2996]: E1105 15:48:40.904430 2996 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:40.904453 kubelet[2996]: W1105 15:48:40.904446 2996 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:40.917095 kubelet[2996]: E1105 15:48:40.917076 2996 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 

Error: unexpected end of JSON input" Nov 5 15:48:41.769486 containerd[1686]: time="2025-11-05T15:48:41.769454227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:41.769958 containerd[1686]: time="2025-11-05T15:48:41.769836991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 5 15:48:41.770293 containerd[1686]: time="2025-11-05T15:48:41.770270517Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:41.771387 containerd[1686]: time="2025-11-05T15:48:41.771363663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:41.771793 containerd[1686]: time="2025-11-05T15:48:41.771776488Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.330774731s" Nov 5 15:48:41.771829 containerd[1686]: time="2025-11-05T15:48:41.771795518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 5 15:48:41.772808 kubelet[2996]: E1105 15:48:41.772779 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hj2w" podUID="5c5e9d05-7eb8-471e-907b-c18b5992bb51" Nov 5 15:48:41.774869 containerd[1686]: time="2025-11-05T15:48:41.774814960Z" level=info msg="CreateContainer within sandbox \"2c4ec6c392f30000a7dadbb9609967884e0bf00b204abec0406886e48c9596c2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 15:48:41.792728 containerd[1686]: time="2025-11-05T15:48:41.792072911Z" level=info msg="Container c2ae7a4ad047ce340f95c3a61e46d9d70b15c406e299f304be59b6316f291f56: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:48:41.794332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4163312517.mount: Deactivated successfully. Nov 5 15:48:41.798004 containerd[1686]: time="2025-11-05T15:48:41.797977469Z" level=info msg="CreateContainer within sandbox \"2c4ec6c392f30000a7dadbb9609967884e0bf00b204abec0406886e48c9596c2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c2ae7a4ad047ce340f95c3a61e46d9d70b15c406e299f304be59b6316f291f56\"" Nov 5 15:48:41.799489 containerd[1686]: time="2025-11-05T15:48:41.798703373Z" level=info msg="StartContainer for \"c2ae7a4ad047ce340f95c3a61e46d9d70b15c406e299f304be59b6316f291f56\"" Nov 5 15:48:41.800752 containerd[1686]: time="2025-11-05T15:48:41.800485583Z" level=info msg="connecting to shim c2ae7a4ad047ce340f95c3a61e46d9d70b15c406e299f304be59b6316f291f56" address="unix:///run/containerd/s/84471699ff411cebc35a193927506daacaa8fef1e11a217b4d5689dfe6388d02" protocol=ttrpc version=3 Nov 5 15:48:41.821211 systemd[1]: Started cri-containerd-c2ae7a4ad047ce340f95c3a61e46d9d70b15c406e299f304be59b6316f291f56.scope - libcontainer container c2ae7a4ad047ce340f95c3a61e46d9d70b15c406e299f304be59b6316f291f56. Nov 5 15:48:41.877695 systemd[1]: cri-containerd-c2ae7a4ad047ce340f95c3a61e46d9d70b15c406e299f304be59b6316f291f56.scope: Deactivated successfully. 
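The pod_startup_latency_tracker entry earlier (15:48:40.846, pod calico-typha-58c88c569d-p97ds) reports podStartSLOduration=1.435305681 against podStartE2EDuration=3.846488483s: the SLO figure excludes time spent pulling images, which can be checked from the monotonic `m=+...` offsets in the same entry. A sketch of that relationship (the helper name is ours, the offsets are from the log):

```python
def slo_duration(e2e: float, first_pull_started: float, last_pull_finished: float) -> float:
    """podStartSLOduration = end-to-end startup duration minus the image-pull
    window, using the kubelet's monotonic (m=+...) clock offsets."""
    return e2e - (last_pull_finished - first_pull_started)

# Offsets from the log entry: firstStartedPulling m=+18.337806699,
# lastFinishedPulling m=+20.748989501, podStartE2EDuration 3.846488483s.
slo = slo_duration(3.846488483, 18.337806699, 20.748989501)
```

The result matches the logged podStartSLOduration, confirming the decomposition.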
Nov 5 15:48:41.887994 containerd[1686]: time="2025-11-05T15:48:41.883921252Z" level=info msg="StartContainer for \"c2ae7a4ad047ce340f95c3a61e46d9d70b15c406e299f304be59b6316f291f56\" returns successfully" Nov 5 15:48:41.900474 containerd[1686]: time="2025-11-05T15:48:41.900436190Z" level=info msg="received exit event container_id:\"c2ae7a4ad047ce340f95c3a61e46d9d70b15c406e299f304be59b6316f291f56\" id:\"c2ae7a4ad047ce340f95c3a61e46d9d70b15c406e299f304be59b6316f291f56\" pid:3666 exited_at:{seconds:1762357721 nanos:893361221}" Nov 5 15:48:41.937586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2ae7a4ad047ce340f95c3a61e46d9d70b15c406e299f304be59b6316f291f56-rootfs.mount: Deactivated successfully. Nov 5 15:48:41.944022 containerd[1686]: time="2025-11-05T15:48:41.943757010Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2ae7a4ad047ce340f95c3a61e46d9d70b15c406e299f304be59b6316f291f56\" id:\"c2ae7a4ad047ce340f95c3a61e46d9d70b15c406e299f304be59b6316f291f56\" pid:3666 exited_at:{seconds:1762357721 nanos:893361221}" Nov 5 15:48:42.857115 containerd[1686]: time="2025-11-05T15:48:42.856966420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 15:48:43.772793 kubelet[2996]: E1105 15:48:43.772562 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hj2w" podUID="5c5e9d05-7eb8-471e-907b-c18b5992bb51" Nov 5 15:48:45.639859 containerd[1686]: time="2025-11-05T15:48:45.639819595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:45.640740 containerd[1686]: time="2025-11-05T15:48:45.640639493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 5 
15:48:45.641791 containerd[1686]: time="2025-11-05T15:48:45.640985782Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:45.642034 containerd[1686]: time="2025-11-05T15:48:45.642017894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:45.642488 containerd[1686]: time="2025-11-05T15:48:45.642473545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.785480859s" Nov 5 15:48:45.642541 containerd[1686]: time="2025-11-05T15:48:45.642533144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 5 15:48:45.644032 containerd[1686]: time="2025-11-05T15:48:45.644010416Z" level=info msg="CreateContainer within sandbox \"2c4ec6c392f30000a7dadbb9609967884e0bf00b204abec0406886e48c9596c2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 15:48:45.657538 containerd[1686]: time="2025-11-05T15:48:45.655115336Z" level=info msg="Container 6f226f11fa7a21f1b52e4706d948e41d66e6a49daa5de031aa4e0ac1e01314b3: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:48:45.666222 containerd[1686]: time="2025-11-05T15:48:45.666146287Z" level=info msg="CreateContainer within sandbox \"2c4ec6c392f30000a7dadbb9609967884e0bf00b204abec0406886e48c9596c2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"6f226f11fa7a21f1b52e4706d948e41d66e6a49daa5de031aa4e0ac1e01314b3\"" Nov 5 15:48:45.666806 containerd[1686]: time="2025-11-05T15:48:45.666702365Z" level=info msg="StartContainer for \"6f226f11fa7a21f1b52e4706d948e41d66e6a49daa5de031aa4e0ac1e01314b3\"" Nov 5 15:48:45.667838 containerd[1686]: time="2025-11-05T15:48:45.667802126Z" level=info msg="connecting to shim 6f226f11fa7a21f1b52e4706d948e41d66e6a49daa5de031aa4e0ac1e01314b3" address="unix:///run/containerd/s/84471699ff411cebc35a193927506daacaa8fef1e11a217b4d5689dfe6388d02" protocol=ttrpc version=3 Nov 5 15:48:45.689297 systemd[1]: Started cri-containerd-6f226f11fa7a21f1b52e4706d948e41d66e6a49daa5de031aa4e0ac1e01314b3.scope - libcontainer container 6f226f11fa7a21f1b52e4706d948e41d66e6a49daa5de031aa4e0ac1e01314b3. Nov 5 15:48:45.720733 containerd[1686]: time="2025-11-05T15:48:45.720659261Z" level=info msg="StartContainer for \"6f226f11fa7a21f1b52e4706d948e41d66e6a49daa5de031aa4e0ac1e01314b3\" returns successfully" Nov 5 15:48:45.802446 kubelet[2996]: E1105 15:48:45.802345 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hj2w" podUID="5c5e9d05-7eb8-471e-907b-c18b5992bb51" Nov 5 15:48:46.744007 systemd[1]: cri-containerd-6f226f11fa7a21f1b52e4706d948e41d66e6a49daa5de031aa4e0ac1e01314b3.scope: Deactivated successfully. Nov 5 15:48:46.744915 systemd[1]: cri-containerd-6f226f11fa7a21f1b52e4706d948e41d66e6a49daa5de031aa4e0ac1e01314b3.scope: Consumed 315ms CPU time, 165.6M memory peak, 284K read from disk, 171.3M written to disk. 
Nov 5 15:48:46.796550 containerd[1686]: time="2025-11-05T15:48:46.796522587Z" level=info msg="received exit event container_id:\"6f226f11fa7a21f1b52e4706d948e41d66e6a49daa5de031aa4e0ac1e01314b3\" id:\"6f226f11fa7a21f1b52e4706d948e41d66e6a49daa5de031aa4e0ac1e01314b3\" pid:3723 exited_at:{seconds:1762357726 nanos:796393739}" Nov 5 15:48:46.796888 containerd[1686]: time="2025-11-05T15:48:46.796635313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f226f11fa7a21f1b52e4706d948e41d66e6a49daa5de031aa4e0ac1e01314b3\" id:\"6f226f11fa7a21f1b52e4706d948e41d66e6a49daa5de031aa4e0ac1e01314b3\" pid:3723 exited_at:{seconds:1762357726 nanos:796393739}" Nov 5 15:48:46.808560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f226f11fa7a21f1b52e4706d948e41d66e6a49daa5de031aa4e0ac1e01314b3-rootfs.mount: Deactivated successfully. Nov 5 15:48:46.816567 kubelet[2996]: I1105 15:48:46.816551 2996 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 15:48:46.976638 systemd[1]: Created slice kubepods-burstable-pod3e7a2b0d_7ec6_4b5b_b6bd_9ce0b5174a51.slice - libcontainer container kubepods-burstable-pod3e7a2b0d_7ec6_4b5b_b6bd_9ce0b5174a51.slice. Nov 5 15:48:46.994280 systemd[1]: Created slice kubepods-besteffort-pod12c26640_e854_41d4_a3c6_4ceffbd5949e.slice - libcontainer container kubepods-besteffort-pod12c26640_e854_41d4_a3c6_4ceffbd5949e.slice. Nov 5 15:48:47.004760 systemd[1]: Created slice kubepods-besteffort-pod1353ebb0_4e2b_4c92_bf7e_cc5b1b71054e.slice - libcontainer container kubepods-besteffort-pod1353ebb0_4e2b_4c92_bf7e_cc5b1b71054e.slice. Nov 5 15:48:47.009919 systemd[1]: Created slice kubepods-besteffort-pod0fbf3e00_38e0_4753_acbc_319c3080ae42.slice - libcontainer container kubepods-besteffort-pod0fbf3e00_38e0_4753_acbc_319c3080ae42.slice. 
Nov 5 15:48:47.011353 kubelet[2996]: W1105 15:48:47.008938 2996 reflector.go:569] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Nov 5 15:48:47.016195 kubelet[2996]: W1105 15:48:47.013623 2996 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Nov 5 15:48:47.016195 kubelet[2996]: E1105 15:48:47.015564 2996 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Nov 5 15:48:47.016195 kubelet[2996]: W1105 15:48:47.015641 2996 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Nov 5 15:48:47.016195 kubelet[2996]: E1105 15:48:47.015660 2996 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" 
logger="UnhandledError" Nov 5 15:48:47.016340 kubelet[2996]: E1105 15:48:47.016320 2996 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Nov 5 15:48:47.017964 kubelet[2996]: W1105 15:48:47.017365 2996 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Nov 5 15:48:47.017964 kubelet[2996]: E1105 15:48:47.017394 2996 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Nov 5 15:48:47.017964 kubelet[2996]: W1105 15:48:47.017624 2996 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Nov 5 15:48:47.017964 kubelet[2996]: E1105 15:48:47.017636 2996 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User 
\"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Nov 5 15:48:47.017964 kubelet[2996]: W1105 15:48:47.017708 2996 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Nov 5 15:48:47.018082 kubelet[2996]: E1105 15:48:47.017718 2996 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Nov 5 15:48:47.022122 systemd[1]: Created slice kubepods-burstable-pod1eb055ed_aaab_4a59_8ad1_afd7a5d570c1.slice - libcontainer container kubepods-burstable-pod1eb055ed_aaab_4a59_8ad1_afd7a5d570c1.slice. Nov 5 15:48:47.027766 systemd[1]: Created slice kubepods-besteffort-pod99dbe593_4354_4ab6_ba7c_be3559d541a3.slice - libcontainer container kubepods-besteffort-pod99dbe593_4354_4ab6_ba7c_be3559d541a3.slice. Nov 5 15:48:47.032879 systemd[1]: Created slice kubepods-besteffort-pod820085b5_0e01_4ac8_ba7a_c7255a8764f6.slice - libcontainer container kubepods-besteffort-pod820085b5_0e01_4ac8_ba7a_c7255a8764f6.slice. Nov 5 15:48:47.038676 systemd[1]: Created slice kubepods-besteffort-pod9045f224_fdd2_4555_a4fe_fb613a1c7ed0.slice - libcontainer container kubepods-besteffort-pod9045f224_fdd2_4555_a4fe_fb613a1c7ed0.slice. 
Nov 5 15:48:47.062121 kubelet[2996]: I1105 15:48:47.061211 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/12c26640-e854-41d4-a3c6-4ceffbd5949e-whisker-backend-key-pair\") pod \"whisker-7458db79c-mh27q\" (UID: \"12c26640-e854-41d4-a3c6-4ceffbd5949e\") " pod="calico-system/whisker-7458db79c-mh27q" Nov 5 15:48:47.062121 kubelet[2996]: I1105 15:48:47.061240 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e7a2b0d-7ec6-4b5b-b6bd-9ce0b5174a51-config-volume\") pod \"coredns-668d6bf9bc-d4wc4\" (UID: \"3e7a2b0d-7ec6-4b5b-b6bd-9ce0b5174a51\") " pod="kube-system/coredns-668d6bf9bc-d4wc4" Nov 5 15:48:47.062121 kubelet[2996]: I1105 15:48:47.061251 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwpzm\" (UniqueName: \"kubernetes.io/projected/12c26640-e854-41d4-a3c6-4ceffbd5949e-kube-api-access-dwpzm\") pod \"whisker-7458db79c-mh27q\" (UID: \"12c26640-e854-41d4-a3c6-4ceffbd5949e\") " pod="calico-system/whisker-7458db79c-mh27q" Nov 5 15:48:47.062121 kubelet[2996]: I1105 15:48:47.061270 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99xp2\" (UniqueName: \"kubernetes.io/projected/1eb055ed-aaab-4a59-8ad1-afd7a5d570c1-kube-api-access-99xp2\") pod \"coredns-668d6bf9bc-r9kqp\" (UID: \"1eb055ed-aaab-4a59-8ad1-afd7a5d570c1\") " pod="kube-system/coredns-668d6bf9bc-r9kqp" Nov 5 15:48:47.062121 kubelet[2996]: I1105 15:48:47.061281 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz8zj\" (UniqueName: \"kubernetes.io/projected/3e7a2b0d-7ec6-4b5b-b6bd-9ce0b5174a51-kube-api-access-pz8zj\") pod \"coredns-668d6bf9bc-d4wc4\" (UID: 
\"3e7a2b0d-7ec6-4b5b-b6bd-9ce0b5174a51\") " pod="kube-system/coredns-668d6bf9bc-d4wc4" Nov 5 15:48:47.062327 kubelet[2996]: I1105 15:48:47.061290 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12c26640-e854-41d4-a3c6-4ceffbd5949e-whisker-ca-bundle\") pod \"whisker-7458db79c-mh27q\" (UID: \"12c26640-e854-41d4-a3c6-4ceffbd5949e\") " pod="calico-system/whisker-7458db79c-mh27q" Nov 5 15:48:47.062327 kubelet[2996]: I1105 15:48:47.061299 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eb055ed-aaab-4a59-8ad1-afd7a5d570c1-config-volume\") pod \"coredns-668d6bf9bc-r9kqp\" (UID: \"1eb055ed-aaab-4a59-8ad1-afd7a5d570c1\") " pod="kube-system/coredns-668d6bf9bc-r9kqp" Nov 5 15:48:47.161728 kubelet[2996]: I1105 15:48:47.161671 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/820085b5-0e01-4ac8-ba7a-c7255a8764f6-tigera-ca-bundle\") pod \"calico-kube-controllers-b57c799-kk9c4\" (UID: \"820085b5-0e01-4ac8-ba7a-c7255a8764f6\") " pod="calico-system/calico-kube-controllers-b57c799-kk9c4" Nov 5 15:48:47.161728 kubelet[2996]: I1105 15:48:47.161713 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw72x\" (UniqueName: \"kubernetes.io/projected/0fbf3e00-38e0-4753-acbc-319c3080ae42-kube-api-access-xw72x\") pod \"calico-apiserver-66dbdd57f9-thx98\" (UID: \"0fbf3e00-38e0-4753-acbc-319c3080ae42\") " pod="calico-apiserver/calico-apiserver-66dbdd57f9-thx98" Nov 5 15:48:47.161728 kubelet[2996]: I1105 15:48:47.161725 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6snb\" (UniqueName: 
\"kubernetes.io/projected/820085b5-0e01-4ac8-ba7a-c7255a8764f6-kube-api-access-r6snb\") pod \"calico-kube-controllers-b57c799-kk9c4\" (UID: \"820085b5-0e01-4ac8-ba7a-c7255a8764f6\") " pod="calico-system/calico-kube-controllers-b57c799-kk9c4" Nov 5 15:48:47.161876 kubelet[2996]: I1105 15:48:47.161755 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9045f224-fdd2-4555-a4fe-fb613a1c7ed0-config\") pod \"goldmane-666569f655-pv5b2\" (UID: \"9045f224-fdd2-4555-a4fe-fb613a1c7ed0\") " pod="calico-system/goldmane-666569f655-pv5b2" Nov 5 15:48:47.161876 kubelet[2996]: I1105 15:48:47.161778 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8j4q\" (UniqueName: \"kubernetes.io/projected/99dbe593-4354-4ab6-ba7c-be3559d541a3-kube-api-access-h8j4q\") pod \"calico-apiserver-556695d997-rtkds\" (UID: \"99dbe593-4354-4ab6-ba7c-be3559d541a3\") " pod="calico-apiserver/calico-apiserver-556695d997-rtkds" Nov 5 15:48:47.161876 kubelet[2996]: I1105 15:48:47.161788 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9045f224-fdd2-4555-a4fe-fb613a1c7ed0-goldmane-key-pair\") pod \"goldmane-666569f655-pv5b2\" (UID: \"9045f224-fdd2-4555-a4fe-fb613a1c7ed0\") " pod="calico-system/goldmane-666569f655-pv5b2" Nov 5 15:48:47.161876 kubelet[2996]: I1105 15:48:47.161800 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p28c5\" (UniqueName: \"kubernetes.io/projected/1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e-kube-api-access-p28c5\") pod \"calico-apiserver-556695d997-5xswp\" (UID: \"1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e\") " pod="calico-apiserver/calico-apiserver-556695d997-5xswp" Nov 5 15:48:47.161876 kubelet[2996]: I1105 15:48:47.161810 2996 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0fbf3e00-38e0-4753-acbc-319c3080ae42-calico-apiserver-certs\") pod \"calico-apiserver-66dbdd57f9-thx98\" (UID: \"0fbf3e00-38e0-4753-acbc-319c3080ae42\") " pod="calico-apiserver/calico-apiserver-66dbdd57f9-thx98" Nov 5 15:48:47.161994 kubelet[2996]: I1105 15:48:47.161820 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fljrb\" (UniqueName: \"kubernetes.io/projected/9045f224-fdd2-4555-a4fe-fb613a1c7ed0-kube-api-access-fljrb\") pod \"goldmane-666569f655-pv5b2\" (UID: \"9045f224-fdd2-4555-a4fe-fb613a1c7ed0\") " pod="calico-system/goldmane-666569f655-pv5b2" Nov 5 15:48:47.161994 kubelet[2996]: I1105 15:48:47.161832 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9045f224-fdd2-4555-a4fe-fb613a1c7ed0-goldmane-ca-bundle\") pod \"goldmane-666569f655-pv5b2\" (UID: \"9045f224-fdd2-4555-a4fe-fb613a1c7ed0\") " pod="calico-system/goldmane-666569f655-pv5b2" Nov 5 15:48:47.161994 kubelet[2996]: I1105 15:48:47.161857 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e-calico-apiserver-certs\") pod \"calico-apiserver-556695d997-5xswp\" (UID: \"1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e\") " pod="calico-apiserver/calico-apiserver-556695d997-5xswp" Nov 5 15:48:47.161994 kubelet[2996]: I1105 15:48:47.161875 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/99dbe593-4354-4ab6-ba7c-be3559d541a3-calico-apiserver-certs\") pod \"calico-apiserver-556695d997-rtkds\" (UID: \"99dbe593-4354-4ab6-ba7c-be3559d541a3\") " 
pod="calico-apiserver/calico-apiserver-556695d997-rtkds" Nov 5 15:48:47.280480 containerd[1686]: time="2025-11-05T15:48:47.279542181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d4wc4,Uid:3e7a2b0d-7ec6-4b5b-b6bd-9ce0b5174a51,Namespace:kube-system,Attempt:0,}" Nov 5 15:48:47.326164 containerd[1686]: time="2025-11-05T15:48:47.326092525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9kqp,Uid:1eb055ed-aaab-4a59-8ad1-afd7a5d570c1,Namespace:kube-system,Attempt:0,}" Nov 5 15:48:47.337596 containerd[1686]: time="2025-11-05T15:48:47.337548208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b57c799-kk9c4,Uid:820085b5-0e01-4ac8-ba7a-c7255a8764f6,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:47.544017 containerd[1686]: time="2025-11-05T15:48:47.543880243Z" level=error msg="Failed to destroy network for sandbox \"f720154c55b2058a277b9b14664ed6898df2be3a95ee79afaacbb051a07f2589\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:47.545180 containerd[1686]: time="2025-11-05T15:48:47.545139371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9kqp,Uid:1eb055ed-aaab-4a59-8ad1-afd7a5d570c1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f720154c55b2058a277b9b14664ed6898df2be3a95ee79afaacbb051a07f2589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:47.545273 containerd[1686]: time="2025-11-05T15:48:47.545205499Z" level=error msg="Failed to destroy network for sandbox \"860f6566067bc5b95892161e7d8e15ff2f33bb9eb9f2848c021912db5bda6a60\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:47.545829 containerd[1686]: time="2025-11-05T15:48:47.545750718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b57c799-kk9c4,Uid:820085b5-0e01-4ac8-ba7a-c7255a8764f6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"860f6566067bc5b95892161e7d8e15ff2f33bb9eb9f2848c021912db5bda6a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:47.549919 containerd[1686]: time="2025-11-05T15:48:47.549881172Z" level=error msg="Failed to destroy network for sandbox \"e2fbe6e3be6c023bffea00e52cef8b58cb7074067853afde9b3caa8ebaba9cde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:47.550277 containerd[1686]: time="2025-11-05T15:48:47.550242562Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d4wc4,Uid:3e7a2b0d-7ec6-4b5b-b6bd-9ce0b5174a51,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2fbe6e3be6c023bffea00e52cef8b58cb7074067853afde9b3caa8ebaba9cde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:47.550859 kubelet[2996]: E1105 15:48:47.550825 2996 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"860f6566067bc5b95892161e7d8e15ff2f33bb9eb9f2848c021912db5bda6a60\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:47.550894 kubelet[2996]: E1105 15:48:47.550880 2996 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"860f6566067bc5b95892161e7d8e15ff2f33bb9eb9f2848c021912db5bda6a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b57c799-kk9c4" Nov 5 15:48:47.550920 kubelet[2996]: E1105 15:48:47.550894 2996 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"860f6566067bc5b95892161e7d8e15ff2f33bb9eb9f2848c021912db5bda6a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b57c799-kk9c4" Nov 5 15:48:47.550940 kubelet[2996]: E1105 15:48:47.550921 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b57c799-kk9c4_calico-system(820085b5-0e01-4ac8-ba7a-c7255a8764f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b57c799-kk9c4_calico-system(820085b5-0e01-4ac8-ba7a-c7255a8764f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"860f6566067bc5b95892161e7d8e15ff2f33bb9eb9f2848c021912db5bda6a60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b57c799-kk9c4" podUID="820085b5-0e01-4ac8-ba7a-c7255a8764f6" 
Nov 5 15:48:47.551283 kubelet[2996]: E1105 15:48:47.551133 2996 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2fbe6e3be6c023bffea00e52cef8b58cb7074067853afde9b3caa8ebaba9cde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:47.551283 kubelet[2996]: E1105 15:48:47.551147 2996 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2fbe6e3be6c023bffea00e52cef8b58cb7074067853afde9b3caa8ebaba9cde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d4wc4" Nov 5 15:48:47.551283 kubelet[2996]: E1105 15:48:47.551158 2996 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2fbe6e3be6c023bffea00e52cef8b58cb7074067853afde9b3caa8ebaba9cde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d4wc4" Nov 5 15:48:47.551355 kubelet[2996]: E1105 15:48:47.551173 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d4wc4_kube-system(3e7a2b0d-7ec6-4b5b-b6bd-9ce0b5174a51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d4wc4_kube-system(3e7a2b0d-7ec6-4b5b-b6bd-9ce0b5174a51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2fbe6e3be6c023bffea00e52cef8b58cb7074067853afde9b3caa8ebaba9cde\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d4wc4" podUID="3e7a2b0d-7ec6-4b5b-b6bd-9ce0b5174a51" Nov 5 15:48:47.551355 kubelet[2996]: E1105 15:48:47.551191 2996 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f720154c55b2058a277b9b14664ed6898df2be3a95ee79afaacbb051a07f2589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:47.551355 kubelet[2996]: E1105 15:48:47.551205 2996 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f720154c55b2058a277b9b14664ed6898df2be3a95ee79afaacbb051a07f2589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-r9kqp" Nov 5 15:48:47.551808 kubelet[2996]: E1105 15:48:47.551220 2996 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f720154c55b2058a277b9b14664ed6898df2be3a95ee79afaacbb051a07f2589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-r9kqp" Nov 5 15:48:47.551808 kubelet[2996]: E1105 15:48:47.551241 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-r9kqp_kube-system(1eb055ed-aaab-4a59-8ad1-afd7a5d570c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-r9kqp_kube-system(1eb055ed-aaab-4a59-8ad1-afd7a5d570c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f720154c55b2058a277b9b14664ed6898df2be3a95ee79afaacbb051a07f2589\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-r9kqp" podUID="1eb055ed-aaab-4a59-8ad1-afd7a5d570c1" Nov 5 15:48:47.785213 systemd[1]: Created slice kubepods-besteffort-pod5c5e9d05_7eb8_471e_907b_c18b5992bb51.slice - libcontainer container kubepods-besteffort-pod5c5e9d05_7eb8_471e_907b_c18b5992bb51.slice. Nov 5 15:48:47.787066 containerd[1686]: time="2025-11-05T15:48:47.787043016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9hj2w,Uid:5c5e9d05-7eb8-471e-907b-c18b5992bb51,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:47.836187 containerd[1686]: time="2025-11-05T15:48:47.835730558Z" level=error msg="Failed to destroy network for sandbox \"f8421575b4c9b617da3a965304cd6d1ecbd6a6ed6837f0b38c9436306f19e64f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:47.837203 systemd[1]: run-netns-cni\x2d2ce59d3a\x2ddbbc\x2def33\x2d818b\x2db54ff5e4f40c.mount: Deactivated successfully. 
Nov 5 15:48:47.840935 containerd[1686]: time="2025-11-05T15:48:47.840894664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9hj2w,Uid:5c5e9d05-7eb8-471e-907b-c18b5992bb51,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8421575b4c9b617da3a965304cd6d1ecbd6a6ed6837f0b38c9436306f19e64f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:47.841722 kubelet[2996]: E1105 15:48:47.841163 2996 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8421575b4c9b617da3a965304cd6d1ecbd6a6ed6837f0b38c9436306f19e64f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:47.841722 kubelet[2996]: E1105 15:48:47.841493 2996 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8421575b4c9b617da3a965304cd6d1ecbd6a6ed6837f0b38c9436306f19e64f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9hj2w" Nov 5 15:48:47.841722 kubelet[2996]: E1105 15:48:47.841514 2996 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8421575b4c9b617da3a965304cd6d1ecbd6a6ed6837f0b38c9436306f19e64f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9hj2w" Nov 5 
15:48:47.842031 kubelet[2996]: E1105 15:48:47.841558 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9hj2w_calico-system(5c5e9d05-7eb8-471e-907b-c18b5992bb51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9hj2w_calico-system(5c5e9d05-7eb8-471e-907b-c18b5992bb51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8421575b4c9b617da3a965304cd6d1ecbd6a6ed6837f0b38c9436306f19e64f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9hj2w" podUID="5c5e9d05-7eb8-471e-907b-c18b5992bb51" Nov 5 15:48:47.937639 containerd[1686]: time="2025-11-05T15:48:47.937546923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 15:48:48.164135 kubelet[2996]: E1105 15:48:48.164040 2996 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 5 15:48:48.164216 kubelet[2996]: E1105 15:48:48.164137 2996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/12c26640-e854-41d4-a3c6-4ceffbd5949e-whisker-ca-bundle podName:12c26640-e854-41d4-a3c6-4ceffbd5949e nodeName:}" failed. No retries permitted until 2025-11-05 15:48:48.664115391 +0000 UTC m=+28.972419409 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/12c26640-e854-41d4-a3c6-4ceffbd5949e-whisker-ca-bundle") pod "whisker-7458db79c-mh27q" (UID: "12c26640-e854-41d4-a3c6-4ceffbd5949e") : failed to sync configmap cache: timed out waiting for the condition Nov 5 15:48:48.173996 kubelet[2996]: E1105 15:48:48.173980 2996 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Nov 5 15:48:48.174054 kubelet[2996]: E1105 15:48:48.174019 2996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12c26640-e854-41d4-a3c6-4ceffbd5949e-whisker-backend-key-pair podName:12c26640-e854-41d4-a3c6-4ceffbd5949e nodeName:}" failed. No retries permitted until 2025-11-05 15:48:48.674010164 +0000 UTC m=+28.982314182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/12c26640-e854-41d4-a3c6-4ceffbd5949e-whisker-backend-key-pair") pod "whisker-7458db79c-mh27q" (UID: "12c26640-e854-41d4-a3c6-4ceffbd5949e") : failed to sync secret cache: timed out waiting for the condition Nov 5 15:48:48.263276 kubelet[2996]: E1105 15:48:48.263124 2996 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Nov 5 15:48:48.263276 kubelet[2996]: E1105 15:48:48.263147 2996 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Nov 5 15:48:48.263276 kubelet[2996]: E1105 15:48:48.263178 2996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99dbe593-4354-4ab6-ba7c-be3559d541a3-calico-apiserver-certs podName:99dbe593-4354-4ab6-ba7c-be3559d541a3 nodeName:}" failed. No retries permitted until 2025-11-05 15:48:48.763166251 +0000 UTC m=+29.071470269 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/99dbe593-4354-4ab6-ba7c-be3559d541a3-calico-apiserver-certs") pod "calico-apiserver-556695d997-rtkds" (UID: "99dbe593-4354-4ab6-ba7c-be3559d541a3") : failed to sync secret cache: timed out waiting for the condition Nov 5 15:48:48.263276 kubelet[2996]: E1105 15:48:48.263191 2996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9045f224-fdd2-4555-a4fe-fb613a1c7ed0-goldmane-key-pair podName:9045f224-fdd2-4555-a4fe-fb613a1c7ed0 nodeName:}" failed. No retries permitted until 2025-11-05 15:48:48.763186888 +0000 UTC m=+29.071490903 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/9045f224-fdd2-4555-a4fe-fb613a1c7ed0-goldmane-key-pair") pod "goldmane-666569f655-pv5b2" (UID: "9045f224-fdd2-4555-a4fe-fb613a1c7ed0") : failed to sync secret cache: timed out waiting for the condition Nov 5 15:48:48.263276 kubelet[2996]: E1105 15:48:48.263129 2996 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Nov 5 15:48:48.264325 kubelet[2996]: E1105 15:48:48.263207 2996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fbf3e00-38e0-4753-acbc-319c3080ae42-calico-apiserver-certs podName:0fbf3e00-38e0-4753-acbc-319c3080ae42 nodeName:}" failed. No retries permitted until 2025-11-05 15:48:48.763202083 +0000 UTC m=+29.071506096 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/0fbf3e00-38e0-4753-acbc-319c3080ae42-calico-apiserver-certs") pod "calico-apiserver-66dbdd57f9-thx98" (UID: "0fbf3e00-38e0-4753-acbc-319c3080ae42") : failed to sync secret cache: timed out waiting for the condition Nov 5 15:48:48.264325 kubelet[2996]: E1105 15:48:48.263181 2996 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Nov 5 15:48:48.264325 kubelet[2996]: E1105 15:48:48.263222 2996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e-calico-apiserver-certs podName:1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e nodeName:}" failed. No retries permitted until 2025-11-05 15:48:48.763218933 +0000 UTC m=+29.071522946 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e-calico-apiserver-certs") pod "calico-apiserver-556695d997-5xswp" (UID: "1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e") : failed to sync secret cache: timed out waiting for the condition Nov 5 15:48:48.264325 kubelet[2996]: E1105 15:48:48.263193 2996 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 5 15:48:48.264486 kubelet[2996]: E1105 15:48:48.263236 2996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9045f224-fdd2-4555-a4fe-fb613a1c7ed0-goldmane-ca-bundle podName:9045f224-fdd2-4555-a4fe-fb613a1c7ed0 nodeName:}" failed. No retries permitted until 2025-11-05 15:48:48.763232699 +0000 UTC m=+29.071536717 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/9045f224-fdd2-4555-a4fe-fb613a1c7ed0-goldmane-ca-bundle") pod "goldmane-666569f655-pv5b2" (UID: "9045f224-fdd2-4555-a4fe-fb613a1c7ed0") : failed to sync configmap cache: timed out waiting for the condition Nov 5 15:48:48.264486 kubelet[2996]: E1105 15:48:48.264321 2996 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Nov 5 15:48:48.264486 kubelet[2996]: E1105 15:48:48.264343 2996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9045f224-fdd2-4555-a4fe-fb613a1c7ed0-config podName:9045f224-fdd2-4555-a4fe-fb613a1c7ed0 nodeName:}" failed. No retries permitted until 2025-11-05 15:48:48.764336589 +0000 UTC m=+29.072640603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9045f224-fdd2-4555-a4fe-fb613a1c7ed0-config") pod "goldmane-666569f655-pv5b2" (UID: "9045f224-fdd2-4555-a4fe-fb613a1c7ed0") : failed to sync configmap cache: timed out waiting for the condition Nov 5 15:48:48.803819 containerd[1686]: time="2025-11-05T15:48:48.803792973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7458db79c-mh27q,Uid:12c26640-e854-41d4-a3c6-4ceffbd5949e,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:48.809138 containerd[1686]: time="2025-11-05T15:48:48.808360593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556695d997-5xswp,Uid:1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:48:48.819071 containerd[1686]: time="2025-11-05T15:48:48.819022214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66dbdd57f9-thx98,Uid:0fbf3e00-38e0-4753-acbc-319c3080ae42,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:48:48.831103 containerd[1686]: time="2025-11-05T15:48:48.830971190Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556695d997-rtkds,Uid:99dbe593-4354-4ab6-ba7c-be3559d541a3,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:48:48.842895 containerd[1686]: time="2025-11-05T15:48:48.842857786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pv5b2,Uid:9045f224-fdd2-4555-a4fe-fb613a1c7ed0,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:48.879391 containerd[1686]: time="2025-11-05T15:48:48.879325272Z" level=error msg="Failed to destroy network for sandbox \"c28d6274b80a0d44b04d39c950c0d2bfbbec2d809bf96e0e310ffe3408014047\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:48.882108 containerd[1686]: time="2025-11-05T15:48:48.882050390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7458db79c-mh27q,Uid:12c26640-e854-41d4-a3c6-4ceffbd5949e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c28d6274b80a0d44b04d39c950c0d2bfbbec2d809bf96e0e310ffe3408014047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:48.882477 kubelet[2996]: E1105 15:48:48.882262 2996 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c28d6274b80a0d44b04d39c950c0d2bfbbec2d809bf96e0e310ffe3408014047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:48.882477 kubelet[2996]: E1105 15:48:48.882309 2996 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"c28d6274b80a0d44b04d39c950c0d2bfbbec2d809bf96e0e310ffe3408014047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7458db79c-mh27q" Nov 5 15:48:48.882477 kubelet[2996]: E1105 15:48:48.882326 2996 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c28d6274b80a0d44b04d39c950c0d2bfbbec2d809bf96e0e310ffe3408014047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7458db79c-mh27q" Nov 5 15:48:48.883404 kubelet[2996]: E1105 15:48:48.882357 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7458db79c-mh27q_calico-system(12c26640-e854-41d4-a3c6-4ceffbd5949e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7458db79c-mh27q_calico-system(12c26640-e854-41d4-a3c6-4ceffbd5949e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c28d6274b80a0d44b04d39c950c0d2bfbbec2d809bf96e0e310ffe3408014047\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7458db79c-mh27q" podUID="12c26640-e854-41d4-a3c6-4ceffbd5949e" Nov 5 15:48:48.912520 containerd[1686]: time="2025-11-05T15:48:48.912485956Z" level=error msg="Failed to destroy network for sandbox \"8a2ef4bb6981873e19bdc486627814c421d14cec98dd9ecb64271bdcb2a2c467\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 
15:48:48.913026 containerd[1686]: time="2025-11-05T15:48:48.913004577Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556695d997-5xswp,Uid:1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a2ef4bb6981873e19bdc486627814c421d14cec98dd9ecb64271bdcb2a2c467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:48.913848 kubelet[2996]: E1105 15:48:48.913196 2996 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a2ef4bb6981873e19bdc486627814c421d14cec98dd9ecb64271bdcb2a2c467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:48.913848 kubelet[2996]: E1105 15:48:48.913242 2996 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a2ef4bb6981873e19bdc486627814c421d14cec98dd9ecb64271bdcb2a2c467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556695d997-5xswp" Nov 5 15:48:48.913848 kubelet[2996]: E1105 15:48:48.913256 2996 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a2ef4bb6981873e19bdc486627814c421d14cec98dd9ecb64271bdcb2a2c467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-556695d997-5xswp" Nov 5 15:48:48.914005 kubelet[2996]: E1105 15:48:48.913280 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-556695d997-5xswp_calico-apiserver(1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-556695d997-5xswp_calico-apiserver(1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a2ef4bb6981873e19bdc486627814c421d14cec98dd9ecb64271bdcb2a2c467\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-556695d997-5xswp" podUID="1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e" Nov 5 15:48:48.932636 containerd[1686]: time="2025-11-05T15:48:48.932361716Z" level=error msg="Failed to destroy network for sandbox \"b55d50e1aa0f1fa18049c3d583283f5b1c5711471372ab23c65595c436321106\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:48.933339 containerd[1686]: time="2025-11-05T15:48:48.933317901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66dbdd57f9-thx98,Uid:0fbf3e00-38e0-4753-acbc-319c3080ae42,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b55d50e1aa0f1fa18049c3d583283f5b1c5711471372ab23c65595c436321106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:48.934448 kubelet[2996]: E1105 15:48:48.934306 2996 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b55d50e1aa0f1fa18049c3d583283f5b1c5711471372ab23c65595c436321106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:48.934448 kubelet[2996]: E1105 15:48:48.934351 2996 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b55d50e1aa0f1fa18049c3d583283f5b1c5711471372ab23c65595c436321106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66dbdd57f9-thx98" Nov 5 15:48:48.934448 kubelet[2996]: E1105 15:48:48.934367 2996 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b55d50e1aa0f1fa18049c3d583283f5b1c5711471372ab23c65595c436321106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66dbdd57f9-thx98" Nov 5 15:48:48.936077 kubelet[2996]: E1105 15:48:48.934743 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66dbdd57f9-thx98_calico-apiserver(0fbf3e00-38e0-4753-acbc-319c3080ae42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66dbdd57f9-thx98_calico-apiserver(0fbf3e00-38e0-4753-acbc-319c3080ae42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b55d50e1aa0f1fa18049c3d583283f5b1c5711471372ab23c65595c436321106\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66dbdd57f9-thx98" podUID="0fbf3e00-38e0-4753-acbc-319c3080ae42" Nov 5 15:48:48.940789 containerd[1686]: time="2025-11-05T15:48:48.939913188Z" level=error msg="Failed to destroy network for sandbox \"abc22b4a77db2adae57d8c174c6560b333015ad818f14a7eef0689afae1eb512\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:48.941981 containerd[1686]: time="2025-11-05T15:48:48.941683296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pv5b2,Uid:9045f224-fdd2-4555-a4fe-fb613a1c7ed0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"abc22b4a77db2adae57d8c174c6560b333015ad818f14a7eef0689afae1eb512\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:48.943022 kubelet[2996]: E1105 15:48:48.941826 2996 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abc22b4a77db2adae57d8c174c6560b333015ad818f14a7eef0689afae1eb512\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:48.943022 kubelet[2996]: E1105 15:48:48.941862 2996 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abc22b4a77db2adae57d8c174c6560b333015ad818f14a7eef0689afae1eb512\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pv5b2" Nov 5 15:48:48.943022 kubelet[2996]: E1105 15:48:48.941874 2996 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abc22b4a77db2adae57d8c174c6560b333015ad818f14a7eef0689afae1eb512\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pv5b2" Nov 5 15:48:48.943235 kubelet[2996]: E1105 15:48:48.941897 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-pv5b2_calico-system(9045f224-fdd2-4555-a4fe-fb613a1c7ed0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-pv5b2_calico-system(9045f224-fdd2-4555-a4fe-fb613a1c7ed0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abc22b4a77db2adae57d8c174c6560b333015ad818f14a7eef0689afae1eb512\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-pv5b2" podUID="9045f224-fdd2-4555-a4fe-fb613a1c7ed0" Nov 5 15:48:48.945309 containerd[1686]: time="2025-11-05T15:48:48.945273013Z" level=error msg="Failed to destroy network for sandbox \"d343efb953abbd68959c50d1f369d848140cff02d04428545ddf1fe96a9c1826\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:48.945628 containerd[1686]: time="2025-11-05T15:48:48.945608089Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-556695d997-rtkds,Uid:99dbe593-4354-4ab6-ba7c-be3559d541a3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d343efb953abbd68959c50d1f369d848140cff02d04428545ddf1fe96a9c1826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:48.945995 kubelet[2996]: E1105 15:48:48.945861 2996 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d343efb953abbd68959c50d1f369d848140cff02d04428545ddf1fe96a9c1826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:48.945995 kubelet[2996]: E1105 15:48:48.945908 2996 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d343efb953abbd68959c50d1f369d848140cff02d04428545ddf1fe96a9c1826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556695d997-rtkds" Nov 5 15:48:48.945995 kubelet[2996]: E1105 15:48:48.945928 2996 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d343efb953abbd68959c50d1f369d848140cff02d04428545ddf1fe96a9c1826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556695d997-rtkds" Nov 5 15:48:48.946136 kubelet[2996]: E1105 15:48:48.945964 2996 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-556695d997-rtkds_calico-apiserver(99dbe593-4354-4ab6-ba7c-be3559d541a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-556695d997-rtkds_calico-apiserver(99dbe593-4354-4ab6-ba7c-be3559d541a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d343efb953abbd68959c50d1f369d848140cff02d04428545ddf1fe96a9c1826\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-556695d997-rtkds" podUID="99dbe593-4354-4ab6-ba7c-be3559d541a3" Nov 5 15:48:49.808834 systemd[1]: run-netns-cni\x2d5ad0ced2\x2ddf6c\x2dd755\x2da499\x2dc59045974755.mount: Deactivated successfully. Nov 5 15:48:49.809501 systemd[1]: run-netns-cni\x2d9ad0874d\x2d306e\x2d43a2\x2dedb1\x2d402098ccdb84.mount: Deactivated successfully. Nov 5 15:48:49.809550 systemd[1]: run-netns-cni\x2de958430d\x2d178f\x2d8a53\x2d47e3\x2d210703e9d3a5.mount: Deactivated successfully. Nov 5 15:48:49.809588 systemd[1]: run-netns-cni\x2d510f7b3a\x2d2c90\x2d45e4\x2def9c\x2d7940145dceb4.mount: Deactivated successfully. Nov 5 15:48:49.809620 systemd[1]: run-netns-cni\x2d9ca052f3\x2d5d5c\x2dae20\x2d4770\x2d1da357cf08aa.mount: Deactivated successfully. Nov 5 15:48:53.263037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580370000.mount: Deactivated successfully. 
Nov 5 15:48:53.438638 containerd[1686]: time="2025-11-05T15:48:53.438601202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:53.441213 containerd[1686]: time="2025-11-05T15:48:53.441195269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 5 15:48:53.456075 containerd[1686]: time="2025-11-05T15:48:53.456042397Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:53.461535 containerd[1686]: time="2025-11-05T15:48:53.461492123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:53.463847 containerd[1686]: time="2025-11-05T15:48:53.463807376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.524374405s" Nov 5 15:48:53.463847 containerd[1686]: time="2025-11-05T15:48:53.463846148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 15:48:53.486670 containerd[1686]: time="2025-11-05T15:48:53.486612182Z" level=info msg="CreateContainer within sandbox \"2c4ec6c392f30000a7dadbb9609967884e0bf00b204abec0406886e48c9596c2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 15:48:53.545277 containerd[1686]: time="2025-11-05T15:48:53.545208276Z" level=info msg="Container 
443c5b7e7b3751d7b453e391a592cbc3a9a0aadb25cff9a4826be2f933d92c9b: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:48:53.547486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1312618425.mount: Deactivated successfully. Nov 5 15:48:53.580309 containerd[1686]: time="2025-11-05T15:48:53.580275833Z" level=info msg="CreateContainer within sandbox \"2c4ec6c392f30000a7dadbb9609967884e0bf00b204abec0406886e48c9596c2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"443c5b7e7b3751d7b453e391a592cbc3a9a0aadb25cff9a4826be2f933d92c9b\"" Nov 5 15:48:53.580902 containerd[1686]: time="2025-11-05T15:48:53.580879039Z" level=info msg="StartContainer for \"443c5b7e7b3751d7b453e391a592cbc3a9a0aadb25cff9a4826be2f933d92c9b\"" Nov 5 15:48:53.586008 containerd[1686]: time="2025-11-05T15:48:53.585975998Z" level=info msg="connecting to shim 443c5b7e7b3751d7b453e391a592cbc3a9a0aadb25cff9a4826be2f933d92c9b" address="unix:///run/containerd/s/84471699ff411cebc35a193927506daacaa8fef1e11a217b4d5689dfe6388d02" protocol=ttrpc version=3 Nov 5 15:48:53.684278 systemd[1]: Started cri-containerd-443c5b7e7b3751d7b453e391a592cbc3a9a0aadb25cff9a4826be2f933d92c9b.scope - libcontainer container 443c5b7e7b3751d7b453e391a592cbc3a9a0aadb25cff9a4826be2f933d92c9b. 
Nov 5 15:48:53.719586 containerd[1686]: time="2025-11-05T15:48:53.719559838Z" level=info msg="StartContainer for \"443c5b7e7b3751d7b453e391a592cbc3a9a0aadb25cff9a4826be2f933d92c9b\" returns successfully" Nov 5 15:48:53.979132 kubelet[2996]: I1105 15:48:53.978420 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wmrwf" podStartSLOduration=1.613054258 podStartE2EDuration="16.978398054s" podCreationTimestamp="2025-11-05 15:48:37 +0000 UTC" firstStartedPulling="2025-11-05 15:48:38.102222699 +0000 UTC m=+18.410526716" lastFinishedPulling="2025-11-05 15:48:53.467566494 +0000 UTC m=+33.775870512" observedRunningTime="2025-11-05 15:48:53.97130699 +0000 UTC m=+34.279611007" watchObservedRunningTime="2025-11-05 15:48:53.978398054 +0000 UTC m=+34.286702079" Nov 5 15:48:54.578611 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 15:48:54.581427 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 5 15:48:55.302730 kubelet[2996]: I1105 15:48:55.302680 2996 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12c26640-e854-41d4-a3c6-4ceffbd5949e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "12c26640-e854-41d4-a3c6-4ceffbd5949e" (UID: "12c26640-e854-41d4-a3c6-4ceffbd5949e"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:48:55.310467 kubelet[2996]: I1105 15:48:55.310384 2996 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12c26640-e854-41d4-a3c6-4ceffbd5949e-whisker-ca-bundle\") pod \"12c26640-e854-41d4-a3c6-4ceffbd5949e\" (UID: \"12c26640-e854-41d4-a3c6-4ceffbd5949e\") " Nov 5 15:48:55.310702 kubelet[2996]: I1105 15:48:55.310565 2996 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/12c26640-e854-41d4-a3c6-4ceffbd5949e-whisker-backend-key-pair\") pod \"12c26640-e854-41d4-a3c6-4ceffbd5949e\" (UID: \"12c26640-e854-41d4-a3c6-4ceffbd5949e\") " Nov 5 15:48:55.310702 kubelet[2996]: I1105 15:48:55.310593 2996 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwpzm\" (UniqueName: \"kubernetes.io/projected/12c26640-e854-41d4-a3c6-4ceffbd5949e-kube-api-access-dwpzm\") pod \"12c26640-e854-41d4-a3c6-4ceffbd5949e\" (UID: \"12c26640-e854-41d4-a3c6-4ceffbd5949e\") " Nov 5 15:48:55.325971 kubelet[2996]: I1105 15:48:55.325932 2996 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12c26640-e854-41d4-a3c6-4ceffbd5949e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 5 15:48:55.333029 systemd[1]: var-lib-kubelet-pods-12c26640\x2de854\x2d41d4\x2da3c6\x2d4ceffbd5949e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 15:48:55.337644 kubelet[2996]: I1105 15:48:55.335253 2996 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c26640-e854-41d4-a3c6-4ceffbd5949e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "12c26640-e854-41d4-a3c6-4ceffbd5949e" (UID: "12c26640-e854-41d4-a3c6-4ceffbd5949e"). 
InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 15:48:55.337644 kubelet[2996]: I1105 15:48:55.336562 2996 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c26640-e854-41d4-a3c6-4ceffbd5949e-kube-api-access-dwpzm" (OuterVolumeSpecName: "kube-api-access-dwpzm") pod "12c26640-e854-41d4-a3c6-4ceffbd5949e" (UID: "12c26640-e854-41d4-a3c6-4ceffbd5949e"). InnerVolumeSpecName "kube-api-access-dwpzm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:48:55.338895 systemd[1]: var-lib-kubelet-pods-12c26640\x2de854\x2d41d4\x2da3c6\x2d4ceffbd5949e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddwpzm.mount: Deactivated successfully. Nov 5 15:48:55.426355 kubelet[2996]: I1105 15:48:55.426315 2996 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/12c26640-e854-41d4-a3c6-4ceffbd5949e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 5 15:48:55.426355 kubelet[2996]: I1105 15:48:55.426340 2996 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dwpzm\" (UniqueName: \"kubernetes.io/projected/12c26640-e854-41d4-a3c6-4ceffbd5949e-kube-api-access-dwpzm\") on node \"localhost\" DevicePath \"\"" Nov 5 15:48:55.789200 systemd[1]: Removed slice kubepods-besteffort-pod12c26640_e854_41d4_a3c6_4ceffbd5949e.slice - libcontainer container kubepods-besteffort-pod12c26640_e854_41d4_a3c6_4ceffbd5949e.slice. Nov 5 15:48:56.133353 systemd[1]: Created slice kubepods-besteffort-pod0dc71d36_e361_4406_9e29_3e94a7056136.slice - libcontainer container kubepods-besteffort-pod0dc71d36_e361_4406_9e29_3e94a7056136.slice. 
Nov 5 15:48:56.231981 kubelet[2996]: I1105 15:48:56.231941 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0dc71d36-e361-4406-9e29-3e94a7056136-whisker-ca-bundle\") pod \"whisker-794d68bb9-6lxsm\" (UID: \"0dc71d36-e361-4406-9e29-3e94a7056136\") " pod="calico-system/whisker-794d68bb9-6lxsm" Nov 5 15:48:56.231981 kubelet[2996]: I1105 15:48:56.231988 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghxh9\" (UniqueName: \"kubernetes.io/projected/0dc71d36-e361-4406-9e29-3e94a7056136-kube-api-access-ghxh9\") pod \"whisker-794d68bb9-6lxsm\" (UID: \"0dc71d36-e361-4406-9e29-3e94a7056136\") " pod="calico-system/whisker-794d68bb9-6lxsm" Nov 5 15:48:56.232139 kubelet[2996]: I1105 15:48:56.232006 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0dc71d36-e361-4406-9e29-3e94a7056136-whisker-backend-key-pair\") pod \"whisker-794d68bb9-6lxsm\" (UID: \"0dc71d36-e361-4406-9e29-3e94a7056136\") " pod="calico-system/whisker-794d68bb9-6lxsm" Nov 5 15:48:56.458856 containerd[1686]: time="2025-11-05T15:48:56.458631885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-794d68bb9-6lxsm,Uid:0dc71d36-e361-4406-9e29-3e94a7056136,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:56.758035 systemd-networkd[1579]: vxlan.calico: Link UP Nov 5 15:48:56.758041 systemd-networkd[1579]: vxlan.calico: Gained carrier Nov 5 15:48:57.473149 kubelet[2996]: I1105 15:48:57.457121 2996 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 15:48:57.627952 systemd-networkd[1579]: cali80d4d179d0e: Link UP Nov 5 15:48:57.628283 systemd-networkd[1579]: cali80d4d179d0e: Gained carrier Nov 5 15:48:57.643686 containerd[1686]: 2025-11-05 15:48:56.630 [INFO][4206] cni-plugin/plugin.go 
340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--794d68bb9--6lxsm-eth0 whisker-794d68bb9- calico-system 0dc71d36-e361-4406-9e29-3e94a7056136 922 0 2025-11-05 15:48:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:794d68bb9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-794d68bb9-6lxsm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali80d4d179d0e [] [] }} ContainerID="a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" Namespace="calico-system" Pod="whisker-794d68bb9-6lxsm" WorkloadEndpoint="localhost-k8s-whisker--794d68bb9--6lxsm-" Nov 5 15:48:57.643686 containerd[1686]: 2025-11-05 15:48:56.635 [INFO][4206] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" Namespace="calico-system" Pod="whisker-794d68bb9-6lxsm" WorkloadEndpoint="localhost-k8s-whisker--794d68bb9--6lxsm-eth0" Nov 5 15:48:57.643686 containerd[1686]: 2025-11-05 15:48:57.489 [INFO][4231] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" HandleID="k8s-pod-network.a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" Workload="localhost-k8s-whisker--794d68bb9--6lxsm-eth0" Nov 5 15:48:57.643985 containerd[1686]: 2025-11-05 15:48:57.495 [INFO][4231] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" HandleID="k8s-pod-network.a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" Workload="localhost-k8s-whisker--794d68bb9--6lxsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002eb420), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"whisker-794d68bb9-6lxsm", "timestamp":"2025-11-05 15:48:57.489478024 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:48:57.643985 containerd[1686]: 2025-11-05 15:48:57.495 [INFO][4231] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:48:57.643985 containerd[1686]: 2025-11-05 15:48:57.495 [INFO][4231] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:48:57.643985 containerd[1686]: 2025-11-05 15:48:57.496 [INFO][4231] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:48:57.643985 containerd[1686]: 2025-11-05 15:48:57.553 [INFO][4231] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" host="localhost" Nov 5 15:48:57.643985 containerd[1686]: 2025-11-05 15:48:57.572 [INFO][4231] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:48:57.643985 containerd[1686]: 2025-11-05 15:48:57.577 [INFO][4231] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:48:57.643985 containerd[1686]: 2025-11-05 15:48:57.580 [INFO][4231] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:48:57.643985 containerd[1686]: 2025-11-05 15:48:57.584 [INFO][4231] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:48:57.643985 containerd[1686]: 2025-11-05 15:48:57.584 [INFO][4231] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" host="localhost" Nov 5 15:48:57.644224 containerd[1686]: 2025-11-05 15:48:57.586 [INFO][4231] ipam/ipam.go 1780: Creating new 
handle: k8s-pod-network.a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609 Nov 5 15:48:57.644224 containerd[1686]: 2025-11-05 15:48:57.588 [INFO][4231] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" host="localhost" Nov 5 15:48:57.644224 containerd[1686]: 2025-11-05 15:48:57.593 [INFO][4231] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" host="localhost" Nov 5 15:48:57.644224 containerd[1686]: 2025-11-05 15:48:57.593 [INFO][4231] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" host="localhost" Nov 5 15:48:57.644224 containerd[1686]: 2025-11-05 15:48:57.593 [INFO][4231] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:48:57.644224 containerd[1686]: 2025-11-05 15:48:57.593 [INFO][4231] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" HandleID="k8s-pod-network.a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" Workload="localhost-k8s-whisker--794d68bb9--6lxsm-eth0" Nov 5 15:48:57.651686 containerd[1686]: 2025-11-05 15:48:57.596 [INFO][4206] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" Namespace="calico-system" Pod="whisker-794d68bb9-6lxsm" WorkloadEndpoint="localhost-k8s-whisker--794d68bb9--6lxsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--794d68bb9--6lxsm-eth0", GenerateName:"whisker-794d68bb9-", Namespace:"calico-system", SelfLink:"", UID:"0dc71d36-e361-4406-9e29-3e94a7056136", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"794d68bb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-794d68bb9-6lxsm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali80d4d179d0e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:57.651686 containerd[1686]: 2025-11-05 15:48:57.596 [INFO][4206] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" Namespace="calico-system" Pod="whisker-794d68bb9-6lxsm" WorkloadEndpoint="localhost-k8s-whisker--794d68bb9--6lxsm-eth0" Nov 5 15:48:57.651749 containerd[1686]: 2025-11-05 15:48:57.597 [INFO][4206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80d4d179d0e ContainerID="a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" Namespace="calico-system" Pod="whisker-794d68bb9-6lxsm" WorkloadEndpoint="localhost-k8s-whisker--794d68bb9--6lxsm-eth0" Nov 5 15:48:57.651749 containerd[1686]: 2025-11-05 15:48:57.621 [INFO][4206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" Namespace="calico-system" Pod="whisker-794d68bb9-6lxsm" WorkloadEndpoint="localhost-k8s-whisker--794d68bb9--6lxsm-eth0" Nov 5 15:48:57.651786 containerd[1686]: 2025-11-05 15:48:57.621 [INFO][4206] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" Namespace="calico-system" Pod="whisker-794d68bb9-6lxsm" WorkloadEndpoint="localhost-k8s-whisker--794d68bb9--6lxsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--794d68bb9--6lxsm-eth0", GenerateName:"whisker-794d68bb9-", Namespace:"calico-system", SelfLink:"", UID:"0dc71d36-e361-4406-9e29-3e94a7056136", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 56, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"794d68bb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609", Pod:"whisker-794d68bb9-6lxsm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali80d4d179d0e", MAC:"e6:71:a9:01:15:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:57.651836 containerd[1686]: 2025-11-05 15:48:57.640 [INFO][4206] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" Namespace="calico-system" Pod="whisker-794d68bb9-6lxsm" WorkloadEndpoint="localhost-k8s-whisker--794d68bb9--6lxsm-eth0" Nov 5 15:48:57.775063 kubelet[2996]: I1105 15:48:57.773995 2996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12c26640-e854-41d4-a3c6-4ceffbd5949e" path="/var/lib/kubelet/pods/12c26640-e854-41d4-a3c6-4ceffbd5949e/volumes" Nov 5 15:48:57.853799 containerd[1686]: time="2025-11-05T15:48:57.852838155Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443c5b7e7b3751d7b453e391a592cbc3a9a0aadb25cff9a4826be2f933d92c9b\" id:\"61b9b6d2fed1f9a2722a62a3a9c743e33fc2efb66257dc804010833b2a04ae29\" pid:4326 exit_status:1 exited_at:{seconds:1762357737 nanos:852479128}" Nov 5 15:48:57.882491 containerd[1686]: 
time="2025-11-05T15:48:57.882458608Z" level=info msg="connecting to shim a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609" address="unix:///run/containerd/s/eea7d4ad13c722fc78b2330f79dd24a6754bfcca7308e3524caa81988b5bc0d4" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:57.903308 systemd[1]: Started cri-containerd-a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609.scope - libcontainer container a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609. Nov 5 15:48:57.912256 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:48:57.962697 containerd[1686]: time="2025-11-05T15:48:57.962459519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-794d68bb9-6lxsm,Uid:0dc71d36-e361-4406-9e29-3e94a7056136,Namespace:calico-system,Attempt:0,} returns sandbox id \"a96ec8969b3749e1d2fe4ecaeed6ecfa156585ab7a767595a3910bb703ab3609\"" Nov 5 15:48:58.011813 containerd[1686]: time="2025-11-05T15:48:58.011783754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:48:58.079389 containerd[1686]: time="2025-11-05T15:48:58.079157734Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443c5b7e7b3751d7b453e391a592cbc3a9a0aadb25cff9a4826be2f933d92c9b\" id:\"63769bf80d4324f1ae8f97448038cf5767b5e42eba9e7605dfe6bbebef84bd50\" pid:4391 exit_status:1 exited_at:{seconds:1762357738 nanos:78975766}" Nov 5 15:48:58.437218 systemd-networkd[1579]: vxlan.calico: Gained IPv6LL Nov 5 15:48:58.488440 containerd[1686]: time="2025-11-05T15:48:58.488384492Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:58.488916 containerd[1686]: time="2025-11-05T15:48:58.488816472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:48:58.488971 containerd[1686]: time="2025-11-05T15:48:58.488953305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:48:58.492307 kubelet[2996]: E1105 15:48:58.489075 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:48:58.499431 kubelet[2996]: E1105 15:48:58.492202 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:48:58.551085 kubelet[2996]: E1105 15:48:58.550988 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2eef99ffd7d7435483175276c9559998,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ghxh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-794d68bb9-6lxsm_calico-system(0dc71d36-e361-4406-9e29-3e94a7056136): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:58.568935 containerd[1686]: time="2025-11-05T15:48:58.568894460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:48:58.773061 
containerd[1686]: time="2025-11-05T15:48:58.772981628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b57c799-kk9c4,Uid:820085b5-0e01-4ac8-ba7a-c7255a8764f6,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:58.773412 containerd[1686]: time="2025-11-05T15:48:58.773389804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9kqp,Uid:1eb055ed-aaab-4a59-8ad1-afd7a5d570c1,Namespace:kube-system,Attempt:0,}" Nov 5 15:48:58.935782 systemd-networkd[1579]: cali4488b0d1665: Link UP Nov 5 15:48:58.936302 systemd-networkd[1579]: cali4488b0d1665: Gained carrier Nov 5 15:48:58.946242 containerd[1686]: time="2025-11-05T15:48:58.946187195Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:58.947069 containerd[1686]: time="2025-11-05T15:48:58.946905991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:48:58.947422 containerd[1686]: time="2025-11-05T15:48:58.946964673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:48:58.948413 kubelet[2996]: E1105 15:48:58.947419 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:48:58.948413 kubelet[2996]: E1105 15:48:58.947467 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:48:58.948479 kubelet[2996]: E1105 15:48:58.947569 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghxh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce
:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-794d68bb9-6lxsm_calico-system(0dc71d36-e361-4406-9e29-3e94a7056136): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:58.949185 systemd-networkd[1579]: cali80d4d179d0e: Gained IPv6LL Nov 5 15:48:58.951459 kubelet[2996]: E1105 15:48:58.950638 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-794d68bb9-6lxsm" podUID="0dc71d36-e361-4406-9e29-3e94a7056136" Nov 5 15:48:58.953527 containerd[1686]: 2025-11-05 15:48:58.885 [INFO][4416] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--r9kqp-eth0 coredns-668d6bf9bc- kube-system 1eb055ed-aaab-4a59-8ad1-afd7a5d570c1 842 0 2025-11-05 15:48:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-r9kqp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4488b0d1665 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" Namespace="kube-system" Pod="coredns-668d6bf9bc-r9kqp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r9kqp-" Nov 5 15:48:58.953527 containerd[1686]: 2025-11-05 15:48:58.885 [INFO][4416] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" Namespace="kube-system" Pod="coredns-668d6bf9bc-r9kqp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r9kqp-eth0" Nov 5 15:48:58.953527 containerd[1686]: 2025-11-05 15:48:58.902 [INFO][4432] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" HandleID="k8s-pod-network.90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" Workload="localhost-k8s-coredns--668d6bf9bc--r9kqp-eth0" Nov 5 15:48:58.953667 containerd[1686]: 2025-11-05 15:48:58.902 [INFO][4432] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" HandleID="k8s-pod-network.90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" Workload="localhost-k8s-coredns--668d6bf9bc--r9kqp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f110), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-r9kqp", "timestamp":"2025-11-05 15:48:58.902189343 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:48:58.953667 containerd[1686]: 
2025-11-05 15:48:58.902 [INFO][4432] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:48:58.953667 containerd[1686]: 2025-11-05 15:48:58.902 [INFO][4432] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:48:58.953667 containerd[1686]: 2025-11-05 15:48:58.902 [INFO][4432] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:48:58.953667 containerd[1686]: 2025-11-05 15:48:58.906 [INFO][4432] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" host="localhost" Nov 5 15:48:58.953667 containerd[1686]: 2025-11-05 15:48:58.909 [INFO][4432] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:48:58.953667 containerd[1686]: 2025-11-05 15:48:58.911 [INFO][4432] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:48:58.953667 containerd[1686]: 2025-11-05 15:48:58.912 [INFO][4432] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:48:58.953667 containerd[1686]: 2025-11-05 15:48:58.913 [INFO][4432] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:48:58.953667 containerd[1686]: 2025-11-05 15:48:58.913 [INFO][4432] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" host="localhost" Nov 5 15:48:58.954788 containerd[1686]: 2025-11-05 15:48:58.914 [INFO][4432] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e Nov 5 15:48:58.954788 containerd[1686]: 2025-11-05 15:48:58.919 [INFO][4432] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" host="localhost" 
Nov 5 15:48:58.954788 containerd[1686]: 2025-11-05 15:48:58.928 [INFO][4432] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" host="localhost" Nov 5 15:48:58.954788 containerd[1686]: 2025-11-05 15:48:58.928 [INFO][4432] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" host="localhost" Nov 5 15:48:58.954788 containerd[1686]: 2025-11-05 15:48:58.929 [INFO][4432] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:48:58.954788 containerd[1686]: 2025-11-05 15:48:58.929 [INFO][4432] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" HandleID="k8s-pod-network.90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" Workload="localhost-k8s-coredns--668d6bf9bc--r9kqp-eth0" Nov 5 15:48:58.954891 containerd[1686]: 2025-11-05 15:48:58.932 [INFO][4416] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" Namespace="kube-system" Pod="coredns-668d6bf9bc-r9kqp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r9kqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--r9kqp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1eb055ed-aaab-4a59-8ad1-afd7a5d570c1", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-r9kqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4488b0d1665", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:58.955298 containerd[1686]: 2025-11-05 15:48:58.932 [INFO][4416] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" Namespace="kube-system" Pod="coredns-668d6bf9bc-r9kqp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r9kqp-eth0" Nov 5 15:48:58.955298 containerd[1686]: 2025-11-05 15:48:58.932 [INFO][4416] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4488b0d1665 ContainerID="90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" Namespace="kube-system" Pod="coredns-668d6bf9bc-r9kqp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r9kqp-eth0" Nov 5 15:48:58.955298 containerd[1686]: 2025-11-05 15:48:58.936 [INFO][4416] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" Namespace="kube-system" Pod="coredns-668d6bf9bc-r9kqp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r9kqp-eth0" Nov 5 15:48:58.955673 containerd[1686]: 2025-11-05 15:48:58.936 [INFO][4416] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" Namespace="kube-system" Pod="coredns-668d6bf9bc-r9kqp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r9kqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--r9kqp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1eb055ed-aaab-4a59-8ad1-afd7a5d570c1", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e", Pod:"coredns-668d6bf9bc-r9kqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4488b0d1665", MAC:"86:1a:78:82:2e:5f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:58.955673 containerd[1686]: 2025-11-05 15:48:58.949 [INFO][4416] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" Namespace="kube-system" Pod="coredns-668d6bf9bc-r9kqp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r9kqp-eth0" Nov 5 15:48:58.966526 containerd[1686]: time="2025-11-05T15:48:58.966338739Z" level=info msg="connecting to shim 90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e" address="unix:///run/containerd/s/47fc57ff0877699031d572cc08d40f3d4acba8ad48b339ebee967b8abc711966" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:58.991187 systemd[1]: Started cri-containerd-90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e.scope - libcontainer container 90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e. 
Nov 5 15:48:59.002125 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:48:59.011675 kubelet[2996]: E1105 15:48:59.011648 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-794d68bb9-6lxsm" podUID="0dc71d36-e361-4406-9e29-3e94a7056136" Nov 5 15:48:59.060653 systemd-networkd[1579]: caliea4dc549cfe: Link UP Nov 5 15:48:59.061836 systemd-networkd[1579]: caliea4dc549cfe: Gained carrier Nov 5 15:48:59.075085 containerd[1686]: time="2025-11-05T15:48:59.075048798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9kqp,Uid:1eb055ed-aaab-4a59-8ad1-afd7a5d570c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e\"" Nov 5 15:48:59.079329 containerd[1686]: time="2025-11-05T15:48:59.079308251Z" level=info msg="CreateContainer within sandbox \"90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:58.865 [INFO][4405] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--b57c799--kk9c4-eth0 calico-kube-controllers-b57c799- calico-system 820085b5-0e01-4ac8-ba7a-c7255a8764f6 844 0 2025-11-05 15:48:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b57c799 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-b57c799-kk9c4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliea4dc549cfe [] [] }} ContainerID="2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" Namespace="calico-system" Pod="calico-kube-controllers-b57c799-kk9c4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b57c799--kk9c4-" Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:58.865 [INFO][4405] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" Namespace="calico-system" Pod="calico-kube-controllers-b57c799-kk9c4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b57c799--kk9c4-eth0" Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:58.908 [INFO][4426] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" HandleID="k8s-pod-network.2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" Workload="localhost-k8s-calico--kube--controllers--b57c799--kk9c4-eth0" Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:58.908 [INFO][4426] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" HandleID="k8s-pod-network.2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" 
Workload="localhost-k8s-calico--kube--controllers--b57c799--kk9c4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-b57c799-kk9c4", "timestamp":"2025-11-05 15:48:58.908238161 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:58.908 [INFO][4426] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:58.929 [INFO][4426] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:58.929 [INFO][4426] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:59.008 [INFO][4426] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" host="localhost" Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:59.012 [INFO][4426] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:59.020 [INFO][4426] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:59.024 [INFO][4426] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:59.026 [INFO][4426] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:59.026 [INFO][4426] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" host="localhost" Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:59.027 [INFO][4426] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:59.039 [INFO][4426] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" host="localhost" Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:59.048 [INFO][4426] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" host="localhost" Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:59.048 [INFO][4426] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" host="localhost" Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:59.048 [INFO][4426] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:48:59.089661 containerd[1686]: 2025-11-05 15:48:59.048 [INFO][4426] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" HandleID="k8s-pod-network.2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" Workload="localhost-k8s-calico--kube--controllers--b57c799--kk9c4-eth0" Nov 5 15:48:59.096408 containerd[1686]: 2025-11-05 15:48:59.056 [INFO][4405] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" Namespace="calico-system" Pod="calico-kube-controllers-b57c799-kk9c4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b57c799--kk9c4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b57c799--kk9c4-eth0", GenerateName:"calico-kube-controllers-b57c799-", Namespace:"calico-system", SelfLink:"", UID:"820085b5-0e01-4ac8-ba7a-c7255a8764f6", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b57c799", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-b57c799-kk9c4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea4dc549cfe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:59.096408 containerd[1686]: 2025-11-05 15:48:59.057 [INFO][4405] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" Namespace="calico-system" Pod="calico-kube-controllers-b57c799-kk9c4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b57c799--kk9c4-eth0" Nov 5 15:48:59.096408 containerd[1686]: 2025-11-05 15:48:59.057 [INFO][4405] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea4dc549cfe ContainerID="2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" Namespace="calico-system" Pod="calico-kube-controllers-b57c799-kk9c4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b57c799--kk9c4-eth0" Nov 5 15:48:59.096408 containerd[1686]: 2025-11-05 15:48:59.062 [INFO][4405] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" Namespace="calico-system" Pod="calico-kube-controllers-b57c799-kk9c4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b57c799--kk9c4-eth0" Nov 5 15:48:59.096408 containerd[1686]: 2025-11-05 15:48:59.065 [INFO][4405] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" Namespace="calico-system" Pod="calico-kube-controllers-b57c799-kk9c4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b57c799--kk9c4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b57c799--kk9c4-eth0", GenerateName:"calico-kube-controllers-b57c799-", Namespace:"calico-system", SelfLink:"", UID:"820085b5-0e01-4ac8-ba7a-c7255a8764f6", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b57c799", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e", Pod:"calico-kube-controllers-b57c799-kk9c4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea4dc549cfe", MAC:"26:00:72:d2:da:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:59.096408 containerd[1686]: 2025-11-05 15:48:59.085 [INFO][4405] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" Namespace="calico-system" Pod="calico-kube-controllers-b57c799-kk9c4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b57c799--kk9c4-eth0" Nov 5 15:48:59.156636 containerd[1686]: time="2025-11-05T15:48:59.155887536Z" level=info msg="Container 
ec70cabbb96793a0a6c3e37d2b954fb7b5d453b9f009bc61613b717ea72f5592: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:48:59.161407 containerd[1686]: time="2025-11-05T15:48:59.161381806Z" level=info msg="connecting to shim 2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e" address="unix:///run/containerd/s/72ea02ddcb6cc80ca84954c1ea90235e16d84435e0283e2d60f1186f0bb1272e" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:59.162962 containerd[1686]: time="2025-11-05T15:48:59.162931791Z" level=info msg="CreateContainer within sandbox \"90b795d8ca169f122a1ee679fd4ad5e126a3e46491150f5b0e5e627b3a40c09e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ec70cabbb96793a0a6c3e37d2b954fb7b5d453b9f009bc61613b717ea72f5592\"" Nov 5 15:48:59.163922 containerd[1686]: time="2025-11-05T15:48:59.163850628Z" level=info msg="StartContainer for \"ec70cabbb96793a0a6c3e37d2b954fb7b5d453b9f009bc61613b717ea72f5592\"" Nov 5 15:48:59.166281 containerd[1686]: time="2025-11-05T15:48:59.166246401Z" level=info msg="connecting to shim ec70cabbb96793a0a6c3e37d2b954fb7b5d453b9f009bc61613b717ea72f5592" address="unix:///run/containerd/s/47fc57ff0877699031d572cc08d40f3d4acba8ad48b339ebee967b8abc711966" protocol=ttrpc version=3 Nov 5 15:48:59.189189 systemd[1]: Started cri-containerd-ec70cabbb96793a0a6c3e37d2b954fb7b5d453b9f009bc61613b717ea72f5592.scope - libcontainer container ec70cabbb96793a0a6c3e37d2b954fb7b5d453b9f009bc61613b717ea72f5592. Nov 5 15:48:59.191912 systemd[1]: Started cri-containerd-2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e.scope - libcontainer container 2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e. 
Nov 5 15:48:59.203333 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:48:59.230208 containerd[1686]: time="2025-11-05T15:48:59.230176524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b57c799-kk9c4,Uid:820085b5-0e01-4ac8-ba7a-c7255a8764f6,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c82a8bae33dd71fe568c3e358f062a46416fc03ab61ab81e965f4d6b051471e\"" Nov 5 15:48:59.232338 containerd[1686]: time="2025-11-05T15:48:59.232280480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:48:59.244795 containerd[1686]: time="2025-11-05T15:48:59.244727674Z" level=info msg="StartContainer for \"ec70cabbb96793a0a6c3e37d2b954fb7b5d453b9f009bc61613b717ea72f5592\" returns successfully" Nov 5 15:48:59.567914 containerd[1686]: time="2025-11-05T15:48:59.567835738Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:59.579722 containerd[1686]: time="2025-11-05T15:48:59.579666902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:48:59.583816 containerd[1686]: time="2025-11-05T15:48:59.579763761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:48:59.584126 kubelet[2996]: E1105 15:48:59.579885 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:48:59.584126 kubelet[2996]: E1105 15:48:59.579941 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:48:59.584126 kubelet[2996]: E1105 15:48:59.580069 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6snb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b57c799-kk9c4_calico-system(820085b5-0e01-4ac8-ba7a-c7255a8764f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:59.584126 kubelet[2996]: E1105 15:48:59.581216 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-b57c799-kk9c4" podUID="820085b5-0e01-4ac8-ba7a-c7255a8764f6" Nov 5 15:48:59.773855 containerd[1686]: time="2025-11-05T15:48:59.773690818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66dbdd57f9-thx98,Uid:0fbf3e00-38e0-4753-acbc-319c3080ae42,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:48:59.791660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021006247.mount: Deactivated successfully. Nov 5 15:48:59.916791 systemd-networkd[1579]: califb13ed599f1: Link UP Nov 5 15:48:59.917461 systemd-networkd[1579]: califb13ed599f1: Gained carrier Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.865 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66dbdd57f9--thx98-eth0 calico-apiserver-66dbdd57f9- calico-apiserver 0fbf3e00-38e0-4753-acbc-319c3080ae42 834 0 2025-11-05 15:48:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66dbdd57f9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66dbdd57f9-thx98 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califb13ed599f1 [] [] }} ContainerID="9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" Namespace="calico-apiserver" Pod="calico-apiserver-66dbdd57f9-thx98" WorkloadEndpoint="localhost-k8s-calico--apiserver--66dbdd57f9--thx98-" Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.866 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" Namespace="calico-apiserver" Pod="calico-apiserver-66dbdd57f9-thx98" WorkloadEndpoint="localhost-k8s-calico--apiserver--66dbdd57f9--thx98-eth0" Nov 5 
15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.889 [INFO][4605] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" HandleID="k8s-pod-network.9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" Workload="localhost-k8s-calico--apiserver--66dbdd57f9--thx98-eth0" Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.889 [INFO][4605] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" HandleID="k8s-pod-network.9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" Workload="localhost-k8s-calico--apiserver--66dbdd57f9--thx98-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-66dbdd57f9-thx98", "timestamp":"2025-11-05 15:48:59.88983082 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.890 [INFO][4605] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.890 [INFO][4605] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.890 [INFO][4605] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.894 [INFO][4605] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" host="localhost" Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.897 [INFO][4605] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.900 [INFO][4605] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.901 [INFO][4605] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.903 [INFO][4605] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.903 [INFO][4605] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" host="localhost" Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.904 [INFO][4605] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26 Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.908 [INFO][4605] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" host="localhost" Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.911 [INFO][4605] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" host="localhost" Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.911 [INFO][4605] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" host="localhost" Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.911 [INFO][4605] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:48:59.930386 containerd[1686]: 2025-11-05 15:48:59.911 [INFO][4605] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" HandleID="k8s-pod-network.9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" Workload="localhost-k8s-calico--apiserver--66dbdd57f9--thx98-eth0" Nov 5 15:48:59.935347 containerd[1686]: 2025-11-05 15:48:59.913 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" Namespace="calico-apiserver" Pod="calico-apiserver-66dbdd57f9-thx98" WorkloadEndpoint="localhost-k8s-calico--apiserver--66dbdd57f9--thx98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66dbdd57f9--thx98-eth0", GenerateName:"calico-apiserver-66dbdd57f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"0fbf3e00-38e0-4753-acbc-319c3080ae42", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66dbdd57f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66dbdd57f9-thx98", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb13ed599f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:59.935347 containerd[1686]: 2025-11-05 15:48:59.913 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" Namespace="calico-apiserver" Pod="calico-apiserver-66dbdd57f9-thx98" WorkloadEndpoint="localhost-k8s-calico--apiserver--66dbdd57f9--thx98-eth0" Nov 5 15:48:59.935347 containerd[1686]: 2025-11-05 15:48:59.913 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb13ed599f1 ContainerID="9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" Namespace="calico-apiserver" Pod="calico-apiserver-66dbdd57f9-thx98" WorkloadEndpoint="localhost-k8s-calico--apiserver--66dbdd57f9--thx98-eth0" Nov 5 15:48:59.935347 containerd[1686]: 2025-11-05 15:48:59.918 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" Namespace="calico-apiserver" Pod="calico-apiserver-66dbdd57f9-thx98" WorkloadEndpoint="localhost-k8s-calico--apiserver--66dbdd57f9--thx98-eth0" Nov 5 15:48:59.935347 containerd[1686]: 2025-11-05 15:48:59.919 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" Namespace="calico-apiserver" Pod="calico-apiserver-66dbdd57f9-thx98" WorkloadEndpoint="localhost-k8s-calico--apiserver--66dbdd57f9--thx98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66dbdd57f9--thx98-eth0", GenerateName:"calico-apiserver-66dbdd57f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"0fbf3e00-38e0-4753-acbc-319c3080ae42", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66dbdd57f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26", Pod:"calico-apiserver-66dbdd57f9-thx98", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb13ed599f1", MAC:"be:8e:49:4d:25:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:59.935347 containerd[1686]: 2025-11-05 15:48:59.926 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" Namespace="calico-apiserver" Pod="calico-apiserver-66dbdd57f9-thx98" WorkloadEndpoint="localhost-k8s-calico--apiserver--66dbdd57f9--thx98-eth0" Nov 5 15:48:59.948450 containerd[1686]: time="2025-11-05T15:48:59.948389639Z" level=info msg="connecting to shim 9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26" address="unix:///run/containerd/s/947f9126f3fc6d0423194ed5c0ea377bd47863416b037f70815551c1f9f350d3" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:59.969278 systemd[1]: Started cri-containerd-9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26.scope - libcontainer container 9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26. Nov 5 15:48:59.977205 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:49:00.024496 containerd[1686]: time="2025-11-05T15:49:00.024464584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66dbdd57f9-thx98,Uid:0fbf3e00-38e0-4753-acbc-319c3080ae42,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9e38a6b8ff73eef1bc3159737cf5339c5af865f1a5f5da06dcc095d2629fac26\"" Nov 5 15:49:00.026179 containerd[1686]: time="2025-11-05T15:49:00.026140106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:49:00.037312 systemd-networkd[1579]: cali4488b0d1665: Gained IPv6LL Nov 5 15:49:00.107643 kubelet[2996]: E1105 15:49:00.107320 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b57c799-kk9c4" podUID="820085b5-0e01-4ac8-ba7a-c7255a8764f6" Nov 5 15:49:00.137481 kubelet[2996]: I1105 15:49:00.136014 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-r9kqp" podStartSLOduration=35.131706984 podStartE2EDuration="35.131706984s" podCreationTimestamp="2025-11-05 15:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:49:00.120578472 +0000 UTC m=+40.428882513" watchObservedRunningTime="2025-11-05 15:49:00.131706984 +0000 UTC m=+40.440011004" Nov 5 15:49:00.353706 containerd[1686]: time="2025-11-05T15:49:00.353001794Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:00.356913 containerd[1686]: time="2025-11-05T15:49:00.356834417Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:49:00.358139 kubelet[2996]: E1105 15:49:00.357069 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:00.358139 kubelet[2996]: E1105 15:49:00.357122 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:00.358139 kubelet[2996]: E1105 15:49:00.357207 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xw72x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66dbdd57f9-thx98_calico-apiserver(0fbf3e00-38e0-4753-acbc-319c3080ae42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:00.358294 containerd[1686]: time="2025-11-05T15:49:00.358040102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:00.358388 kubelet[2996]: E1105 15:49:00.358367 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dbdd57f9-thx98" podUID="0fbf3e00-38e0-4753-acbc-319c3080ae42" Nov 5 15:49:00.741358 
systemd-networkd[1579]: caliea4dc549cfe: Gained IPv6LL Nov 5 15:49:00.773114 containerd[1686]: time="2025-11-05T15:49:00.773036130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556695d997-5xswp,Uid:1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:49:00.773442 containerd[1686]: time="2025-11-05T15:49:00.773393417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9hj2w,Uid:5c5e9d05-7eb8-471e-907b-c18b5992bb51,Namespace:calico-system,Attempt:0,}" Nov 5 15:49:00.773656 containerd[1686]: time="2025-11-05T15:49:00.773606919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pv5b2,Uid:9045f224-fdd2-4555-a4fe-fb613a1c7ed0,Namespace:calico-system,Attempt:0,}" Nov 5 15:49:00.907157 systemd-networkd[1579]: calia1ca5da0343: Link UP Nov 5 15:49:00.907577 systemd-networkd[1579]: calia1ca5da0343: Gained carrier Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.845 [INFO][4679] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--pv5b2-eth0 goldmane-666569f655- calico-system 9045f224-fdd2-4555-a4fe-fb613a1c7ed0 840 0 2025-11-05 15:48:35 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-pv5b2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia1ca5da0343 [] [] }} ContainerID="a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" Namespace="calico-system" Pod="goldmane-666569f655-pv5b2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pv5b2-" Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.845 [INFO][4679] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" Namespace="calico-system" Pod="goldmane-666569f655-pv5b2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pv5b2-eth0" Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.877 [INFO][4708] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" HandleID="k8s-pod-network.a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" Workload="localhost-k8s-goldmane--666569f655--pv5b2-eth0" Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.877 [INFO][4708] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" HandleID="k8s-pod-network.a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" Workload="localhost-k8s-goldmane--666569f655--pv5b2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-pv5b2", "timestamp":"2025-11-05 15:49:00.877879478 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.878 [INFO][4708] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.878 [INFO][4708] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.878 [INFO][4708] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.885 [INFO][4708] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" host="localhost" Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.888 [INFO][4708] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.891 [INFO][4708] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.892 [INFO][4708] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.893 [INFO][4708] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.893 [INFO][4708] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" host="localhost" Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.894 [INFO][4708] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39 Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.896 [INFO][4708] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" host="localhost" Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.900 [INFO][4708] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" host="localhost" Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.900 [INFO][4708] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" host="localhost" Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.901 [INFO][4708] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:49:00.954844 containerd[1686]: 2025-11-05 15:49:00.901 [INFO][4708] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" HandleID="k8s-pod-network.a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" Workload="localhost-k8s-goldmane--666569f655--pv5b2-eth0" Nov 5 15:49:00.971810 containerd[1686]: 2025-11-05 15:49:00.903 [INFO][4679] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" Namespace="calico-system" Pod="goldmane-666569f655-pv5b2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pv5b2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--pv5b2-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9045f224-fdd2-4555-a4fe-fb613a1c7ed0", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-pv5b2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia1ca5da0343", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:49:00.971810 containerd[1686]: 2025-11-05 15:49:00.904 [INFO][4679] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" Namespace="calico-system" Pod="goldmane-666569f655-pv5b2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pv5b2-eth0" Nov 5 15:49:00.971810 containerd[1686]: 2025-11-05 15:49:00.904 [INFO][4679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1ca5da0343 ContainerID="a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" Namespace="calico-system" Pod="goldmane-666569f655-pv5b2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pv5b2-eth0" Nov 5 15:49:00.971810 containerd[1686]: 2025-11-05 15:49:00.908 [INFO][4679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" Namespace="calico-system" Pod="goldmane-666569f655-pv5b2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pv5b2-eth0" Nov 5 15:49:00.971810 containerd[1686]: 2025-11-05 15:49:00.909 [INFO][4679] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" Namespace="calico-system" Pod="goldmane-666569f655-pv5b2" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pv5b2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--pv5b2-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9045f224-fdd2-4555-a4fe-fb613a1c7ed0", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39", Pod:"goldmane-666569f655-pv5b2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia1ca5da0343", MAC:"da:0d:cc:24:4b:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:49:00.971810 containerd[1686]: 2025-11-05 15:49:00.951 [INFO][4679] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" Namespace="calico-system" Pod="goldmane-666569f655-pv5b2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pv5b2-eth0" Nov 5 15:49:01.046149 systemd-networkd[1579]: cali7c0829c8398: Link UP Nov 5 15:49:01.048471 systemd-networkd[1579]: 
cali7c0829c8398: Gained carrier Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:00.847 [INFO][4670] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9hj2w-eth0 csi-node-driver- calico-system 5c5e9d05-7eb8-471e-907b-c18b5992bb51 732 0 2025-11-05 15:48:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9hj2w eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7c0829c8398 [] [] }} ContainerID="458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" Namespace="calico-system" Pod="csi-node-driver-9hj2w" WorkloadEndpoint="localhost-k8s-csi--node--driver--9hj2w-" Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:00.848 [INFO][4670] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" Namespace="calico-system" Pod="csi-node-driver-9hj2w" WorkloadEndpoint="localhost-k8s-csi--node--driver--9hj2w-eth0" Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:00.878 [INFO][4711] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" HandleID="k8s-pod-network.458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" Workload="localhost-k8s-csi--node--driver--9hj2w-eth0" Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:00.878 [INFO][4711] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" HandleID="k8s-pod-network.458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" 
Workload="localhost-k8s-csi--node--driver--9hj2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9hj2w", "timestamp":"2025-11-05 15:49:00.878485843 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:00.878 [INFO][4711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:00.901 [INFO][4711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:00.901 [INFO][4711] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:00.986 [INFO][4711] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" host="localhost" Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:00.989 [INFO][4711] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:00.991 [INFO][4711] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:00.992 [INFO][4711] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:00.994 [INFO][4711] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:00.994 [INFO][4711] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" host="localhost" Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:00.994 [INFO][4711] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:01.028 [INFO][4711] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" host="localhost" Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:01.041 [INFO][4711] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" host="localhost" Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:01.041 [INFO][4711] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" host="localhost" Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:01.041 [INFO][4711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:49:01.065962 containerd[1686]: 2025-11-05 15:49:01.042 [INFO][4711] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" HandleID="k8s-pod-network.458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" Workload="localhost-k8s-csi--node--driver--9hj2w-eth0" Nov 5 15:49:01.067025 containerd[1686]: 2025-11-05 15:49:01.043 [INFO][4670] cni-plugin/k8s.go 418: Populated endpoint ContainerID="458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" Namespace="calico-system" Pod="csi-node-driver-9hj2w" WorkloadEndpoint="localhost-k8s-csi--node--driver--9hj2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9hj2w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5c5e9d05-7eb8-471e-907b-c18b5992bb51", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9hj2w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7c0829c8398", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:49:01.067025 containerd[1686]: 2025-11-05 15:49:01.044 [INFO][4670] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" Namespace="calico-system" Pod="csi-node-driver-9hj2w" WorkloadEndpoint="localhost-k8s-csi--node--driver--9hj2w-eth0" Nov 5 15:49:01.067025 containerd[1686]: 2025-11-05 15:49:01.044 [INFO][4670] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c0829c8398 ContainerID="458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" Namespace="calico-system" Pod="csi-node-driver-9hj2w" WorkloadEndpoint="localhost-k8s-csi--node--driver--9hj2w-eth0" Nov 5 15:49:01.067025 containerd[1686]: 2025-11-05 15:49:01.050 [INFO][4670] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" Namespace="calico-system" Pod="csi-node-driver-9hj2w" WorkloadEndpoint="localhost-k8s-csi--node--driver--9hj2w-eth0" Nov 5 15:49:01.067025 containerd[1686]: 2025-11-05 15:49:01.051 [INFO][4670] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" Namespace="calico-system" Pod="csi-node-driver-9hj2w" WorkloadEndpoint="localhost-k8s-csi--node--driver--9hj2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9hj2w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5c5e9d05-7eb8-471e-907b-c18b5992bb51", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 37, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b", Pod:"csi-node-driver-9hj2w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7c0829c8398", MAC:"ae:f4:30:3d:70:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:49:01.067025 containerd[1686]: 2025-11-05 15:49:01.064 [INFO][4670] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" Namespace="calico-system" Pod="csi-node-driver-9hj2w" WorkloadEndpoint="localhost-k8s-csi--node--driver--9hj2w-eth0" Nov 5 15:49:01.137401 containerd[1686]: time="2025-11-05T15:49:01.137332698Z" level=info msg="connecting to shim 458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b" address="unix:///run/containerd/s/9a696d9818f7d5225b5128613c48586d25d0f74abcad3b826f3156544b50ebeb" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:49:01.141146 systemd-networkd[1579]: cali0d7f5673a37: Link UP Nov 5 15:49:01.141264 systemd-networkd[1579]: cali0d7f5673a37: Gained carrier Nov 5 15:49:01.148430 containerd[1686]: 
time="2025-11-05T15:49:01.148152519Z" level=info msg="connecting to shim a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39" address="unix:///run/containerd/s/683af28da2feaf32394c09733c15ca4dd70c3df5ac1103f8b00ed5241eaf0208" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:49:01.156704 kubelet[2996]: E1105 15:49:01.156606 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b57c799-kk9c4" podUID="820085b5-0e01-4ac8-ba7a-c7255a8764f6" Nov 5 15:49:01.161180 kubelet[2996]: E1105 15:49:01.160320 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dbdd57f9-thx98" podUID="0fbf3e00-38e0-4753-acbc-319c3080ae42" Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:00.845 [INFO][4672] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--556695d997--5xswp-eth0 calico-apiserver-556695d997- calico-apiserver 1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e 843 0 2025-11-05 15:48:33 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:556695d997 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-556695d997-5xswp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0d7f5673a37 [] [] }} ContainerID="e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" Namespace="calico-apiserver" Pod="calico-apiserver-556695d997-5xswp" WorkloadEndpoint="localhost-k8s-calico--apiserver--556695d997--5xswp-" Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:00.845 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" Namespace="calico-apiserver" Pod="calico-apiserver-556695d997-5xswp" WorkloadEndpoint="localhost-k8s-calico--apiserver--556695d997--5xswp-eth0" Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:00.883 [INFO][4719] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" HandleID="k8s-pod-network.e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" Workload="localhost-k8s-calico--apiserver--556695d997--5xswp-eth0" Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:00.883 [INFO][4719] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" HandleID="k8s-pod-network.e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" Workload="localhost-k8s-calico--apiserver--556695d997--5xswp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-556695d997-5xswp", "timestamp":"2025-11-05 15:49:00.88359488 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:00.883 [INFO][4719] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:01.041 [INFO][4719] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:01.041 [INFO][4719] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:01.087 [INFO][4719] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" host="localhost" Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:01.089 [INFO][4719] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:01.093 [INFO][4719] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:01.096 [INFO][4719] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:01.099 [INFO][4719] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:01.099 [INFO][4719] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" host="localhost" Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:01.101 [INFO][4719] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08 Nov 5 15:49:01.179745 
containerd[1686]: 2025-11-05 15:49:01.120 [INFO][4719] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" host="localhost" Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:01.132 [INFO][4719] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" host="localhost" Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:01.132 [INFO][4719] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" host="localhost" Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:01.132 [INFO][4719] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:49:01.179745 containerd[1686]: 2025-11-05 15:49:01.132 [INFO][4719] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" HandleID="k8s-pod-network.e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" Workload="localhost-k8s-calico--apiserver--556695d997--5xswp-eth0" Nov 5 15:49:01.180832 containerd[1686]: 2025-11-05 15:49:01.137 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" Namespace="calico-apiserver" Pod="calico-apiserver-556695d997-5xswp" WorkloadEndpoint="localhost-k8s-calico--apiserver--556695d997--5xswp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--556695d997--5xswp-eth0", GenerateName:"calico-apiserver-556695d997-", Namespace:"calico-apiserver", SelfLink:"", UID:"1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e", 
ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556695d997", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-556695d997-5xswp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d7f5673a37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:49:01.180832 containerd[1686]: 2025-11-05 15:49:01.137 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" Namespace="calico-apiserver" Pod="calico-apiserver-556695d997-5xswp" WorkloadEndpoint="localhost-k8s-calico--apiserver--556695d997--5xswp-eth0" Nov 5 15:49:01.180832 containerd[1686]: 2025-11-05 15:49:01.137 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d7f5673a37 ContainerID="e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" Namespace="calico-apiserver" Pod="calico-apiserver-556695d997-5xswp" WorkloadEndpoint="localhost-k8s-calico--apiserver--556695d997--5xswp-eth0" Nov 5 15:49:01.180832 containerd[1686]: 2025-11-05 15:49:01.141 [INFO][4672] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" Namespace="calico-apiserver" Pod="calico-apiserver-556695d997-5xswp" WorkloadEndpoint="localhost-k8s-calico--apiserver--556695d997--5xswp-eth0" Nov 5 15:49:01.180832 containerd[1686]: 2025-11-05 15:49:01.145 [INFO][4672] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" Namespace="calico-apiserver" Pod="calico-apiserver-556695d997-5xswp" WorkloadEndpoint="localhost-k8s-calico--apiserver--556695d997--5xswp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--556695d997--5xswp-eth0", GenerateName:"calico-apiserver-556695d997-", Namespace:"calico-apiserver", SelfLink:"", UID:"1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556695d997", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08", Pod:"calico-apiserver-556695d997-5xswp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d7f5673a37", MAC:"36:ff:66:bb:75:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:49:01.180832 containerd[1686]: 2025-11-05 15:49:01.165 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" Namespace="calico-apiserver" Pod="calico-apiserver-556695d997-5xswp" WorkloadEndpoint="localhost-k8s-calico--apiserver--556695d997--5xswp-eth0" Nov 5 15:49:01.205387 systemd[1]: Started cri-containerd-458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b.scope - libcontainer container 458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b. Nov 5 15:49:01.209929 systemd[1]: Started cri-containerd-a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39.scope - libcontainer container a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39. Nov 5 15:49:01.221533 containerd[1686]: time="2025-11-05T15:49:01.221500549Z" level=info msg="connecting to shim e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08" address="unix:///run/containerd/s/4422a7013563fc6f61ba4c08f7c6b7c768deac1b54ca2dfc069a958ed94ffe84" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:49:01.229082 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:49:01.240691 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:49:01.252324 systemd[1]: Started cri-containerd-e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08.scope - libcontainer container e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08. 
Nov 5 15:49:01.253227 systemd-networkd[1579]: califb13ed599f1: Gained IPv6LL Nov 5 15:49:01.256337 containerd[1686]: time="2025-11-05T15:49:01.256266841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9hj2w,Uid:5c5e9d05-7eb8-471e-907b-c18b5992bb51,Namespace:calico-system,Attempt:0,} returns sandbox id \"458a2e8781d295be56828125a21935a254d9a05bd5e2903b62691dd34a10803b\"" Nov 5 15:49:01.258993 containerd[1686]: time="2025-11-05T15:49:01.258957682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:49:01.270615 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:49:01.288298 containerd[1686]: time="2025-11-05T15:49:01.288173930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pv5b2,Uid:9045f224-fdd2-4555-a4fe-fb613a1c7ed0,Namespace:calico-system,Attempt:0,} returns sandbox id \"a074cf10ebfead8376abd8e4482a91a913edd7f1ab0a00911ced7090feb60a39\"" Nov 5 15:49:01.306351 containerd[1686]: time="2025-11-05T15:49:01.306285385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556695d997-5xswp,Uid:1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e0a190520a2d1083280eb3255d35ede604293fcc26cdbc5f89fbbcec8bc68a08\"" Nov 5 15:49:01.705064 containerd[1686]: time="2025-11-05T15:49:01.704622615Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:01.711914 containerd[1686]: time="2025-11-05T15:49:01.711503505Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:49:01.712136 containerd[1686]: time="2025-11-05T15:49:01.711537272Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:49:01.712570 kubelet[2996]: E1105 15:49:01.712238 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:49:01.712570 kubelet[2996]: E1105 15:49:01.712281 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:49:01.712726 kubelet[2996]: E1105 15:49:01.712688 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpvz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9hj2w_calico-system(5c5e9d05-7eb8-471e-907b-c18b5992bb51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:01.713111 containerd[1686]: time="2025-11-05T15:49:01.713060394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:49:01.773633 containerd[1686]: time="2025-11-05T15:49:01.773437386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556695d997-rtkds,Uid:99dbe593-4354-4ab6-ba7c-be3559d541a3,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:49:01.829767 containerd[1686]: time="2025-11-05T15:49:01.829738530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d4wc4,Uid:3e7a2b0d-7ec6-4b5b-b6bd-9ce0b5174a51,Namespace:kube-system,Attempt:0,}" Nov 5 15:49:01.929025 systemd-networkd[1579]: calie891d374243: Link UP Nov 5 15:49:01.930118 systemd-networkd[1579]: calie891d374243: Gained carrier Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.827 [INFO][4896] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--556695d997--rtkds-eth0 calico-apiserver-556695d997- calico-apiserver 99dbe593-4354-4ab6-ba7c-be3559d541a3 838 0 2025-11-05 15:48:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:556695d997 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-556695d997-rtkds eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie891d374243 [] [] }} ContainerID="4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" Namespace="calico-apiserver" Pod="calico-apiserver-556695d997-rtkds" WorkloadEndpoint="localhost-k8s-calico--apiserver--556695d997--rtkds-" Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.828 [INFO][4896] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" Namespace="calico-apiserver" Pod="calico-apiserver-556695d997-rtkds" WorkloadEndpoint="localhost-k8s-calico--apiserver--556695d997--rtkds-eth0" Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.847 [INFO][4909] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" HandleID="k8s-pod-network.4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" Workload="localhost-k8s-calico--apiserver--556695d997--rtkds-eth0" Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.848 [INFO][4909] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" HandleID="k8s-pod-network.4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" Workload="localhost-k8s-calico--apiserver--556695d997--rtkds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-556695d997-rtkds", "timestamp":"2025-11-05 15:49:01.847886645 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.848 [INFO][4909] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.848 [INFO][4909] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.848 [INFO][4909] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.855 [INFO][4909] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" host="localhost" Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.857 [INFO][4909] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.859 [INFO][4909] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.860 [INFO][4909] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.861 [INFO][4909] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.861 [INFO][4909] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" host="localhost" Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.862 [INFO][4909] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.869 [INFO][4909] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" host="localhost" Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.923 [INFO][4909] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" host="localhost" Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.923 [INFO][4909] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" host="localhost" Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.923 [INFO][4909] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:49:01.966293 containerd[1686]: 2025-11-05 15:49:01.923 [INFO][4909] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" HandleID="k8s-pod-network.4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" Workload="localhost-k8s-calico--apiserver--556695d997--rtkds-eth0" Nov 5 15:49:01.975136 containerd[1686]: 2025-11-05 15:49:01.925 [INFO][4896] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" Namespace="calico-apiserver" Pod="calico-apiserver-556695d997-rtkds" WorkloadEndpoint="localhost-k8s-calico--apiserver--556695d997--rtkds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--556695d997--rtkds-eth0", GenerateName:"calico-apiserver-556695d997-", Namespace:"calico-apiserver", SelfLink:"", UID:"99dbe593-4354-4ab6-ba7c-be3559d541a3", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556695d997", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-556695d997-rtkds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie891d374243", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:49:01.975136 containerd[1686]: 2025-11-05 15:49:01.926 [INFO][4896] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" Namespace="calico-apiserver" Pod="calico-apiserver-556695d997-rtkds" WorkloadEndpoint="localhost-k8s-calico--apiserver--556695d997--rtkds-eth0" Nov 5 15:49:01.975136 containerd[1686]: 2025-11-05 15:49:01.926 [INFO][4896] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie891d374243 ContainerID="4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" Namespace="calico-apiserver" Pod="calico-apiserver-556695d997-rtkds" WorkloadEndpoint="localhost-k8s-calico--apiserver--556695d997--rtkds-eth0" Nov 5 15:49:01.975136 containerd[1686]: 2025-11-05 15:49:01.928 [INFO][4896] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" Namespace="calico-apiserver" Pod="calico-apiserver-556695d997-rtkds" WorkloadEndpoint="localhost-k8s-calico--apiserver--556695d997--rtkds-eth0" Nov 5 15:49:01.975136 containerd[1686]: 2025-11-05 15:49:01.928 [INFO][4896] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" Namespace="calico-apiserver" Pod="calico-apiserver-556695d997-rtkds" WorkloadEndpoint="localhost-k8s-calico--apiserver--556695d997--rtkds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--556695d997--rtkds-eth0", GenerateName:"calico-apiserver-556695d997-", Namespace:"calico-apiserver", SelfLink:"", UID:"99dbe593-4354-4ab6-ba7c-be3559d541a3", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556695d997", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e", Pod:"calico-apiserver-556695d997-rtkds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie891d374243", MAC:"de:cf:b9:0a:26:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:49:01.975136 containerd[1686]: 2025-11-05 15:49:01.963 [INFO][4896] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" Namespace="calico-apiserver" Pod="calico-apiserver-556695d997-rtkds" WorkloadEndpoint="localhost-k8s-calico--apiserver--556695d997--rtkds-eth0" Nov 5 15:49:02.042582 containerd[1686]: time="2025-11-05T15:49:02.042549838Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:02.049803 containerd[1686]: time="2025-11-05T15:49:02.049770576Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:49:02.049893 containerd[1686]: time="2025-11-05T15:49:02.049828983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:02.049956 kubelet[2996]: E1105 15:49:02.049924 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:49:02.049987 kubelet[2996]: E1105 15:49:02.049971 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:49:02.051024 containerd[1686]: time="2025-11-05T15:49:02.051006321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:49:02.051166 kubelet[2996]: E1105 15:49:02.051130 2996 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fljrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pv5b2_calico-system(9045f224-fdd2-4555-a4fe-fb613a1c7ed0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:02.052262 kubelet[2996]: E1105 15:49:02.052240 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pv5b2" podUID="9045f224-fdd2-4555-a4fe-fb613a1c7ed0" Nov 5 15:49:02.076556 containerd[1686]: time="2025-11-05T15:49:02.076525744Z" level=info msg="connecting to shim 4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e" 
address="unix:///run/containerd/s/9b2c146cc881be020552e29c6c9c623a869209d1e84d859eec00a0b15939956c" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:49:02.085219 systemd-networkd[1579]: cali7c0829c8398: Gained IPv6LL Nov 5 15:49:02.096049 systemd-networkd[1579]: calib794d36ecba: Link UP Nov 5 15:49:02.098451 systemd-networkd[1579]: calib794d36ecba: Gained carrier Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:01.928 [INFO][4917] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--d4wc4-eth0 coredns-668d6bf9bc- kube-system 3e7a2b0d-7ec6-4b5b-b6bd-9ce0b5174a51 830 0 2025-11-05 15:48:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-d4wc4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib794d36ecba [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4wc4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d4wc4-" Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:01.929 [INFO][4917] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4wc4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d4wc4-eth0" Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:01.950 [INFO][4931] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" HandleID="k8s-pod-network.2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" Workload="localhost-k8s-coredns--668d6bf9bc--d4wc4-eth0" Nov 5 15:49:02.126579 
containerd[1686]: 2025-11-05 15:49:01.950 [INFO][4931] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" HandleID="k8s-pod-network.2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" Workload="localhost-k8s-coredns--668d6bf9bc--d4wc4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f160), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-d4wc4", "timestamp":"2025-11-05 15:49:01.950155075 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:01.950 [INFO][4931] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:01.950 [INFO][4931] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:01.950 [INFO][4931] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:01.967 [INFO][4931] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" host="localhost" Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:01.975 [INFO][4931] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:01.980 [INFO][4931] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:02.035 [INFO][4931] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:02.047 [INFO][4931] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:02.048 [INFO][4931] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" host="localhost" Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:02.060 [INFO][4931] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:02.071 [INFO][4931] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" host="localhost" Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:02.079 [INFO][4931] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" host="localhost" Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:02.080 [INFO][4931] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" host="localhost" Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:02.080 [INFO][4931] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:49:02.126579 containerd[1686]: 2025-11-05 15:49:02.080 [INFO][4931] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" HandleID="k8s-pod-network.2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" Workload="localhost-k8s-coredns--668d6bf9bc--d4wc4-eth0" Nov 5 15:49:02.135081 containerd[1686]: 2025-11-05 15:49:02.088 [INFO][4917] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4wc4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d4wc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d4wc4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3e7a2b0d-7ec6-4b5b-b6bd-9ce0b5174a51", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-d4wc4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib794d36ecba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:49:02.135081 containerd[1686]: 2025-11-05 15:49:02.088 [INFO][4917] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4wc4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d4wc4-eth0" Nov 5 15:49:02.135081 containerd[1686]: 2025-11-05 15:49:02.088 [INFO][4917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib794d36ecba ContainerID="2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4wc4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d4wc4-eth0" Nov 5 15:49:02.135081 containerd[1686]: 2025-11-05 15:49:02.098 [INFO][4917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4wc4" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d4wc4-eth0" Nov 5 15:49:02.135081 containerd[1686]: 2025-11-05 15:49:02.102 [INFO][4917] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4wc4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d4wc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d4wc4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3e7a2b0d-7ec6-4b5b-b6bd-9ce0b5174a51", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a", Pod:"coredns-668d6bf9bc-d4wc4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib794d36ecba", MAC:"26:4b:85:31:04:08", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:49:02.135081 containerd[1686]: 2025-11-05 15:49:02.121 [INFO][4917] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4wc4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d4wc4-eth0" Nov 5 15:49:02.133664 systemd[1]: Started cri-containerd-4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e.scope - libcontainer container 4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e. Nov 5 15:49:02.156015 kubelet[2996]: E1105 15:49:02.155984 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pv5b2" podUID="9045f224-fdd2-4555-a4fe-fb613a1c7ed0" Nov 5 15:49:02.156153 kubelet[2996]: E1105 15:49:02.156144 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-66dbdd57f9-thx98" podUID="0fbf3e00-38e0-4753-acbc-319c3080ae42" Nov 5 15:49:02.164945 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:49:02.264358 containerd[1686]: time="2025-11-05T15:49:02.264209030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556695d997-rtkds,Uid:99dbe593-4354-4ab6-ba7c-be3559d541a3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4417b62fbb5c3df53488b4c5fe30624832591064db4030c75deff7d9ede9f39e\"" Nov 5 15:49:02.275621 containerd[1686]: time="2025-11-05T15:49:02.275467412Z" level=info msg="connecting to shim 2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a" address="unix:///run/containerd/s/f76fc8ea9e7b14d9628b4b323e6b6c23b603eb2d57b2b9abc589f59f65294480" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:49:02.302267 systemd[1]: Started cri-containerd-2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a.scope - libcontainer container 2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a. 
Nov 5 15:49:02.311956 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:49:02.342354 systemd-networkd[1579]: cali0d7f5673a37: Gained IPv6LL Nov 5 15:49:02.347536 containerd[1686]: time="2025-11-05T15:49:02.347511403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d4wc4,Uid:3e7a2b0d-7ec6-4b5b-b6bd-9ce0b5174a51,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a\"" Nov 5 15:49:02.357355 containerd[1686]: time="2025-11-05T15:49:02.357332840Z" level=info msg="CreateContainer within sandbox \"2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:49:02.390903 containerd[1686]: time="2025-11-05T15:49:02.389833653Z" level=info msg="Container b151afc45ca8ae0df885d4279262c20ddfdd12a14aefb9b9bed471529736d152: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:49:02.404191 containerd[1686]: time="2025-11-05T15:49:02.404155932Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:02.409629 containerd[1686]: time="2025-11-05T15:49:02.409604055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:49:02.409988 containerd[1686]: time="2025-11-05T15:49:02.409668141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:02.410014 kubelet[2996]: E1105 15:49:02.409813 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:02.410014 kubelet[2996]: E1105 15:49:02.409861 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:02.410228 kubelet[2996]: E1105 15:49:02.410182 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p28c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556695d997-5xswp_calico-apiserver(1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:02.410296 containerd[1686]: time="2025-11-05T15:49:02.410086291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:49:02.411742 kubelet[2996]: E1105 15:49:02.411718 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-5xswp" podUID="1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e" Nov 5 15:49:02.413535 containerd[1686]: 
time="2025-11-05T15:49:02.413514623Z" level=info msg="CreateContainer within sandbox \"2ad0e57816fca09a58bb837995873875294e9f39be6b2d36c14ecaf2d988798a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b151afc45ca8ae0df885d4279262c20ddfdd12a14aefb9b9bed471529736d152\"" Nov 5 15:49:02.413876 containerd[1686]: time="2025-11-05T15:49:02.413860505Z" level=info msg="StartContainer for \"b151afc45ca8ae0df885d4279262c20ddfdd12a14aefb9b9bed471529736d152\"" Nov 5 15:49:02.414457 containerd[1686]: time="2025-11-05T15:49:02.414440754Z" level=info msg="connecting to shim b151afc45ca8ae0df885d4279262c20ddfdd12a14aefb9b9bed471529736d152" address="unix:///run/containerd/s/f76fc8ea9e7b14d9628b4b323e6b6c23b603eb2d57b2b9abc589f59f65294480" protocol=ttrpc version=3 Nov 5 15:49:02.435646 systemd[1]: Started cri-containerd-b151afc45ca8ae0df885d4279262c20ddfdd12a14aefb9b9bed471529736d152.scope - libcontainer container b151afc45ca8ae0df885d4279262c20ddfdd12a14aefb9b9bed471529736d152. Nov 5 15:49:02.477756 containerd[1686]: time="2025-11-05T15:49:02.477726899Z" level=info msg="StartContainer for \"b151afc45ca8ae0df885d4279262c20ddfdd12a14aefb9b9bed471529736d152\" returns successfully" Nov 5 15:49:02.789338 systemd-networkd[1579]: calia1ca5da0343: Gained IPv6LL Nov 5 15:49:02.795194 containerd[1686]: time="2025-11-05T15:49:02.795160969Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:02.795494 containerd[1686]: time="2025-11-05T15:49:02.795472704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:49:02.795547 containerd[1686]: time="2025-11-05T15:49:02.795534852Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:49:02.795666 kubelet[2996]: E1105 15:49:02.795624 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:49:02.795666 kubelet[2996]: E1105 15:49:02.795660 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:49:02.795857 kubelet[2996]: E1105 15:49:02.795829 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpvz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9hj2w_calico-system(5c5e9d05-7eb8-471e-907b-c18b5992bb51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:02.796158 containerd[1686]: time="2025-11-05T15:49:02.795924022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:49:02.797643 kubelet[2996]: E1105 15:49:02.797568 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9hj2w" podUID="5c5e9d05-7eb8-471e-907b-c18b5992bb51" Nov 5 15:49:03.159005 kubelet[2996]: E1105 15:49:03.158532 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pv5b2" podUID="9045f224-fdd2-4555-a4fe-fb613a1c7ed0" Nov 5 15:49:03.160490 kubelet[2996]: E1105 15:49:03.159084 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9hj2w" podUID="5c5e9d05-7eb8-471e-907b-c18b5992bb51" Nov 5 15:49:03.160490 kubelet[2996]: E1105 15:49:03.160227 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-5xswp" podUID="1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e" Nov 5 15:49:03.190855 kubelet[2996]: I1105 15:49:03.190326 2996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d4wc4" podStartSLOduration=38.190313008 podStartE2EDuration="38.190313008s" podCreationTimestamp="2025-11-05 15:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:49:03.189826738 +0000 UTC m=+43.498130763" 
watchObservedRunningTime="2025-11-05 15:49:03.190313008 +0000 UTC m=+43.498617027" Nov 5 15:49:03.231552 containerd[1686]: time="2025-11-05T15:49:03.231485863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:03.237537 systemd-networkd[1579]: calie891d374243: Gained IPv6LL Nov 5 15:49:03.242334 containerd[1686]: time="2025-11-05T15:49:03.242264136Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:49:03.242334 containerd[1686]: time="2025-11-05T15:49:03.242312272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:03.242440 kubelet[2996]: E1105 15:49:03.242407 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:03.242471 kubelet[2996]: E1105 15:49:03.242445 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:03.242548 kubelet[2996]: E1105 15:49:03.242522 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8j4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-556695d997-rtkds_calico-apiserver(99dbe593-4354-4ab6-ba7c-be3559d541a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:03.243758 kubelet[2996]: E1105 15:49:03.243736 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-rtkds" podUID="99dbe593-4354-4ab6-ba7c-be3559d541a3" Nov 5 15:49:04.069185 systemd-networkd[1579]: calib794d36ecba: Gained IPv6LL Nov 5 15:49:04.160790 kubelet[2996]: E1105 15:49:04.160763 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-rtkds" podUID="99dbe593-4354-4ab6-ba7c-be3559d541a3" Nov 5 15:49:11.774979 containerd[1686]: time="2025-11-05T15:49:11.774599257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:49:12.124163 containerd[1686]: time="2025-11-05T15:49:12.124071430Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:12.134469 containerd[1686]: 
time="2025-11-05T15:49:12.134386556Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:49:12.134469 containerd[1686]: time="2025-11-05T15:49:12.134438226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:49:12.134588 kubelet[2996]: E1105 15:49:12.134566 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:49:12.134797 kubelet[2996]: E1105 15:49:12.134601 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:49:12.134797 kubelet[2996]: E1105 15:49:12.134684 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6snb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b57c799-kk9c4_calico-system(820085b5-0e01-4ac8-ba7a-c7255a8764f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:12.135920 kubelet[2996]: E1105 15:49:12.135896 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b57c799-kk9c4" podUID="820085b5-0e01-4ac8-ba7a-c7255a8764f6" Nov 5 15:49:13.774134 containerd[1686]: time="2025-11-05T15:49:13.773702023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:49:14.150006 containerd[1686]: 
time="2025-11-05T15:49:14.149759495Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:14.150309 containerd[1686]: time="2025-11-05T15:49:14.150242886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:49:14.150354 containerd[1686]: time="2025-11-05T15:49:14.150306887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:49:14.150455 kubelet[2996]: E1105 15:49:14.150428 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:49:14.150883 kubelet[2996]: E1105 15:49:14.150676 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:49:14.150883 kubelet[2996]: E1105 15:49:14.150849 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2eef99ffd7d7435483175276c9559998,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ghxh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-794d68bb9-6lxsm_calico-system(0dc71d36-e361-4406-9e29-3e94a7056136): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:14.151695 containerd[1686]: time="2025-11-05T15:49:14.151675696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:49:14.499262 
containerd[1686]: time="2025-11-05T15:49:14.499231391Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:14.499585 containerd[1686]: time="2025-11-05T15:49:14.499565941Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:49:14.499630 containerd[1686]: time="2025-11-05T15:49:14.499618185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:14.499728 kubelet[2996]: E1105 15:49:14.499705 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:14.499763 kubelet[2996]: E1105 15:49:14.499736 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:14.499943 kubelet[2996]: E1105 15:49:14.499898 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xw72x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66dbdd57f9-thx98_calico-apiserver(0fbf3e00-38e0-4753-acbc-319c3080ae42): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:14.500103 containerd[1686]: time="2025-11-05T15:49:14.500024712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:49:14.501396 kubelet[2996]: E1105 15:49:14.501376 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dbdd57f9-thx98" podUID="0fbf3e00-38e0-4753-acbc-319c3080ae42" Nov 5 15:49:14.844632 containerd[1686]: time="2025-11-05T15:49:14.844544042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:14.845170 containerd[1686]: time="2025-11-05T15:49:14.845142277Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:49:14.845228 containerd[1686]: time="2025-11-05T15:49:14.845211726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:49:14.845440 kubelet[2996]: E1105 15:49:14.845379 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:49:14.845440 kubelet[2996]: E1105 15:49:14.845411 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:49:14.845668 kubelet[2996]: E1105 15:49:14.845599 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghxh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capa
bilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-794d68bb9-6lxsm_calico-system(0dc71d36-e361-4406-9e29-3e94a7056136): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:14.845908 containerd[1686]: time="2025-11-05T15:49:14.845821492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:49:14.847653 kubelet[2996]: E1105 15:49:14.847615 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-794d68bb9-6lxsm" podUID="0dc71d36-e361-4406-9e29-3e94a7056136" Nov 5 15:49:15.175209 containerd[1686]: 
time="2025-11-05T15:49:15.175176626Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:15.175828 containerd[1686]: time="2025-11-05T15:49:15.175788145Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:49:15.175931 containerd[1686]: time="2025-11-05T15:49:15.175828785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:15.176107 kubelet[2996]: E1105 15:49:15.176068 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:15.176358 kubelet[2996]: E1105 15:49:15.176130 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:15.176358 kubelet[2996]: E1105 15:49:15.176315 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8j4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556695d997-rtkds_calico-apiserver(99dbe593-4354-4ab6-ba7c-be3559d541a3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:15.176641 containerd[1686]: time="2025-11-05T15:49:15.176624686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:49:15.177484 kubelet[2996]: E1105 15:49:15.177462 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-rtkds" podUID="99dbe593-4354-4ab6-ba7c-be3559d541a3" Nov 5 15:49:15.578934 containerd[1686]: time="2025-11-05T15:49:15.578722663Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:15.579285 containerd[1686]: time="2025-11-05T15:49:15.579172379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:49:15.579285 containerd[1686]: time="2025-11-05T15:49:15.579210132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:15.579512 kubelet[2996]: E1105 15:49:15.579474 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:49:15.579585 kubelet[2996]: E1105 15:49:15.579523 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:49:15.580448 kubelet[2996]: E1105 15:49:15.579643 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fljrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubP
ath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pv5b2_calico-system(9045f224-fdd2-4555-a4fe-fb613a1c7ed0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:15.581159 kubelet[2996]: E1105 15:49:15.581142 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pv5b2" podUID="9045f224-fdd2-4555-a4fe-fb613a1c7ed0" Nov 5 15:49:15.774656 containerd[1686]: time="2025-11-05T15:49:15.774425793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:49:16.180887 containerd[1686]: time="2025-11-05T15:49:16.180857461Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:16.182638 containerd[1686]: time="2025-11-05T15:49:16.182610812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:49:16.182724 containerd[1686]: time="2025-11-05T15:49:16.182652736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:16.182775 kubelet[2996]: E1105 15:49:16.182724 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:16.183055 kubelet[2996]: E1105 15:49:16.182770 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:16.183055 kubelet[2996]: E1105 15:49:16.182863 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p28c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556695d997-5xswp_calico-apiserver(1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:16.183969 kubelet[2996]: E1105 15:49:16.183942 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-5xswp" podUID="1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e" Nov 5 15:49:18.774469 containerd[1686]: time="2025-11-05T15:49:18.773724968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:49:19.145950 containerd[1686]: 
time="2025-11-05T15:49:19.145869712Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:19.148572 containerd[1686]: time="2025-11-05T15:49:19.148523830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:49:19.148660 containerd[1686]: time="2025-11-05T15:49:19.148632274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:49:19.148809 kubelet[2996]: E1105 15:49:19.148772 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:49:19.149065 kubelet[2996]: E1105 15:49:19.148818 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:49:19.149065 kubelet[2996]: E1105 15:49:19.148957 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpvz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9hj2w_calico-system(5c5e9d05-7eb8-471e-907b-c18b5992bb51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:19.151604 containerd[1686]: time="2025-11-05T15:49:19.151570802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:49:19.504899 containerd[1686]: time="2025-11-05T15:49:19.504848666Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:19.507464 containerd[1686]: time="2025-11-05T15:49:19.507445403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:49:19.507574 containerd[1686]: time="2025-11-05T15:49:19.507520358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:49:19.507636 kubelet[2996]: E1105 15:49:19.507609 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:49:19.507677 kubelet[2996]: E1105 15:49:19.507651 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:49:19.507752 kubelet[2996]: E1105 
15:49:19.507726 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpvz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-9hj2w_calico-system(5c5e9d05-7eb8-471e-907b-c18b5992bb51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:19.509053 kubelet[2996]: E1105 15:49:19.509031 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9hj2w" podUID="5c5e9d05-7eb8-471e-907b-c18b5992bb51" Nov 5 15:49:26.772899 kubelet[2996]: E1105 15:49:26.772846 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-5xswp" podUID="1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e" Nov 5 15:49:27.775736 kubelet[2996]: E1105 15:49:27.775588 2996 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-rtkds" podUID="99dbe593-4354-4ab6-ba7c-be3559d541a3" Nov 5 15:49:27.776409 kubelet[2996]: E1105 15:49:27.776365 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b57c799-kk9c4" podUID="820085b5-0e01-4ac8-ba7a-c7255a8764f6" Nov 5 15:49:27.776586 kubelet[2996]: E1105 15:49:27.776281 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dbdd57f9-thx98" podUID="0fbf3e00-38e0-4753-acbc-319c3080ae42" Nov 5 15:49:27.996457 containerd[1686]: time="2025-11-05T15:49:27.996430438Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"443c5b7e7b3751d7b453e391a592cbc3a9a0aadb25cff9a4826be2f933d92c9b\" id:\"a3f569766868bcb817932b91fa7928cb831e95bd008397405e743b2d4b8387a3\" pid:5128 exited_at:{seconds:1762357767 nanos:995993456}" Nov 5 15:49:29.774653 kubelet[2996]: E1105 15:49:29.774608 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-794d68bb9-6lxsm" podUID="0dc71d36-e361-4406-9e29-3e94a7056136" Nov 5 15:49:30.773142 kubelet[2996]: E1105 15:49:30.772964 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pv5b2" podUID="9045f224-fdd2-4555-a4fe-fb613a1c7ed0" Nov 5 15:49:33.773938 kubelet[2996]: E1105 15:49:33.773895 2996 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9hj2w" podUID="5c5e9d05-7eb8-471e-907b-c18b5992bb51" Nov 5 15:49:38.489182 systemd[1]: Started sshd@9-139.178.70.100:22-139.178.89.65:46280.service - OpenSSH per-connection server daemon (139.178.89.65:46280). Nov 5 15:49:38.595508 sshd[5150]: Accepted publickey for core from 139.178.89.65 port 46280 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:49:38.597141 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:38.600584 systemd-logind[1651]: New session 10 of user core. Nov 5 15:49:38.606383 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 15:49:38.773734 containerd[1686]: time="2025-11-05T15:49:38.773421861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:49:39.093821 sshd[5153]: Connection closed by 139.178.89.65 port 46280 Nov 5 15:49:39.094252 sshd-session[5150]: pam_unix(sshd:session): session closed for user core Nov 5 15:49:39.099129 systemd[1]: sshd@9-139.178.70.100:22-139.178.89.65:46280.service: Deactivated successfully. 
Nov 5 15:49:39.100622 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 15:49:39.101550 systemd-logind[1651]: Session 10 logged out. Waiting for processes to exit. Nov 5 15:49:39.102201 systemd-logind[1651]: Removed session 10. Nov 5 15:49:39.117377 containerd[1686]: time="2025-11-05T15:49:39.117342926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:39.124839 containerd[1686]: time="2025-11-05T15:49:39.124804685Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:49:39.124920 containerd[1686]: time="2025-11-05T15:49:39.124871545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:39.125006 kubelet[2996]: E1105 15:49:39.124978 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:39.125954 kubelet[2996]: E1105 15:49:39.125014 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:39.125954 kubelet[2996]: E1105 15:49:39.125136 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xw72x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66dbdd57f9-thx98_calico-apiserver(0fbf3e00-38e0-4753-acbc-319c3080ae42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:39.126269 kubelet[2996]: E1105 15:49:39.126243 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dbdd57f9-thx98" podUID="0fbf3e00-38e0-4753-acbc-319c3080ae42" Nov 5 15:49:40.774404 containerd[1686]: time="2025-11-05T15:49:40.774177250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:49:41.172949 containerd[1686]: 
time="2025-11-05T15:49:41.172915699Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:41.179920 containerd[1686]: time="2025-11-05T15:49:41.179849279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:49:41.179920 containerd[1686]: time="2025-11-05T15:49:41.179903420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:41.180027 kubelet[2996]: E1105 15:49:41.179992 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:41.180278 kubelet[2996]: E1105 15:49:41.180025 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:41.180278 kubelet[2996]: E1105 15:49:41.180140 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p28c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556695d997-5xswp_calico-apiserver(1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:41.182011 kubelet[2996]: E1105 15:49:41.181989 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-5xswp" podUID="1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e" Nov 5 15:49:41.775389 containerd[1686]: time="2025-11-05T15:49:41.774983222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:49:42.155386 containerd[1686]: time="2025-11-05T15:49:42.155024418Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:42.155677 containerd[1686]: time="2025-11-05T15:49:42.155613268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:49:42.155677 containerd[1686]: time="2025-11-05T15:49:42.155641329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:49:42.156188 kubelet[2996]: E1105 15:49:42.155928 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:49:42.156188 kubelet[2996]: E1105 15:49:42.155968 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:49:42.156290 kubelet[2996]: E1105 15:49:42.156187 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6snb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},
LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b57c799-kk9c4_calico-system(820085b5-0e01-4ac8-ba7a-c7255a8764f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:42.156897 containerd[1686]: time="2025-11-05T15:49:42.156530533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:49:42.158690 kubelet[2996]: E1105 15:49:42.157985 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b57c799-kk9c4" podUID="820085b5-0e01-4ac8-ba7a-c7255a8764f6" Nov 5 15:49:42.574539 containerd[1686]: time="2025-11-05T15:49:42.574507753Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:42.576266 containerd[1686]: time="2025-11-05T15:49:42.576238429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:49:42.576371 containerd[1686]: time="2025-11-05T15:49:42.576292409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:42.576420 kubelet[2996]: E1105 15:49:42.576395 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:42.576675 kubelet[2996]: E1105 15:49:42.576429 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:42.576675 kubelet[2996]: E1105 15:49:42.576505 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8j4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556695d997-rtkds_calico-apiserver(99dbe593-4354-4ab6-ba7c-be3559d541a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:42.577642 kubelet[2996]: E1105 15:49:42.577617 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-rtkds" podUID="99dbe593-4354-4ab6-ba7c-be3559d541a3" Nov 5 15:49:43.773442 containerd[1686]: time="2025-11-05T15:49:43.773241764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:49:44.105453 systemd[1]: Started 
sshd@10-139.178.70.100:22-139.178.89.65:46290.service - OpenSSH per-connection server daemon (139.178.89.65:46290). Nov 5 15:49:44.136009 containerd[1686]: time="2025-11-05T15:49:44.135909720Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:44.138705 containerd[1686]: time="2025-11-05T15:49:44.138680306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:49:44.138758 containerd[1686]: time="2025-11-05T15:49:44.138737721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:44.138844 kubelet[2996]: E1105 15:49:44.138820 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:49:44.139012 kubelet[2996]: E1105 15:49:44.138851 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:49:44.139012 kubelet[2996]: E1105 15:49:44.138932 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fljrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pv5b2_calico-system(9045f224-fdd2-4555-a4fe-fb613a1c7ed0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:44.140025 kubelet[2996]: E1105 15:49:44.140008 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pv5b2" podUID="9045f224-fdd2-4555-a4fe-fb613a1c7ed0" Nov 5 15:49:44.171101 sshd[5168]: Accepted publickey for core from 139.178.89.65 port 46290 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:49:44.170780 sshd-session[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 
15:49:44.173569 systemd-logind[1651]: New session 11 of user core. Nov 5 15:49:44.179475 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 15:49:44.343114 sshd[5171]: Connection closed by 139.178.89.65 port 46290 Nov 5 15:49:44.345219 sshd-session[5168]: pam_unix(sshd:session): session closed for user core Nov 5 15:49:44.347258 systemd[1]: sshd@10-139.178.70.100:22-139.178.89.65:46290.service: Deactivated successfully. Nov 5 15:49:44.349186 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 15:49:44.350792 systemd-logind[1651]: Session 11 logged out. Waiting for processes to exit. Nov 5 15:49:44.353096 systemd-logind[1651]: Removed session 11. Nov 5 15:49:44.780563 containerd[1686]: time="2025-11-05T15:49:44.780509279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:49:45.129482 containerd[1686]: time="2025-11-05T15:49:45.128953148Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:45.137175 containerd[1686]: time="2025-11-05T15:49:45.137067076Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:49:45.137175 containerd[1686]: time="2025-11-05T15:49:45.137106205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:49:45.137306 kubelet[2996]: E1105 15:49:45.137276 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:49:45.137351 kubelet[2996]: 
E1105 15:49:45.137315 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:49:45.138097 kubelet[2996]: E1105 15:49:45.137433 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2eef99ffd7d7435483175276c9559998,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ghxh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-794d68bb9-6lxsm_calico-system(0dc71d36-e361-4406-9e29-3e94a7056136): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:45.139705 containerd[1686]: time="2025-11-05T15:49:45.139690372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:49:45.542209 containerd[1686]: time="2025-11-05T15:49:45.541945574Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:45.555107 containerd[1686]: time="2025-11-05T15:49:45.555067593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:49:45.555251 containerd[1686]: time="2025-11-05T15:49:45.555189169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:49:45.555420 kubelet[2996]: E1105 15:49:45.555396 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:49:45.555787 kubelet[2996]: E1105 15:49:45.555653 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:49:45.555787 kubelet[2996]: E1105 15:49:45.555747 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghxh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-794d68bb9-6lxsm_calico-system(0dc71d36-e361-4406-9e29-3e94a7056136): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:45.561708 kubelet[2996]: E1105 15:49:45.557447 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-794d68bb9-6lxsm" podUID="0dc71d36-e361-4406-9e29-3e94a7056136" Nov 5 15:49:46.774009 containerd[1686]: time="2025-11-05T15:49:46.773950958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:49:47.147473 containerd[1686]: time="2025-11-05T15:49:47.147282795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:47.154686 containerd[1686]: time="2025-11-05T15:49:47.154655040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:49:47.154759 containerd[1686]: time="2025-11-05T15:49:47.154719566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:49:47.155337 kubelet[2996]: E1105 15:49:47.155280 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:49:47.155554 kubelet[2996]: E1105 15:49:47.155359 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:49:47.155554 kubelet[2996]: E1105 15:49:47.155438 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpvz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9hj2w_calico-system(5c5e9d05-7eb8-471e-907b-c18b5992bb51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:47.162013 containerd[1686]: time="2025-11-05T15:49:47.158471014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:49:47.509361 containerd[1686]: time="2025-11-05T15:49:47.509329599Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:47.512831 containerd[1686]: time="2025-11-05T15:49:47.512804421Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:49:47.512895 containerd[1686]: time="2025-11-05T15:49:47.512860405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:49:47.512983 kubelet[2996]: E1105 15:49:47.512965 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:49:47.513046 kubelet[2996]: E1105 15:49:47.513031 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:49:47.513177 kubelet[2996]: E1105 
15:49:47.513150 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpvz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-9hj2w_calico-system(5c5e9d05-7eb8-471e-907b-c18b5992bb51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:49:47.514758 kubelet[2996]: E1105 15:49:47.514732 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9hj2w" podUID="5c5e9d05-7eb8-471e-907b-c18b5992bb51"
Nov 5 15:49:49.353188 systemd[1]: Started sshd@11-139.178.70.100:22-139.178.89.65:42208.service - OpenSSH per-connection server daemon (139.178.89.65:42208).
Nov 5 15:49:49.534749 sshd[5184]: Accepted publickey for core from 139.178.89.65 port 42208 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:49:49.535671 sshd-session[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:49:49.539140 systemd-logind[1651]: New session 12 of user core.
Nov 5 15:49:49.547209 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 5 15:49:49.676065 sshd[5187]: Connection closed by 139.178.89.65 port 42208
Nov 5 15:49:49.676834 sshd-session[5184]: pam_unix(sshd:session): session closed for user core
Nov 5 15:49:49.682251 systemd[1]: sshd@11-139.178.70.100:22-139.178.89.65:42208.service: Deactivated successfully.
Nov 5 15:49:49.683220 systemd[1]: session-12.scope: Deactivated successfully.
Nov 5 15:49:49.683693 systemd-logind[1651]: Session 12 logged out. Waiting for processes to exit.
Nov 5 15:49:49.685221 systemd[1]: Started sshd@12-139.178.70.100:22-139.178.89.65:42222.service - OpenSSH per-connection server daemon (139.178.89.65:42222).
Nov 5 15:49:49.686742 systemd-logind[1651]: Removed session 12.
Nov 5 15:49:49.725927 sshd[5200]: Accepted publickey for core from 139.178.89.65 port 42222 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:49:49.726834 sshd-session[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:49:49.729977 systemd-logind[1651]: New session 13 of user core.
Nov 5 15:49:49.740218 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 5 15:49:49.965065 sshd[5203]: Connection closed by 139.178.89.65 port 42222
Nov 5 15:49:49.966133 sshd-session[5200]: pam_unix(sshd:session): session closed for user core
Nov 5 15:49:49.976384 systemd[1]: Started sshd@13-139.178.70.100:22-139.178.89.65:42238.service - OpenSSH per-connection server daemon (139.178.89.65:42238).
Nov 5 15:49:49.976849 systemd[1]: sshd@12-139.178.70.100:22-139.178.89.65:42222.service: Deactivated successfully.
Nov 5 15:49:49.978421 systemd[1]: session-13.scope: Deactivated successfully.
Nov 5 15:49:49.979413 systemd-logind[1651]: Session 13 logged out. Waiting for processes to exit.
Nov 5 15:49:49.982126 systemd-logind[1651]: Removed session 13.
Nov 5 15:49:50.262242 sshd[5209]: Accepted publickey for core from 139.178.89.65 port 42238 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:49:50.266940 sshd-session[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:49:50.274274 systemd-logind[1651]: New session 14 of user core.
Nov 5 15:49:50.279257 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 5 15:49:50.451556 sshd[5218]: Connection closed by 139.178.89.65 port 42238
Nov 5 15:49:50.451401 sshd-session[5209]: pam_unix(sshd:session): session closed for user core
Nov 5 15:49:50.455645 systemd-logind[1651]: Session 14 logged out. Waiting for processes to exit.
Nov 5 15:49:50.455709 systemd[1]: sshd@13-139.178.70.100:22-139.178.89.65:42238.service: Deactivated successfully.
Nov 5 15:49:50.457175 systemd[1]: session-14.scope: Deactivated successfully.
Nov 5 15:49:50.458615 systemd-logind[1651]: Removed session 14.
Nov 5 15:49:51.774177 kubelet[2996]: E1105 15:49:51.773790 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-5xswp" podUID="1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e"
Nov 5 15:49:52.773021 kubelet[2996]: E1105 15:49:52.772748 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dbdd57f9-thx98" podUID="0fbf3e00-38e0-4753-acbc-319c3080ae42"
Nov 5 15:49:53.774552 kubelet[2996]: E1105 15:49:53.774521 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b57c799-kk9c4" podUID="820085b5-0e01-4ac8-ba7a-c7255a8764f6"
Nov 5 15:49:55.463272 systemd[1]: Started sshd@14-139.178.70.100:22-139.178.89.65:42250.service - OpenSSH per-connection server daemon (139.178.89.65:42250).
Nov 5 15:49:55.514649 sshd[5234]: Accepted publickey for core from 139.178.89.65 port 42250 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:49:55.516302 sshd-session[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:49:55.520734 systemd-logind[1651]: New session 15 of user core.
Nov 5 15:49:55.526212 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 5 15:49:55.630340 sshd[5239]: Connection closed by 139.178.89.65 port 42250
Nov 5 15:49:55.630806 sshd-session[5234]: pam_unix(sshd:session): session closed for user core
Nov 5 15:49:55.634617 systemd[1]: sshd@14-139.178.70.100:22-139.178.89.65:42250.service: Deactivated successfully.
Nov 5 15:49:55.636570 systemd[1]: session-15.scope: Deactivated successfully.
Nov 5 15:49:55.639886 systemd-logind[1651]: Session 15 logged out.
Waiting for processes to exit.
Nov 5 15:49:55.640624 systemd-logind[1651]: Removed session 15.
Nov 5 15:49:55.774864 kubelet[2996]: E1105 15:49:55.774796 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-rtkds" podUID="99dbe593-4354-4ab6-ba7c-be3559d541a3"
Nov 5 15:49:57.994000 containerd[1686]: time="2025-11-05T15:49:57.993897767Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443c5b7e7b3751d7b453e391a592cbc3a9a0aadb25cff9a4826be2f933d92c9b\" id:\"cf2802fd995ffb540a4ff75bd5c2e1c3abe0d6e6118eb5fe1a0458601a18829c\" pid:5267 exit_status:1 exited_at:{seconds:1762357797 nanos:993683137}"
Nov 5 15:49:58.774072 kubelet[2996]: E1105 15:49:58.773877 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pv5b2" podUID="9045f224-fdd2-4555-a4fe-fb613a1c7ed0"
Nov 5 15:50:00.640639 systemd[1]: Started sshd@15-139.178.70.100:22-139.178.89.65:41486.service - OpenSSH per-connection server daemon (139.178.89.65:41486).
Nov 5 15:50:00.734692 sshd[5280]: Accepted publickey for core from 139.178.89.65 port 41486 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:50:00.735645 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:50:00.739066 systemd-logind[1651]: New session 16 of user core.
Nov 5 15:50:00.748303 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 5 15:50:00.773671 kubelet[2996]: E1105 15:50:00.773635 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-794d68bb9-6lxsm" podUID="0dc71d36-e361-4406-9e29-3e94a7056136"
Nov 5 15:50:00.888498 sshd[5283]: Connection closed by 139.178.89.65 port 41486
Nov 5 15:50:00.888845 sshd-session[5280]: pam_unix(sshd:session): session closed for user core
Nov 5 15:50:00.892226 systemd[1]: sshd@15-139.178.70.100:22-139.178.89.65:41486.service: Deactivated successfully.
Nov 5 15:50:00.893606 systemd[1]: session-16.scope: Deactivated successfully.
Nov 5 15:50:00.894396 systemd-logind[1651]: Session 16 logged out. Waiting for processes to exit.
Nov 5 15:50:00.895782 systemd-logind[1651]: Removed session 16.
Nov 5 15:50:02.773330 kubelet[2996]: E1105 15:50:02.773286 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9hj2w" podUID="5c5e9d05-7eb8-471e-907b-c18b5992bb51"
Nov 5 15:50:05.773888 kubelet[2996]: E1105 15:50:05.773856 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dbdd57f9-thx98" podUID="0fbf3e00-38e0-4753-acbc-319c3080ae42"
Nov 5 15:50:05.899830 systemd[1]: Started sshd@16-139.178.70.100:22-139.178.89.65:41500.service - OpenSSH per-connection server daemon (139.178.89.65:41500).
Nov 5 15:50:05.978900 sshd[5296]: Accepted publickey for core from 139.178.89.65 port 41500 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:50:05.979739 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:50:05.982693 systemd-logind[1651]: New session 17 of user core.
Nov 5 15:50:05.989198 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 5 15:50:06.355550 sshd[5299]: Connection closed by 139.178.89.65 port 41500
Nov 5 15:50:06.355968 sshd-session[5296]: pam_unix(sshd:session): session closed for user core
Nov 5 15:50:06.358649 systemd-logind[1651]: Session 17 logged out. Waiting for processes to exit.
Nov 5 15:50:06.358784 systemd[1]: sshd@16-139.178.70.100:22-139.178.89.65:41500.service: Deactivated successfully.
Nov 5 15:50:06.360048 systemd[1]: session-17.scope: Deactivated successfully.
Nov 5 15:50:06.362033 systemd-logind[1651]: Removed session 17.
Nov 5 15:50:06.774148 kubelet[2996]: E1105 15:50:06.773543 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b57c799-kk9c4" podUID="820085b5-0e01-4ac8-ba7a-c7255a8764f6"
Nov 5 15:50:06.774304 kubelet[2996]: E1105 15:50:06.774182 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-5xswp" podUID="1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e"
Nov 5 15:50:07.773988 kubelet[2996]: E1105 15:50:07.773533 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-rtkds" podUID="99dbe593-4354-4ab6-ba7c-be3559d541a3"
Nov 5 15:50:11.367277 systemd[1]: Started sshd@17-139.178.70.100:22-139.178.89.65:50178.service - OpenSSH per-connection server daemon (139.178.89.65:50178).
Nov 5 15:50:11.454446 sshd[5311]: Accepted publickey for core from 139.178.89.65 port 50178 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:50:11.455341 sshd-session[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:50:11.461122 systemd-logind[1651]: New session 18 of user core.
Nov 5 15:50:11.463180 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 5 15:50:11.605752 sshd[5314]: Connection closed by 139.178.89.65 port 50178
Nov 5 15:50:11.605696 sshd-session[5311]: pam_unix(sshd:session): session closed for user core
Nov 5 15:50:11.612349 systemd[1]: sshd@17-139.178.70.100:22-139.178.89.65:50178.service: Deactivated successfully.
Nov 5 15:50:11.613899 systemd[1]: session-18.scope: Deactivated successfully.
Nov 5 15:50:11.614792 systemd-logind[1651]: Session 18 logged out. Waiting for processes to exit.
Nov 5 15:50:11.617582 systemd[1]: Started sshd@18-139.178.70.100:22-139.178.89.65:50186.service - OpenSSH per-connection server daemon (139.178.89.65:50186).
Nov 5 15:50:11.619321 systemd-logind[1651]: Removed session 18.
Nov 5 15:50:11.657039 sshd[5326]: Accepted publickey for core from 139.178.89.65 port 50186 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:50:11.657856 sshd-session[5326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:50:11.663349 systemd-logind[1651]: New session 19 of user core.
Nov 5 15:50:11.667198 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 5 15:50:12.398364 sshd[5329]: Connection closed by 139.178.89.65 port 50186
Nov 5 15:50:12.410047 systemd[1]: Started sshd@19-139.178.70.100:22-139.178.89.65:50200.service - OpenSSH per-connection server daemon (139.178.89.65:50200).
Nov 5 15:50:12.429854 sshd-session[5326]: pam_unix(sshd:session): session closed for user core
Nov 5 15:50:12.440294 systemd[1]: sshd@18-139.178.70.100:22-139.178.89.65:50186.service: Deactivated successfully.
Nov 5 15:50:12.441507 systemd[1]: session-19.scope: Deactivated successfully.
Nov 5 15:50:12.445705 systemd-logind[1651]: Session 19 logged out. Waiting for processes to exit.
Nov 5 15:50:12.446533 systemd-logind[1651]: Removed session 19.
Nov 5 15:50:12.503884 sshd[5336]: Accepted publickey for core from 139.178.89.65 port 50200 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:50:12.505239 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:50:12.508031 systemd-logind[1651]: New session 20 of user core.
Nov 5 15:50:12.515026 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 5 15:50:13.049861 sshd[5342]: Connection closed by 139.178.89.65 port 50200
Nov 5 15:50:13.049769 sshd-session[5336]: pam_unix(sshd:session): session closed for user core
Nov 5 15:50:13.059924 systemd[1]: sshd@19-139.178.70.100:22-139.178.89.65:50200.service: Deactivated successfully.
Nov 5 15:50:13.061718 systemd[1]: session-20.scope: Deactivated successfully.
Nov 5 15:50:13.063184 systemd-logind[1651]: Session 20 logged out. Waiting for processes to exit.
Nov 5 15:50:13.065390 systemd-logind[1651]: Removed session 20.
Nov 5 15:50:13.067605 systemd[1]: Started sshd@20-139.178.70.100:22-139.178.89.65:50208.service - OpenSSH per-connection server daemon (139.178.89.65:50208).
Nov 5 15:50:13.164908 sshd[5357]: Accepted publickey for core from 139.178.89.65 port 50208 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:50:13.165843 sshd-session[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:50:13.169138 systemd-logind[1651]: New session 21 of user core.
Nov 5 15:50:13.175422 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 5 15:50:13.516787 sshd[5363]: Connection closed by 139.178.89.65 port 50208
Nov 5 15:50:13.518404 sshd-session[5357]: pam_unix(sshd:session): session closed for user core
Nov 5 15:50:13.530227 systemd[1]: sshd@20-139.178.70.100:22-139.178.89.65:50208.service: Deactivated successfully.
Nov 5 15:50:13.533468 systemd[1]: session-21.scope: Deactivated successfully.
Nov 5 15:50:13.536214 systemd-logind[1651]: Session 21 logged out. Waiting for processes to exit.
Nov 5 15:50:13.537397 systemd-logind[1651]: Removed session 21.
Nov 5 15:50:13.538930 systemd[1]: Started sshd@21-139.178.70.100:22-139.178.89.65:50212.service - OpenSSH per-connection server daemon (139.178.89.65:50212).
Nov 5 15:50:13.597785 sshd[5372]: Accepted publickey for core from 139.178.89.65 port 50212 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:50:13.598782 sshd-session[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:50:13.602970 systemd-logind[1651]: New session 22 of user core.
Nov 5 15:50:13.609432 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 5 15:50:13.718913 sshd[5375]: Connection closed by 139.178.89.65 port 50212
Nov 5 15:50:13.720560 sshd-session[5372]: pam_unix(sshd:session): session closed for user core
Nov 5 15:50:13.722581 systemd[1]: sshd@21-139.178.70.100:22-139.178.89.65:50212.service: Deactivated successfully.
Nov 5 15:50:13.723932 systemd[1]: session-22.scope: Deactivated successfully.
Nov 5 15:50:13.724553 systemd-logind[1651]: Session 22 logged out. Waiting for processes to exit.
Nov 5 15:50:13.725605 systemd-logind[1651]: Removed session 22.
Nov 5 15:50:13.774047 kubelet[2996]: E1105 15:50:13.773729 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pv5b2" podUID="9045f224-fdd2-4555-a4fe-fb613a1c7ed0"
Nov 5 15:50:13.775661 kubelet[2996]: E1105 15:50:13.775620 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-794d68bb9-6lxsm" podUID="0dc71d36-e361-4406-9e29-3e94a7056136"
Nov 5 15:50:16.773810 kubelet[2996]: E1105 15:50:16.773685 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9hj2w" podUID="5c5e9d05-7eb8-471e-907b-c18b5992bb51"
Nov 5 15:50:18.730140 systemd[1]: Started sshd@22-139.178.70.100:22-139.178.89.65:58310.service - OpenSSH per-connection server daemon (139.178.89.65:58310).
Nov 5 15:50:18.799628 sshd[5395]: Accepted publickey for core from 139.178.89.65 port 58310 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:50:18.800476 sshd-session[5395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:50:18.804045 systemd-logind[1651]: New session 23 of user core.
Nov 5 15:50:18.809246 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 5 15:50:18.966637 sshd[5398]: Connection closed by 139.178.89.65 port 58310
Nov 5 15:50:18.967429 sshd-session[5395]: pam_unix(sshd:session): session closed for user core
Nov 5 15:50:18.971006 systemd-logind[1651]: Session 23 logged out. Waiting for processes to exit.
Nov 5 15:50:18.971082 systemd[1]: sshd@22-139.178.70.100:22-139.178.89.65:58310.service: Deactivated successfully.
Nov 5 15:50:18.972383 systemd[1]: session-23.scope: Deactivated successfully.
Nov 5 15:50:18.973691 systemd-logind[1651]: Removed session 23.
Nov 5 15:50:20.774174 containerd[1686]: time="2025-11-05T15:50:20.773938585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 5 15:50:21.186304 containerd[1686]: time="2025-11-05T15:50:21.186193960Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:50:21.186835 containerd[1686]: time="2025-11-05T15:50:21.186789515Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 5 15:50:21.186967 containerd[1686]: time="2025-11-05T15:50:21.186849638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 5 15:50:21.187247 kubelet[2996]: E1105 15:50:21.187145 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:50:21.187800 kubelet[2996]: E1105 15:50:21.187532 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:50:21.187800 kubelet[2996]: E1105 15:50:21.187630 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xw72x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66dbdd57f9-thx98_calico-apiserver(0fbf3e00-38e0-4753-acbc-319c3080ae42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:50:21.188812 kubelet[2996]: E1105 15:50:21.188789 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dbdd57f9-thx98" podUID="0fbf3e00-38e0-4753-acbc-319c3080ae42"
Nov 5 15:50:21.773889 kubelet[2996]: E1105 15:50:21.773858 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff:
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b57c799-kk9c4" podUID="820085b5-0e01-4ac8-ba7a-c7255a8764f6"
Nov 5 15:50:21.774493 containerd[1686]: time="2025-11-05T15:50:21.774294113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 5 15:50:21.775139 kubelet[2996]: E1105 15:50:21.774886 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-rtkds" podUID="99dbe593-4354-4ab6-ba7c-be3559d541a3"
Nov 5 15:50:22.163323 containerd[1686]: time="2025-11-05T15:50:22.163160473Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:50:22.164123 containerd[1686]: time="2025-11-05T15:50:22.163996824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 5 15:50:22.164123 containerd[1686]: time="2025-11-05T15:50:22.164073493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 5 15:50:22.164831 kubelet[2996]: E1105 15:50:22.164413 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:50:22.164831 kubelet[2996]: E1105 15:50:22.164459 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:50:22.164831 kubelet[2996]: E1105 15:50:22.164559 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p28c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556695d997-5xswp_calico-apiserver(1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:50:22.166186 kubelet[2996]: E1105 15:50:22.166150 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-5xswp" podUID="1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e" Nov 5 15:50:23.976215 systemd[1]: Started sshd@23-139.178.70.100:22-139.178.89.65:58314.service - OpenSSH per-connection server daemon (139.178.89.65:58314). Nov 5 15:50:24.039929 sshd[5413]: Accepted publickey for core from 139.178.89.65 port 58314 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:50:24.040964 sshd-session[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:24.044464 systemd-logind[1651]: New session 24 of user core. Nov 5 15:50:24.048223 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 5 15:50:24.155109 sshd[5416]: Connection closed by 139.178.89.65 port 58314 Nov 5 15:50:24.155531 sshd-session[5413]: pam_unix(sshd:session): session closed for user core Nov 5 15:50:24.157858 systemd-logind[1651]: Session 24 logged out. Waiting for processes to exit. Nov 5 15:50:24.158389 systemd[1]: sshd@23-139.178.70.100:22-139.178.89.65:58314.service: Deactivated successfully. Nov 5 15:50:24.159714 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 15:50:24.161226 systemd-logind[1651]: Removed session 24. 
Nov 5 15:50:24.773627 containerd[1686]: time="2025-11-05T15:50:24.773557554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:50:24.774735 kubelet[2996]: E1105 15:50:24.774569 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-794d68bb9-6lxsm" podUID="0dc71d36-e361-4406-9e29-3e94a7056136" Nov 5 15:50:25.363385 containerd[1686]: time="2025-11-05T15:50:25.363352601Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:50:25.363670 containerd[1686]: time="2025-11-05T15:50:25.363651519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:50:25.363940 containerd[1686]: time="2025-11-05T15:50:25.363696806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:50:25.363966 kubelet[2996]: E1105 15:50:25.363795 2996 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:50:25.363966 kubelet[2996]: E1105 15:50:25.363846 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:50:25.366318 kubelet[2996]: E1105 15:50:25.366270 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gol
dmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fljrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pv5b2_calico-system(9045f224-fdd2-4555-a4fe-fb613a1c7ed0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:50:25.368053 kubelet[2996]: E1105 15:50:25.368031 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pv5b2" podUID="9045f224-fdd2-4555-a4fe-fb613a1c7ed0" Nov 5 15:50:28.044475 containerd[1686]: time="2025-11-05T15:50:28.044443802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443c5b7e7b3751d7b453e391a592cbc3a9a0aadb25cff9a4826be2f933d92c9b\" id:\"78a2bde5a9dcd2794acd780806744bd55be48ea07bbb9740ffc157433ed6c588\" pid:5452 exited_at:{seconds:1762357828 nanos:44111189}" Nov 5 15:50:29.167018 systemd[1]: Started sshd@24-139.178.70.100:22-139.178.89.65:60726.service - OpenSSH per-connection server daemon (139.178.89.65:60726). Nov 5 15:50:29.341438 sshd[5465]: Accepted publickey for core from 139.178.89.65 port 60726 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:50:29.347116 sshd-session[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:29.354701 systemd-logind[1651]: New session 25 of user core. Nov 5 15:50:29.360259 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 5 15:50:29.582713 sshd[5468]: Connection closed by 139.178.89.65 port 60726 Nov 5 15:50:29.583039 sshd-session[5465]: pam_unix(sshd:session): session closed for user core Nov 5 15:50:29.585810 systemd-logind[1651]: Session 25 logged out. Waiting for processes to exit. Nov 5 15:50:29.586006 systemd[1]: sshd@24-139.178.70.100:22-139.178.89.65:60726.service: Deactivated successfully. Nov 5 15:50:29.587351 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 15:50:29.588526 systemd-logind[1651]: Removed session 25. 
Nov 5 15:50:31.775111 kubelet[2996]: E1105 15:50:31.775043 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dbdd57f9-thx98" podUID="0fbf3e00-38e0-4753-acbc-319c3080ae42" Nov 5 15:50:31.776138 containerd[1686]: time="2025-11-05T15:50:31.776112315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:50:32.136993 containerd[1686]: time="2025-11-05T15:50:32.136897693Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:50:32.137296 containerd[1686]: time="2025-11-05T15:50:32.137241702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:50:32.137346 containerd[1686]: time="2025-11-05T15:50:32.137298866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:50:32.137713 kubelet[2996]: E1105 15:50:32.137443 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:50:32.137713 kubelet[2996]: E1105 15:50:32.137484 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:50:32.137713 kubelet[2996]: E1105 15:50:32.137570 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpvz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9hj2w_calico-system(5c5e9d05-7eb8-471e-907b-c18b5992bb51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:50:32.139371 containerd[1686]: time="2025-11-05T15:50:32.139351922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:50:32.504803 containerd[1686]: time="2025-11-05T15:50:32.504450490Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:50:32.505193 containerd[1686]: time="2025-11-05T15:50:32.505166757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:50:32.505748 kubelet[2996]: E1105 15:50:32.505460 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:50:32.506241 kubelet[2996]: E1105 15:50:32.505836 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:50:32.506241 kubelet[2996]: E1105 15:50:32.506190 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpvz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault
,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9hj2w_calico-system(5c5e9d05-7eb8-471e-907b-c18b5992bb51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:50:32.507592 kubelet[2996]: E1105 15:50:32.507325 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9hj2w" podUID="5c5e9d05-7eb8-471e-907b-c18b5992bb51" Nov 5 15:50:32.513004 containerd[1686]: time="2025-11-05T15:50:32.505169408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:50:34.593171 systemd[1]: Started sshd@25-139.178.70.100:22-139.178.89.65:60736.service - OpenSSH per-connection server daemon (139.178.89.65:60736). 
Nov 5 15:50:34.667962 sshd[5494]: Accepted publickey for core from 139.178.89.65 port 60736 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:50:34.668786 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:34.675132 systemd-logind[1651]: New session 26 of user core. Nov 5 15:50:34.677231 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 5 15:50:34.786999 sshd[5497]: Connection closed by 139.178.89.65 port 60736 Nov 5 15:50:34.787393 sshd-session[5494]: pam_unix(sshd:session): session closed for user core Nov 5 15:50:34.790571 systemd-logind[1651]: Session 26 logged out. Waiting for processes to exit. Nov 5 15:50:34.790742 systemd[1]: sshd@25-139.178.70.100:22-139.178.89.65:60736.service: Deactivated successfully. Nov 5 15:50:34.792699 systemd[1]: session-26.scope: Deactivated successfully. Nov 5 15:50:34.794711 systemd-logind[1651]: Removed session 26. Nov 5 15:50:35.774642 kubelet[2996]: E1105 15:50:35.774580 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-5xswp" podUID="1353ebb0-4e2b-4c92-bf7e-cc5b1b71054e" Nov 5 15:50:35.776257 containerd[1686]: time="2025-11-05T15:50:35.776236315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:50:36.106636 containerd[1686]: time="2025-11-05T15:50:36.106471128Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:50:36.113185 containerd[1686]: time="2025-11-05T15:50:36.113151366Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:50:36.113255 containerd[1686]: time="2025-11-05T15:50:36.113217336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:50:36.113577 kubelet[2996]: E1105 15:50:36.113368 2996 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:50:36.113577 kubelet[2996]: E1105 15:50:36.113406 2996 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:50:36.113577 kubelet[2996]: E1105 15:50:36.113499 2996 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8j4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556695d997-rtkds_calico-apiserver(99dbe593-4354-4ab6-ba7c-be3559d541a3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:50:36.115010 kubelet[2996]: E1105 15:50:36.114951 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556695d997-rtkds" podUID="99dbe593-4354-4ab6-ba7c-be3559d541a3" Nov 5 15:50:36.773231 kubelet[2996]: E1105 15:50:36.773200 2996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pv5b2" podUID="9045f224-fdd2-4555-a4fe-fb613a1c7ed0" Nov 5 15:50:36.773354 containerd[1686]: time="2025-11-05T15:50:36.773300670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""