Nov 4 23:47:01.533053 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 22:00:22 -00 2025
Nov 4 23:47:01.534090 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:47:01.534100 kernel: Disabled fast string operations
Nov 4 23:47:01.534105 kernel: BIOS-provided physical RAM map:
Nov 4 23:47:01.534110 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Nov 4 23:47:01.534114 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Nov 4 23:47:01.534122 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Nov 4 23:47:01.534127 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Nov 4 23:47:01.534132 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Nov 4 23:47:01.534136 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Nov 4 23:47:01.534141 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Nov 4 23:47:01.534146 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Nov 4 23:47:01.534151 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Nov 4 23:47:01.534155 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Nov 4 23:47:01.534162 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Nov 4 23:47:01.534168 kernel: NX (Execute Disable) protection: active
Nov 4 23:47:01.534173 kernel: APIC: Static calls initialized
Nov 4 23:47:01.534178 kernel: SMBIOS 2.7 present.
Nov 4 23:47:01.534184 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Nov 4 23:47:01.534189 kernel: DMI: Memory slots populated: 1/128
Nov 4 23:47:01.534195 kernel: vmware: hypercall mode: 0x00
Nov 4 23:47:01.534201 kernel: Hypervisor detected: VMware
Nov 4 23:47:01.534206 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Nov 4 23:47:01.534211 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Nov 4 23:47:01.534216 kernel: vmware: using clock offset of 4980492300 ns
Nov 4 23:47:01.534221 kernel: tsc: Detected 3408.000 MHz processor
Nov 4 23:47:01.534227 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 23:47:01.534234 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 23:47:01.534239 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Nov 4 23:47:01.534246 kernel: total RAM covered: 3072M
Nov 4 23:47:01.534252 kernel: Found optimal setting for mtrr clean up
Nov 4 23:47:01.534258 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Nov 4 23:47:01.534264 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Nov 4 23:47:01.534269 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 23:47:01.534275 kernel: Using GB pages for direct mapping
Nov 4 23:47:01.534281 kernel: ACPI: Early table checksum verification disabled
Nov 4 23:47:01.534286 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Nov 4 23:47:01.534293 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Nov 4 23:47:01.534299 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Nov 4 23:47:01.534304 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Nov 4 23:47:01.534312 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Nov 4 23:47:01.534318 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Nov 4 23:47:01.534324 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Nov 4 23:47:01.534330 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Nov 4 23:47:01.534336 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Nov 4 23:47:01.534342 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Nov 4 23:47:01.534351 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Nov 4 23:47:01.534359 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Nov 4 23:47:01.534366 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Nov 4 23:47:01.534372 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Nov 4 23:47:01.534378 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Nov 4 23:47:01.534383 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Nov 4 23:47:01.534389 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Nov 4 23:47:01.534395 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Nov 4 23:47:01.534400 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Nov 4 23:47:01.534406 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Nov 4 23:47:01.534412 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Nov 4 23:47:01.534418 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Nov 4 23:47:01.534424 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 4 23:47:01.534429 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 4 23:47:01.534435 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Nov 4 23:47:01.534441 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff]
Nov 4 23:47:01.534447 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff]
Nov 4 23:47:01.534454 kernel: Zone ranges:
Nov 4 23:47:01.534460 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 23:47:01.534466 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Nov 4 23:47:01.534472 kernel: Normal empty
Nov 4 23:47:01.534478 kernel: Device empty
Nov 4 23:47:01.534483 kernel: Movable zone start for each node
Nov 4 23:47:01.534489 kernel: Early memory node ranges
Nov 4 23:47:01.534495 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Nov 4 23:47:01.534501 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Nov 4 23:47:01.534507 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Nov 4 23:47:01.534513 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Nov 4 23:47:01.534518 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 23:47:01.534524 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Nov 4 23:47:01.534530 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Nov 4 23:47:01.534536 kernel: ACPI: PM-Timer IO Port: 0x1008
Nov 4 23:47:01.534542 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Nov 4 23:47:01.534548 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Nov 4 23:47:01.534554 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Nov 4 23:47:01.534560 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Nov 4 23:47:01.534565 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Nov 4 23:47:01.534571 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Nov 4 23:47:01.534576 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Nov 4 23:47:01.534582 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Nov 4 23:47:01.534589 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Nov 4 23:47:01.534594 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Nov 4 23:47:01.534600 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Nov 4 23:47:01.534605 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Nov 4 23:47:01.534611 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Nov 4 23:47:01.534616 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Nov 4 23:47:01.534622 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Nov 4 23:47:01.534628 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Nov 4 23:47:01.534633 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Nov 4 23:47:01.534640 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Nov 4 23:47:01.534645 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Nov 4 23:47:01.534651 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Nov 4 23:47:01.534656 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Nov 4 23:47:01.534662 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Nov 4 23:47:01.534668 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Nov 4 23:47:01.534673 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Nov 4 23:47:01.534679 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Nov 4 23:47:01.534685 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Nov 4 23:47:01.534691 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Nov 4 23:47:01.534696 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Nov 4 23:47:01.534702 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Nov 4 23:47:01.534707 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Nov 4 23:47:01.534713 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Nov 4 23:47:01.534719 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Nov 4 23:47:01.534724 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Nov 4 23:47:01.534731 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Nov 4 23:47:01.534736 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Nov 4 23:47:01.534742 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Nov 4 23:47:01.534747 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Nov 4 23:47:01.534753 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Nov 4 23:47:01.534759 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Nov 4 23:47:01.534765 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Nov 4 23:47:01.534774 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Nov 4 23:47:01.534780 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Nov 4 23:47:01.534786 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Nov 4 23:47:01.534793 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Nov 4 23:47:01.534799 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Nov 4 23:47:01.534805 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Nov 4 23:47:01.534811 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Nov 4 23:47:01.534817 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Nov 4 23:47:01.534823 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Nov 4 23:47:01.534829 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Nov 4 23:47:01.534835 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Nov 4 23:47:01.534841 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Nov 4 23:47:01.534847 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Nov 4 23:47:01.534853 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Nov 4 23:47:01.534859 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Nov 4 23:47:01.534865 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Nov 4 23:47:01.534872 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Nov 4 23:47:01.534878 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Nov 4 23:47:01.534884 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Nov 4 23:47:01.534890 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Nov 4 23:47:01.534896 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Nov 4 23:47:01.534902 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Nov 4 23:47:01.534908 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Nov 4 23:47:01.534913 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Nov 4 23:47:01.534920 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Nov 4 23:47:01.534926 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Nov 4 23:47:01.534932 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Nov 4 23:47:01.534938 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Nov 4 23:47:01.534944 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Nov 4 23:47:01.534950 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Nov 4 23:47:01.534956 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Nov 4 23:47:01.534961 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Nov 4 23:47:01.534968 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Nov 4 23:47:01.534974 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Nov 4 23:47:01.534980 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Nov 4 23:47:01.534986 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Nov 4 23:47:01.534992 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Nov 4 23:47:01.534998 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Nov 4 23:47:01.535004 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Nov 4 23:47:01.535010 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Nov 4 23:47:01.535015 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Nov 4 23:47:01.535023 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Nov 4 23:47:01.535029 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Nov 4 23:47:01.535035 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Nov 4 23:47:01.535040 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Nov 4 23:47:01.535046 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Nov 4 23:47:01.535052 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Nov 4 23:47:01.535058 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Nov 4 23:47:01.535064 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Nov 4 23:47:01.535749 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Nov 4 23:47:01.535762 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Nov 4 23:47:01.535768 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Nov 4 23:47:01.535774 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Nov 4 23:47:01.535780 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Nov 4 23:47:01.535785 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Nov 4 23:47:01.535791 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Nov 4 23:47:01.535797 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Nov 4 23:47:01.535805 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Nov 4 23:47:01.535811 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Nov 4 23:47:01.535817 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Nov 4 23:47:01.535823 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Nov 4 23:47:01.535829 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Nov 4 23:47:01.535835 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Nov 4 23:47:01.535841 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Nov 4 23:47:01.535847 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Nov 4 23:47:01.535854 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Nov 4 23:47:01.535861 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Nov 4 23:47:01.535866 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Nov 4 23:47:01.535872 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Nov 4 23:47:01.535878 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Nov 4 23:47:01.535884 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Nov 4 23:47:01.535890 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Nov 4 23:47:01.535896 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Nov 4 23:47:01.535903 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Nov 4 23:47:01.535909 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Nov 4 23:47:01.535915 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Nov 4 23:47:01.535920 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Nov 4 23:47:01.535926 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Nov 4 23:47:01.535932 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Nov 4 23:47:01.535938 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Nov 4 23:47:01.535944 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Nov 4 23:47:01.535950 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Nov 4 23:47:01.535957 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Nov 4 23:47:01.535962 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Nov 4 23:47:01.535968 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Nov 4 23:47:01.535974 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Nov 4 23:47:01.535980 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Nov 4 23:47:01.535986 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Nov 4 23:47:01.535992 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Nov 4 23:47:01.535998 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Nov 4 23:47:01.536005 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 23:47:01.536011 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Nov 4 23:47:01.536018 kernel: TSC deadline timer available
Nov 4 23:47:01.536024 kernel: CPU topo: Max. logical packages: 128
Nov 4 23:47:01.536030 kernel: CPU topo: Max. logical dies: 128
Nov 4 23:47:01.536036 kernel: CPU topo: Max. dies per package: 1
Nov 4 23:47:01.536042 kernel: CPU topo: Max. threads per core: 1
Nov 4 23:47:01.536049 kernel: CPU topo: Num. cores per package: 1
Nov 4 23:47:01.536055 kernel: CPU topo: Num. threads per package: 1
Nov 4 23:47:01.536061 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs
Nov 4 23:47:01.536067 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Nov 4 23:47:01.536131 kernel: Booting paravirtualized kernel on VMware hypervisor
Nov 4 23:47:01.536138 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 23:47:01.536145 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Nov 4 23:47:01.536151 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Nov 4 23:47:01.536159 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Nov 4 23:47:01.536165 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Nov 4 23:47:01.536171 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Nov 4 23:47:01.536177 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Nov 4 23:47:01.536183 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Nov 4 23:47:01.536189 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Nov 4 23:47:01.536195 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Nov 4 23:47:01.536203 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Nov 4 23:47:01.536209 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Nov 4 23:47:01.536215 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Nov 4 23:47:01.536220 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Nov 4 23:47:01.536227 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Nov 4 23:47:01.536233 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Nov 4 23:47:01.536239 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Nov 4 23:47:01.536246 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Nov 4 23:47:01.536252 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Nov 4 23:47:01.536258 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Nov 4 23:47:01.536265 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:47:01.536272 kernel: random: crng init done
Nov 4 23:47:01.536278 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Nov 4 23:47:01.536285 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Nov 4 23:47:01.536291 kernel: printk: log_buf_len min size: 262144 bytes
Nov 4 23:47:01.536298 kernel: printk: log_buf_len: 1048576 bytes
Nov 4 23:47:01.536304 kernel: printk: early log buf free: 245688(93%)
Nov 4 23:47:01.536310 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 23:47:01.536316 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 4 23:47:01.536322 kernel: Fallback order for Node 0: 0
Nov 4 23:47:01.536328 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524157
Nov 4 23:47:01.536336 kernel: Policy zone: DMA32
Nov 4 23:47:01.536342 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 23:47:01.536349 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Nov 4 23:47:01.536355 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 23:47:01.536361 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 23:47:01.536367 kernel: Dynamic Preempt: voluntary
Nov 4 23:47:01.536373 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 23:47:01.536381 kernel: rcu: RCU event tracing is enabled.
Nov 4 23:47:01.536387 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Nov 4 23:47:01.536393 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 23:47:01.536399 kernel: Rude variant of Tasks RCU enabled.
Nov 4 23:47:01.536405 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 23:47:01.536411 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 23:47:01.536417 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Nov 4 23:47:01.536423 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 4 23:47:01.536431 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 4 23:47:01.536437 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 4 23:47:01.536444 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Nov 4 23:47:01.536450 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Nov 4 23:47:01.536456 kernel: Console: colour VGA+ 80x25
Nov 4 23:47:01.536462 kernel: printk: legacy console [tty0] enabled
Nov 4 23:47:01.536468 kernel: printk: legacy console [ttyS0] enabled
Nov 4 23:47:01.536476 kernel: ACPI: Core revision 20240827
Nov 4 23:47:01.536482 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Nov 4 23:47:01.536488 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 23:47:01.536495 kernel: x2apic enabled
Nov 4 23:47:01.536501 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 23:47:01.536507 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 4 23:47:01.536513 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Nov 4 23:47:01.536521 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Nov 4 23:47:01.536527 kernel: Disabled fast string operations
Nov 4 23:47:01.536533 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 4 23:47:01.536539 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 4 23:47:01.536546 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 23:47:01.536552 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Nov 4 23:47:01.536558 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 4 23:47:01.536566 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 4 23:47:01.536572 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 4 23:47:01.536578 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 23:47:01.536584 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 23:47:01.536591 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 4 23:47:01.536597 kernel: SRBDS: Unknown: Dependent on hypervisor status
Nov 4 23:47:01.536603 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 4 23:47:01.536611 kernel: active return thunk: its_return_thunk
Nov 4 23:47:01.536617 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 4 23:47:01.536623 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 23:47:01.536630 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 23:47:01.536636 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 23:47:01.536642 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 23:47:01.536648 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 4 23:47:01.536655 kernel: Freeing SMP alternatives memory: 32K
Nov 4 23:47:01.536662 kernel: pid_max: default: 131072 minimum: 1024
Nov 4 23:47:01.536668 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 23:47:01.536674 kernel: landlock: Up and running.
Nov 4 23:47:01.536681 kernel: SELinux: Initializing.
Nov 4 23:47:01.536687 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 4 23:47:01.536693 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 4 23:47:01.536699 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Nov 4 23:47:01.536706 kernel: Performance Events: Skylake events, core PMU driver.
Nov 4 23:47:01.536713 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Nov 4 23:47:01.536719 kernel: core: CPUID marked event: 'instructions' unavailable
Nov 4 23:47:01.536725 kernel: core: CPUID marked event: 'bus cycles' unavailable
Nov 4 23:47:01.536731 kernel: core: CPUID marked event: 'cache references' unavailable
Nov 4 23:47:01.536737 kernel: core: CPUID marked event: 'cache misses' unavailable
Nov 4 23:47:01.536744 kernel: core: CPUID marked event: 'branch instructions' unavailable
Nov 4 23:47:01.536750 kernel: core: CPUID marked event: 'branch misses' unavailable
Nov 4 23:47:01.536756 kernel: ... version: 1
Nov 4 23:47:01.536762 kernel: ... bit width: 48
Nov 4 23:47:01.536769 kernel: ... generic registers: 4
Nov 4 23:47:01.536775 kernel: ... value mask: 0000ffffffffffff
Nov 4 23:47:01.536781 kernel: ... max period: 000000007fffffff
Nov 4 23:47:01.536788 kernel: ... fixed-purpose events: 0
Nov 4 23:47:01.536795 kernel: ... event mask: 000000000000000f
Nov 4 23:47:01.536801 kernel: signal: max sigframe size: 1776
Nov 4 23:47:01.536808 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 23:47:01.536814 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 23:47:01.536821 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level
Nov 4 23:47:01.536827 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 4 23:47:01.536833 kernel: smp: Bringing up secondary CPUs ...
Nov 4 23:47:01.536840 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 23:47:01.536846 kernel: .... node #0, CPUs: #1
Nov 4 23:47:01.536852 kernel: Disabled fast string operations
Nov 4 23:47:01.536858 kernel: smp: Brought up 1 node, 2 CPUs
Nov 4 23:47:01.536865 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Nov 4 23:47:01.536871 kernel: Memory: 1946748K/2096628K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 138500K reserved, 0K cma-reserved)
Nov 4 23:47:01.536877 kernel: devtmpfs: initialized
Nov 4 23:47:01.536885 kernel: x86/mm: Memory block size: 128MB
Nov 4 23:47:01.536891 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Nov 4 23:47:01.536897 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 23:47:01.536903 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Nov 4 23:47:01.536910 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 23:47:01.536916 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 23:47:01.536922 kernel: audit: initializing netlink subsys (disabled)
Nov 4 23:47:01.536930 kernel: audit: type=2000 audit(1762300018.292:1): state=initialized audit_enabled=0 res=1
Nov 4 23:47:01.536936 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 23:47:01.536942 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 23:47:01.536948 kernel: cpuidle: using governor menu
Nov 4 23:47:01.536954 kernel: Simple Boot Flag at 0x36 set to 0x80
Nov 4 23:47:01.536960 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 23:47:01.536967 kernel: dca service started, version 1.12.1
Nov 4 23:47:01.536974 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f]
Nov 4 23:47:01.536988 kernel: PCI: Using configuration type 1 for base access
Nov 4 23:47:01.536995 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 23:47:01.537002 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 4 23:47:01.537009 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 4 23:47:01.537015 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 23:47:01.537021 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 23:47:01.537030 kernel: ACPI: Added _OSI(Module Device)
Nov 4 23:47:01.537036 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 23:47:01.537043 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 23:47:01.537049 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 23:47:01.537056 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Nov 4 23:47:01.537062 kernel: ACPI: Interpreter enabled
Nov 4 23:47:01.537081 kernel: ACPI: PM: (supports S0 S1 S5)
Nov 4 23:47:01.537091 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 23:47:01.537101 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 23:47:01.537107 kernel: PCI: Using E820 reservations for host bridge windows
Nov 4 23:47:01.537114 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Nov 4 23:47:01.537120 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Nov 4 23:47:01.537233 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 23:47:01.537303 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Nov 4 23:47:01.537375 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Nov 4 23:47:01.537385 kernel: PCI host bridge to bus 0000:00
Nov 4 23:47:01.537452 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 23:47:01.537512 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Nov 4 23:47:01.537570 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 4 23:47:01.537635 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 23:47:01.537693 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Nov 4 23:47:01.537751 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Nov 4 23:47:01.537828 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint
Nov 4 23:47:01.537904 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge
Nov 4 23:47:01.537971 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Nov 4 23:47:01.538046 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Nov 4 23:47:01.538132 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint
Nov 4 23:47:01.538203 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f]
Nov 4 23:47:01.538357 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Nov 4 23:47:01.538433 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Nov 4 23:47:01.538499 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Nov 4 23:47:01.538567 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk
Nov 4 23:47:01.538655 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 4 23:47:01.539669 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Nov 4 23:47:01.539747 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Nov 4 23:47:01.539821 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint
Nov 4 23:47:01.541725 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf]
Nov 4 23:47:01.541808 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit]
Nov 4 23:47:01.541884 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint
Nov 4 23:47:01.541957 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f]
Nov 4 23:47:01.542025 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref]
Nov 4 23:47:01.542106 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff]
Nov 4 23:47:01.542175 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref]
Nov 4 23:47:01.543104 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 4 23:47:01.543185 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge
Nov 4 23:47:01.543257 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Nov 4 23:47:01.543328 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Nov 4 23:47:01.543394 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Nov 4 23:47:01.543459 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Nov 4 23:47:01.543528 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Nov 4 23:47:01.543601 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Nov 4 23:47:01.543668 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Nov 4 23:47:01.543734 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Nov 4 23:47:01.543799 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Nov 4 23:47:01.543872 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Nov 4 23:47:01.543937 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Nov 4 23:47:01.544005 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Nov 4 23:47:01.545087 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Nov 4 23:47:01.545173 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Nov 4 23:47:01.545243 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Nov 4 23:47:01.545315 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Nov 4 23:47:01.545383 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Nov 4 23:47:01.545455 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Nov 4 23:47:01.545522 kernel: pci 0000:00:15.2: bridge window [mem
0xfcd00000-0xfcdfffff] Nov 4 23:47:01.545588 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 4 23:47:01.545653 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.545724 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.545792 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 4 23:47:01.545858 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 4 23:47:01.545924 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 4 23:47:01.545989 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.546058 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.546133 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 4 23:47:01.546200 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 4 23:47:01.546265 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 4 23:47:01.546329 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.546399 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.546465 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 4 23:47:01.546529 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 4 23:47:01.546600 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 4 23:47:01.546669 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.546738 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.546804 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 4 23:47:01.546869 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 4 23:47:01.546933 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 4 23:47:01.547002 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Nov 4 
23:47:01.547582 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.547678 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 4 23:47:01.547750 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 4 23:47:01.547818 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 4 23:47:01.547885 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.547958 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.548026 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 4 23:47:01.548109 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 4 23:47:01.548177 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 4 23:47:01.548244 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.548316 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.548390 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 4 23:47:01.548457 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 4 23:47:01.548524 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 4 23:47:01.548589 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 4 23:47:01.548696 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.548770 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.548839 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 4 23:47:01.548904 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 4 23:47:01.548969 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 4 23:47:01.549034 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 4 23:47:01.549114 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.549186 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe 
Root Port Nov 4 23:47:01.549252 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 4 23:47:01.549317 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 4 23:47:01.549382 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 4 23:47:01.549447 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.549515 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.549585 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 4 23:47:01.549650 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 4 23:47:01.549715 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 4 23:47:01.549779 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.549849 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.549914 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 4 23:47:01.549989 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 4 23:47:01.551482 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 4 23:47:01.551576 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.551651 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.551720 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 4 23:47:01.551786 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 4 23:47:01.551855 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 4 23:47:01.551920 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.552400 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.552469 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 4 23:47:01.552535 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 4 23:47:01.552599 kernel: pci 0000:00:16.7: 
bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 4 23:47:01.552667 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.552736 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.552801 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 4 23:47:01.552866 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 4 23:47:01.552945 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 4 23:47:01.553014 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 4 23:47:01.554140 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.554221 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.554296 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 4 23:47:01.554370 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 4 23:47:01.554436 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 4 23:47:01.554515 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 4 23:47:01.554594 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.556578 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.556674 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 4 23:47:01.556749 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 4 23:47:01.556816 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 4 23:47:01.556882 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 4 23:47:01.556947 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.557017 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.557100 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 4 23:47:01.557169 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 4 23:47:01.557234 
kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 4 23:47:01.557299 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.557368 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.557434 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 4 23:47:01.557502 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 4 23:47:01.557566 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 4 23:47:01.557631 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.557700 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.557764 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 4 23:47:01.557829 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 4 23:47:01.557895 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 4 23:47:01.557960 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.558034 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.558116 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 4 23:47:01.558189 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 4 23:47:01.558276 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 4 23:47:01.558367 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.558459 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.558529 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 4 23:47:01.558599 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 4 23:47:01.558673 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 4 23:47:01.558739 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.558813 kernel: pci 0000:00:18.0: [15ad:07a0] 
type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.558878 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 4 23:47:01.558943 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 4 23:47:01.559007 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 4 23:47:01.559082 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 4 23:47:01.559149 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.559230 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.559298 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 4 23:47:01.559363 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 4 23:47:01.559427 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 4 23:47:01.559492 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 4 23:47:01.559557 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.559637 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.559703 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 4 23:47:01.559770 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 4 23:47:01.559835 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 4 23:47:01.559899 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.559970 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.560038 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 4 23:47:01.560370 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 4 23:47:01.560438 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 4 23:47:01.560504 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.560574 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.560668 
kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 4 23:47:01.560738 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 4 23:47:01.560802 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 4 23:47:01.560867 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.560936 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.561001 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 4 23:47:01.561066 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 4 23:47:01.561145 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 4 23:47:01.561212 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.561279 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.561345 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 4 23:47:01.561409 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 4 23:47:01.561473 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 4 23:47:01.561539 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.561616 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 4 23:47:01.561681 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 4 23:47:01.561745 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 4 23:47:01.561817 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 4 23:47:01.561882 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.561952 kernel: pci_bus 0000:01: extended config space not accessible Nov 4 23:47:01.562019 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 4 23:47:01.562105 kernel: pci_bus 0000:02: extended config space not accessible Nov 4 23:47:01.562117 kernel: acpiphp: Slot [32] registered Nov 4 23:47:01.562124 kernel: acpiphp: Slot 
[33] registered Nov 4 23:47:01.562130 kernel: acpiphp: Slot [34] registered Nov 4 23:47:01.562139 kernel: acpiphp: Slot [35] registered Nov 4 23:47:01.562146 kernel: acpiphp: Slot [36] registered Nov 4 23:47:01.562152 kernel: acpiphp: Slot [37] registered Nov 4 23:47:01.562159 kernel: acpiphp: Slot [38] registered Nov 4 23:47:01.562165 kernel: acpiphp: Slot [39] registered Nov 4 23:47:01.562172 kernel: acpiphp: Slot [40] registered Nov 4 23:47:01.562178 kernel: acpiphp: Slot [41] registered Nov 4 23:47:01.562184 kernel: acpiphp: Slot [42] registered Nov 4 23:47:01.562192 kernel: acpiphp: Slot [43] registered Nov 4 23:47:01.562199 kernel: acpiphp: Slot [44] registered Nov 4 23:47:01.562205 kernel: acpiphp: Slot [45] registered Nov 4 23:47:01.562212 kernel: acpiphp: Slot [46] registered Nov 4 23:47:01.562218 kernel: acpiphp: Slot [47] registered Nov 4 23:47:01.562225 kernel: acpiphp: Slot [48] registered Nov 4 23:47:01.562231 kernel: acpiphp: Slot [49] registered Nov 4 23:47:01.562239 kernel: acpiphp: Slot [50] registered Nov 4 23:47:01.562245 kernel: acpiphp: Slot [51] registered Nov 4 23:47:01.562252 kernel: acpiphp: Slot [52] registered Nov 4 23:47:01.562258 kernel: acpiphp: Slot [53] registered Nov 4 23:47:01.562264 kernel: acpiphp: Slot [54] registered Nov 4 23:47:01.562271 kernel: acpiphp: Slot [55] registered Nov 4 23:47:01.562277 kernel: acpiphp: Slot [56] registered Nov 4 23:47:01.562284 kernel: acpiphp: Slot [57] registered Nov 4 23:47:01.562291 kernel: acpiphp: Slot [58] registered Nov 4 23:47:01.562297 kernel: acpiphp: Slot [59] registered Nov 4 23:47:01.562304 kernel: acpiphp: Slot [60] registered Nov 4 23:47:01.562310 kernel: acpiphp: Slot [61] registered Nov 4 23:47:01.562316 kernel: acpiphp: Slot [62] registered Nov 4 23:47:01.562323 kernel: acpiphp: Slot [63] registered Nov 4 23:47:01.562400 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Nov 4 23:47:01.562470 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff 
window] (subtractive decode) Nov 4 23:47:01.562537 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Nov 4 23:47:01.562602 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Nov 4 23:47:01.562666 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Nov 4 23:47:01.562730 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Nov 4 23:47:01.562804 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Nov 4 23:47:01.562874 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Nov 4 23:47:01.562940 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Nov 4 23:47:01.563005 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Nov 4 23:47:01.563081 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 4 23:47:01.563154 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Nov 4 23:47:01.563224 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 4 23:47:01.563295 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 4 23:47:01.563362 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 4 23:47:01.563429 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 4 23:47:01.563496 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 4 23:47:01.563562 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 4 23:47:01.563629 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 4 23:47:01.563698 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 4 23:47:01.563771 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Nov 4 23:47:01.563838 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Nov 4 23:47:01.563902 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Nov 4 23:47:01.563967 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Nov 4 23:47:01.564032 kernel: pci 0000:0b:00.0: 
BAR 3 [io 0x5000-0x500f] Nov 4 23:47:01.564114 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Nov 4 23:47:01.564182 kernel: pci 0000:0b:00.0: supports D1 D2 Nov 4 23:47:01.564247 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 4 23:47:01.564313 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Nov 4 23:47:01.564378 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 4 23:47:01.564446 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 4 23:47:01.564516 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 4 23:47:01.564582 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 4 23:47:01.564652 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 4 23:47:01.564718 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 4 23:47:01.564785 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 4 23:47:01.564851 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 4 23:47:01.564920 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 4 23:47:01.564986 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 4 23:47:01.565051 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 4 23:47:01.565126 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 4 23:47:01.565194 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 4 23:47:01.565261 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 4 23:47:01.565329 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 4 23:47:01.565420 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 4 23:47:01.565490 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 4 23:47:01.565556 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 4 23:47:01.565623 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 4 23:47:01.565690 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 4 23:47:01.565759 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 4 23:47:01.565824 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 4 23:47:01.565889 kernel: pci 
0000:00:18.6: PCI bridge to [bus 21] Nov 4 23:47:01.565954 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 4 23:47:01.565964 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Nov 4 23:47:01.565971 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Nov 4 23:47:01.565979 kernel: ACPI: PCI: Interrupt link LNKB disabled Nov 4 23:47:01.565986 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 4 23:47:01.565993 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Nov 4 23:47:01.566000 kernel: iommu: Default domain type: Translated Nov 4 23:47:01.566007 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 4 23:47:01.566013 kernel: PCI: Using ACPI for IRQ routing Nov 4 23:47:01.566020 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 4 23:47:01.566026 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Nov 4 23:47:01.566034 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Nov 4 23:47:01.566118 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Nov 4 23:47:01.566185 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Nov 4 23:47:01.566250 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 4 23:47:01.566259 kernel: vgaarb: loaded Nov 4 23:47:01.566266 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Nov 4 23:47:01.566273 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Nov 4 23:47:01.566283 kernel: clocksource: Switched to clocksource tsc-early Nov 4 23:47:01.566289 kernel: VFS: Disk quotas dquot_6.6.0 Nov 4 23:47:01.566296 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 4 23:47:01.566303 kernel: pnp: PnP ACPI init Nov 4 23:47:01.566373 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Nov 4 23:47:01.566435 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Nov 4 23:47:01.566521 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Nov 4 
23:47:01.566588 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Nov 4 23:47:01.566653 kernel: pnp 00:06: [dma 2] Nov 4 23:47:01.566719 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Nov 4 23:47:01.566779 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Nov 4 23:47:01.566841 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Nov 4 23:47:01.566850 kernel: pnp: PnP ACPI: found 8 devices Nov 4 23:47:01.566858 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 4 23:47:01.566865 kernel: NET: Registered PF_INET protocol family Nov 4 23:47:01.566871 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 4 23:47:01.566878 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 4 23:47:01.566884 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 4 23:47:01.566893 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 4 23:47:01.566899 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 4 23:47:01.566906 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 4 23:47:01.566913 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 4 23:47:01.566919 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 4 23:47:01.566926 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 4 23:47:01.566932 kernel: NET: Registered PF_XDP protocol family Nov 4 23:47:01.566999 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Nov 4 23:47:01.567066 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 4 23:47:01.567148 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 4 23:47:01.567214 kernel: pci 0000:00:15.5: bridge 
window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 4 23:47:01.567279 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 4 23:47:01.567345 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Nov 4 23:47:01.567412 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Nov 4 23:47:01.567478 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Nov 4 23:47:01.567543 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Nov 4 23:47:01.567613 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Nov 4 23:47:01.567679 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Nov 4 23:47:01.567744 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Nov 4 23:47:01.567811 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Nov 4 23:47:01.567879 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Nov 4 23:47:01.567944 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Nov 4 23:47:01.568009 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Nov 4 23:47:01.568090 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Nov 4 23:47:01.568160 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Nov 4 23:47:01.568248 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Nov 4 23:47:01.568324 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Nov 4 23:47:01.568390 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Nov 4 23:47:01.568455 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 
21] add_size 1000 Nov 4 23:47:01.568521 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Nov 4 23:47:01.568587 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Nov 4 23:47:01.568670 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Nov 4 23:47:01.568739 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.568806 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.568871 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.568935 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.569001 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.569066 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.569149 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.569219 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.569285 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.569350 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.569414 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.569478 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.569542 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.569608 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.569672 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.569736 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.569800 kernel: pci 0000:00:16.6: 
bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.569864 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.569929 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.570991 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.571075 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.571146 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.571214 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.571279 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.571346 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.571411 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.571479 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.571544 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.571620 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.571689 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.571754 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.571819 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.571886 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.571952 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.572016 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.573588 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.573669 kernel: pci 0000:00:18.5: 
bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.573738 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.573810 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.573876 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.573943 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.574009 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.574083 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.574152 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.574218 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.574285 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.574350 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.574414 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.574479 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.574543 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.574608 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.574677 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.574742 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.574809 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.574874 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.574940 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.575005 kernel: pci 0000:00:17.6: 
bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.576087 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.576170 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.576241 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.576313 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.576381 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.576449 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.576514 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.576579 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.576649 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.576797 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.577079 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.578156 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.578231 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.578490 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.578560 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.578636 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.578702 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.578773 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.578838 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.578905 kernel: pci 0000:00:15.6: 
bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.578970 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.579036 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.580135 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.580227 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.580311 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.580381 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Nov 4 23:47:01.580452 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Nov 4 23:47:01.580518 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 4 23:47:01.580602 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Nov 4 23:47:01.580675 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 4 23:47:01.580753 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 4 23:47:01.580819 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 4 23:47:01.580888 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Nov 4 23:47:01.580955 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 4 23:47:01.581019 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 4 23:47:01.581110 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 4 23:47:01.581180 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Nov 4 23:47:01.581247 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 4 23:47:01.581311 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 4 23:47:01.581374 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 4 23:47:01.581438 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 4 23:47:01.581504 kernel: pci 0000:00:15.2: PCI 
bridge to [bus 05] Nov 4 23:47:01.581568 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 4 23:47:01.581635 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 4 23:47:01.581702 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 4 23:47:01.581768 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 4 23:47:01.581832 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 4 23:47:01.581895 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 4 23:47:01.581960 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 4 23:47:01.582024 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 4 23:47:01.582142 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 4 23:47:01.582219 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 4 23:47:01.582284 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 4 23:47:01.582347 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 4 23:47:01.582411 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 4 23:47:01.582475 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 4 23:47:01.582539 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 4 23:47:01.582605 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 4 23:47:01.582669 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 4 23:47:01.582732 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 4 23:47:01.582800 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Nov 4 23:47:01.582864 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 4 23:47:01.582930 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 4 23:47:01.582996 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 4 23:47:01.583060 kernel: pci 0000:00:16.0: bridge window 
[mem 0xc0200000-0xc03fffff 64bit pref] Nov 4 23:47:01.583147 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 4 23:47:01.583211 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 4 23:47:01.583276 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 4 23:47:01.583342 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 4 23:47:01.583408 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 4 23:47:01.583475 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 4 23:47:01.583540 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 4 23:47:01.583605 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 4 23:47:01.583674 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 4 23:47:01.583737 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 4 23:47:01.583800 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 4 23:47:01.583865 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 4 23:47:01.583932 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 4 23:47:01.583996 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 4 23:47:01.584060 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 4 23:47:01.584135 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 4 23:47:01.584199 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 4 23:47:01.584267 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 4 23:47:01.584335 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 4 23:47:01.584400 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 4 23:47:01.584466 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 4 23:47:01.584531 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 4 23:47:01.584600 kernel: pci 0000:00:16.7: bridge window 
[mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 4 23:47:01.584670 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 4 23:47:01.584736 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 4 23:47:01.584800 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 4 23:47:01.584864 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 4 23:47:01.584930 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 4 23:47:01.584995 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 4 23:47:01.585059 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 4 23:47:01.585137 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 4 23:47:01.585207 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 4 23:47:01.585271 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 4 23:47:01.585336 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 4 23:47:01.585401 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 4 23:47:01.585467 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 4 23:47:01.585532 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 4 23:47:01.585596 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 4 23:47:01.585663 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 4 23:47:01.585735 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 4 23:47:01.585808 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 4 23:47:01.585874 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 4 23:47:01.585939 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 4 23:47:01.586002 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 4 23:47:01.586084 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 4 23:47:01.586154 kernel: pci 0000:00:17.6: bridge window [mem 
0xfbb00000-0xfbbfffff] Nov 4 23:47:01.586218 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 4 23:47:01.586285 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 4 23:47:01.586350 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 4 23:47:01.586416 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 4 23:47:01.586486 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 4 23:47:01.586551 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 4 23:47:01.586620 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 4 23:47:01.586685 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 4 23:47:01.586750 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 4 23:47:01.586814 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 4 23:47:01.586878 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 4 23:47:01.586944 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 4 23:47:01.587019 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 4 23:47:01.587111 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 4 23:47:01.587179 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 4 23:47:01.587247 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 4 23:47:01.587311 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 4 23:47:01.587376 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 4 23:47:01.587444 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 4 23:47:01.587509 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 4 23:47:01.588916 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 4 23:47:01.588989 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 4 23:47:01.589056 kernel: pci 0000:00:18.5: bridge window [mem 
0xfbe00000-0xfbefffff] Nov 4 23:47:01.589138 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 4 23:47:01.589209 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 4 23:47:01.589274 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 4 23:47:01.589338 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 4 23:47:01.589406 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 4 23:47:01.589471 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 4 23:47:01.589536 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 4 23:47:01.589602 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Nov 4 23:47:01.589666 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 4 23:47:01.589723 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 4 23:47:01.589780 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Nov 4 23:47:01.589837 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Nov 4 23:47:01.589900 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Nov 4 23:47:01.589963 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Nov 4 23:47:01.590021 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 4 23:47:01.591248 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Nov 4 23:47:01.591317 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 4 23:47:01.591378 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 4 23:47:01.591438 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Nov 4 23:47:01.591500 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Nov 4 23:47:01.591564 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Nov 4 23:47:01.591624 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Nov 4 23:47:01.591683 kernel: 
pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Nov 4 23:47:01.591747 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Nov 4 23:47:01.591809 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Nov 4 23:47:01.591868 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Nov 4 23:47:01.591931 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Nov 4 23:47:01.591990 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Nov 4 23:47:01.592049 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Nov 4 23:47:01.592147 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Nov 4 23:47:01.592212 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Nov 4 23:47:01.592275 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Nov 4 23:47:01.592335 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 4 23:47:01.592403 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Nov 4 23:47:01.592463 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Nov 4 23:47:01.592537 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Nov 4 23:47:01.592597 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Nov 4 23:47:01.592664 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Nov 4 23:47:01.592723 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Nov 4 23:47:01.592787 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Nov 4 23:47:01.592850 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Nov 4 23:47:01.592909 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Nov 4 23:47:01.592971 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Nov 4 23:47:01.593031 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Nov 4 23:47:01.593108 kernel: pci_bus 0000:0c: resource 2 [mem 
0xe7700000-0xe77fffff 64bit pref] Nov 4 23:47:01.593177 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Nov 4 23:47:01.593236 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Nov 4 23:47:01.593295 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Nov 4 23:47:01.593358 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Nov 4 23:47:01.593417 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 4 23:47:01.593480 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Nov 4 23:47:01.593542 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 4 23:47:01.593610 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Nov 4 23:47:01.593670 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Nov 4 23:47:01.593734 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Nov 4 23:47:01.593794 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Nov 4 23:47:01.593873 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Nov 4 23:47:01.593934 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 4 23:47:01.593997 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Nov 4 23:47:01.594057 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Nov 4 23:47:01.594148 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 4 23:47:01.594212 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Nov 4 23:47:01.594275 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Nov 4 23:47:01.594336 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Nov 4 23:47:01.594400 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Nov 4 23:47:01.594459 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Nov 4 23:47:01.594518 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Nov 4 
23:47:01.594583 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Nov 4 23:47:01.594642 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 4 23:47:01.594708 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Nov 4 23:47:01.594767 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 4 23:47:01.594830 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Nov 4 23:47:01.594891 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Nov 4 23:47:01.594954 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Nov 4 23:47:01.595013 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Nov 4 23:47:01.595134 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Nov 4 23:47:01.595199 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 4 23:47:01.595264 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Nov 4 23:47:01.595328 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Nov 4 23:47:01.595387 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Nov 4 23:47:01.595453 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Nov 4 23:47:01.595513 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Nov 4 23:47:01.595572 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Nov 4 23:47:01.595643 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Nov 4 23:47:01.595704 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Nov 4 23:47:01.595767 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Nov 4 23:47:01.595827 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 4 23:47:01.595891 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Nov 4 23:47:01.595951 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Nov 4 23:47:01.596020 
kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Nov 4 23:47:01.596158 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Nov 4 23:47:01.596226 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Nov 4 23:47:01.596286 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Nov 4 23:47:01.596350 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Nov 4 23:47:01.596414 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 4 23:47:01.596485 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 4 23:47:01.596496 kernel: PCI: CLS 32 bytes, default 64 Nov 4 23:47:01.596503 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 4 23:47:01.596510 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 4 23:47:01.596517 kernel: clocksource: Switched to clocksource tsc Nov 4 23:47:01.596526 kernel: Initialise system trusted keyrings Nov 4 23:47:01.596532 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 4 23:47:01.596539 kernel: Key type asymmetric registered Nov 4 23:47:01.596545 kernel: Asymmetric key parser 'x509' registered Nov 4 23:47:01.596552 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 4 23:47:01.596559 kernel: io scheduler mq-deadline registered Nov 4 23:47:01.596565 kernel: io scheduler kyber registered Nov 4 23:47:01.596573 kernel: io scheduler bfq registered Nov 4 23:47:01.596640 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Nov 4 23:47:01.596707 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.596774 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Nov 4 23:47:01.596840 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ 
Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.596907 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Nov 4 23:47:01.596975 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.597041 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Nov 4 23:47:01.597114 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.597182 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Nov 4 23:47:01.597247 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.597313 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Nov 4 23:47:01.597390 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.597457 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Nov 4 23:47:01.597523 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.597589 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Nov 4 23:47:01.597659 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.597725 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Nov 4 23:47:01.597793 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.597860 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Nov 4 23:47:01.597926 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.597993 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Nov 4 23:47:01.598058 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.598161 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Nov 4 23:47:01.598234 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.598310 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Nov 4 23:47:01.598379 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.598447 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Nov 4 23:47:01.598514 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.598582 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Nov 4 23:47:01.599187 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.599260 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Nov 4 23:47:01.599329 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.599398 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Nov 4 23:47:01.599464 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.599533 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Nov 4 23:47:01.599602 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- 
PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.599670 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Nov 4 23:47:01.599735 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.599803 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Nov 4 23:47:01.599869 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.599936 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Nov 4 23:47:01.600006 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.601098 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Nov 4 23:47:01.601185 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.601259 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Nov 4 23:47:01.601327 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.601399 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Nov 4 23:47:01.601467 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.601542 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Nov 4 23:47:01.601617 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.601685 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Nov 4 23:47:01.601752 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- 
AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.601820 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Nov 4 23:47:01.601886 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.601955 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Nov 4 23:47:01.602022 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.602098 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Nov 4 23:47:01.602165 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.602233 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Nov 4 23:47:01.602299 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.602369 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Nov 4 23:47:01.602434 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.602502 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Nov 4 23:47:01.602568 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 4 23:47:01.602580 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 4 23:47:01.602588 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 4 23:47:01.602596 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 4 23:47:01.602603 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Nov 4 23:47:01.602610 kernel: serio: i8042 
KBD port at 0x60,0x64 irq 1 Nov 4 23:47:01.602617 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 4 23:47:01.602687 kernel: rtc_cmos 00:01: registered as rtc0 Nov 4 23:47:01.602749 kernel: rtc_cmos 00:01: setting system clock to 2025-11-04T23:47:00 UTC (1762300020) Nov 4 23:47:01.602814 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Nov 4 23:47:01.602824 kernel: intel_pstate: CPU model not supported Nov 4 23:47:01.602832 kernel: NET: Registered PF_INET6 protocol family Nov 4 23:47:01.602839 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 4 23:47:01.602846 kernel: Segment Routing with IPv6 Nov 4 23:47:01.602853 kernel: In-situ OAM (IOAM) with IPv6 Nov 4 23:47:01.602860 kernel: NET: Registered PF_PACKET protocol family Nov 4 23:47:01.602870 kernel: Key type dns_resolver registered Nov 4 23:47:01.602877 kernel: IPI shorthand broadcast: enabled Nov 4 23:47:01.602884 kernel: sched_clock: Marking stable (1467020706, 175089362)->(1655753424, -13643356) Nov 4 23:47:01.602890 kernel: registered taskstats version 1 Nov 4 23:47:01.602898 kernel: Loading compiled-in X.509 certificates Nov 4 23:47:01.602905 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ace064fb6689a15889f35c6439909c760a72ef44' Nov 4 23:47:01.602912 kernel: Demotion targets for Node 0: null Nov 4 23:47:01.602920 kernel: Key type .fscrypt registered Nov 4 23:47:01.602927 kernel: Key type fscrypt-provisioning registered Nov 4 23:47:01.602934 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 4 23:47:01.602941 kernel: ima: Allocated hash algorithm: sha1 Nov 4 23:47:01.602948 kernel: ima: No architecture policies found Nov 4 23:47:01.602954 kernel: clk: Disabling unused clocks Nov 4 23:47:01.602961 kernel: Freeing unused kernel image (initmem) memory: 15936K Nov 4 23:47:01.602969 kernel: Write protecting the kernel read-only data: 40960k Nov 4 23:47:01.602976 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 4 23:47:01.602983 kernel: Run /init as init process Nov 4 23:47:01.602990 kernel: with arguments: Nov 4 23:47:01.602997 kernel: /init Nov 4 23:47:01.603003 kernel: with environment: Nov 4 23:47:01.603010 kernel: HOME=/ Nov 4 23:47:01.603017 kernel: TERM=linux Nov 4 23:47:01.603024 kernel: SCSI subsystem initialized Nov 4 23:47:01.603031 kernel: VMware PVSCSI driver - version 1.0.7.0-k Nov 4 23:47:01.603037 kernel: vmw_pvscsi: using 64bit dma Nov 4 23:47:01.603044 kernel: vmw_pvscsi: max_id: 16 Nov 4 23:47:01.603051 kernel: vmw_pvscsi: setting ring_pages to 8 Nov 4 23:47:01.603058 kernel: vmw_pvscsi: enabling reqCallThreshold Nov 4 23:47:01.603065 kernel: vmw_pvscsi: driver-based request coalescing enabled Nov 4 23:47:01.603091 kernel: vmw_pvscsi: using MSI-X Nov 4 23:47:01.603182 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Nov 4 23:47:01.603254 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Nov 4 23:47:01.603333 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Nov 4 23:47:01.603405 kernel: sd 0:0:0:0: [sda] 25804800 512-byte logical blocks: (13.2 GB/12.3 GiB) Nov 4 23:47:01.603475 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 4 23:47:01.603547 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Nov 4 23:47:01.603616 kernel: sd 0:0:0:0: [sda] Cache data unavailable Nov 4 23:47:01.603685 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Nov 4 23:47:01.603695 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 4 
23:47:01.603762 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 4 23:47:01.603772 kernel: libata version 3.00 loaded. Nov 4 23:47:01.603845 kernel: ata_piix 0000:00:07.1: version 2.13 Nov 4 23:47:01.603919 kernel: scsi host1: ata_piix Nov 4 23:47:01.603994 kernel: scsi host2: ata_piix Nov 4 23:47:01.604005 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Nov 4 23:47:01.604012 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Nov 4 23:47:01.604019 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Nov 4 23:47:01.604128 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Nov 4 23:47:01.604140 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 4 23:47:01.604147 kernel: device-mapper: uevent: version 1.0.3 Nov 4 23:47:01.604154 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 4 23:47:01.604224 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Nov 4 23:47:01.604234 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 4 23:47:01.604243 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 4 23:47:01.604312 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 4 23:47:01.604322 kernel: raid6: avx2x4 gen() 47450 MB/s Nov 4 23:47:01.604329 kernel: raid6: avx2x2 gen() 53024 MB/s Nov 4 23:47:01.604336 kernel: raid6: avx2x1 gen() 44493 MB/s Nov 4 23:47:01.604343 kernel: raid6: using algorithm avx2x2 gen() 53024 MB/s Nov 4 23:47:01.604350 kernel: raid6: .... 
xor() 32263 MB/s, rmw enabled Nov 4 23:47:01.604357 kernel: raid6: using avx2x2 recovery algorithm Nov 4 23:47:01.604368 kernel: xor: automatically using best checksumming function avx Nov 4 23:47:01.604375 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 4 23:47:01.604382 kernel: BTRFS: device fsid f719dc90-1cf7-4f08-a80f-0dda441372cc devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (196) Nov 4 23:47:01.604390 kernel: BTRFS info (device dm-0): first mount of filesystem f719dc90-1cf7-4f08-a80f-0dda441372cc Nov 4 23:47:01.604397 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:47:01.604404 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 4 23:47:01.604410 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 4 23:47:01.604419 kernel: BTRFS info (device dm-0): enabling free space tree Nov 4 23:47:01.604426 kernel: loop: module loaded Nov 4 23:47:01.604433 kernel: loop0: detected capacity change from 0 to 100120 Nov 4 23:47:01.604439 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 4 23:47:01.604448 systemd[1]: Successfully made /usr/ read-only. Nov 4 23:47:01.604458 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 23:47:01.604467 systemd[1]: Detected virtualization vmware. Nov 4 23:47:01.604474 systemd[1]: Detected architecture x86-64. Nov 4 23:47:01.604480 systemd[1]: Running in initrd. Nov 4 23:47:01.604487 systemd[1]: No hostname configured, using default hostname. Nov 4 23:47:01.604495 systemd[1]: Hostname set to . Nov 4 23:47:01.604501 systemd[1]: Initializing machine ID from random generator. 
Nov 4 23:47:01.604509 systemd[1]: Queued start job for default target initrd.target. Nov 4 23:47:01.604517 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 23:47:01.604525 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 23:47:01.604532 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 23:47:01.604541 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 4 23:47:01.604548 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 23:47:01.604557 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 4 23:47:01.604564 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 4 23:47:01.604571 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 23:47:01.604579 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:47:01.604586 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 4 23:47:01.604593 systemd[1]: Reached target paths.target - Path Units. Nov 4 23:47:01.604606 systemd[1]: Reached target slices.target - Slice Units. Nov 4 23:47:01.604613 systemd[1]: Reached target swap.target - Swaps. Nov 4 23:47:01.604620 systemd[1]: Reached target timers.target - Timer Units. Nov 4 23:47:01.604627 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 23:47:01.604634 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 23:47:01.604642 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 4 23:47:01.604649 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Nov 4 23:47:01.604657 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 23:47:01.604665 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 23:47:01.604672 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 23:47:01.604679 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 23:47:01.604687 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Nov 4 23:47:01.604694 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 4 23:47:01.604701 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 23:47:01.604710 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 4 23:47:01.604717 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 4 23:47:01.604725 systemd[1]: Starting systemd-fsck-usr.service... Nov 4 23:47:01.604732 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 23:47:01.604739 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 23:47:01.604746 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:47:01.604755 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 4 23:47:01.604762 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:47:01.604770 systemd[1]: Finished systemd-fsck-usr.service. Nov 4 23:47:01.604777 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 23:47:01.604801 systemd-journald[333]: Collecting audit messages is disabled. Nov 4 23:47:01.604819 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Nov 4 23:47:01.604827 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 4 23:47:01.604836 kernel: Bridge firewalling registered Nov 4 23:47:01.604843 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 23:47:01.604851 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:47:01.604858 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 23:47:01.604865 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 4 23:47:01.604873 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:47:01.604881 systemd-journald[333]: Journal started Nov 4 23:47:01.604897 systemd-journald[333]: Runtime Journal (/run/log/journal/5ee95684431541e0b18e676dd1c50896) is 4.8M, max 38.5M, 33.7M free. Nov 4 23:47:01.608097 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 23:47:01.580707 systemd-modules-load[334]: Inserted module 'br_netfilter' Nov 4 23:47:01.609862 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 23:47:01.612140 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 23:47:01.622539 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:47:01.622979 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 23:47:01.625914 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 4 23:47:01.628511 systemd-tmpfiles[352]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 4 23:47:01.628936 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 4 23:47:01.632373 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:47:01.632658 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:47:01.640061 dracut-cmdline[373]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 ip=139.178.70.101::139.178.70.97:28::ens192:off:1.1.1.1:1.0.0.1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2 Nov 4 23:47:01.665698 systemd-resolved[374]: Positive Trust Anchors: Nov 4 23:47:01.665709 systemd-resolved[374]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 23:47:01.665711 systemd-resolved[374]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 23:47:01.665733 systemd-resolved[374]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 23:47:01.684835 systemd-resolved[374]: Defaulting to hostname 'linux'. Nov 4 23:47:01.685474 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 23:47:01.685629 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Nov 4 23:47:01.715090 kernel: Loading iSCSI transport class v2.0-870. Nov 4 23:47:01.727086 kernel: iscsi: registered transport (tcp) Nov 4 23:47:01.752088 kernel: iscsi: registered transport (qla4xxx) Nov 4 23:47:01.752132 kernel: QLogic iSCSI HBA Driver Nov 4 23:47:01.770390 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 23:47:01.788270 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:47:01.788646 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 23:47:01.815621 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 4 23:47:01.816509 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 4 23:47:01.817132 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 4 23:47:01.843623 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 4 23:47:01.844614 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 23:47:01.863437 systemd-udevd[621]: Using default interface naming scheme 'v257'. Nov 4 23:47:01.869876 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:47:01.871518 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 4 23:47:01.889955 dracut-pre-trigger[686]: rd.md=0: removing MD RAID activation Nov 4 23:47:01.891974 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 23:47:01.893679 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 23:47:01.907993 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 23:47:01.910146 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 4 23:47:01.921267 systemd-networkd[729]: lo: Link UP Nov 4 23:47:01.921272 systemd-networkd[729]: lo: Gained carrier Nov 4 23:47:01.921537 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 23:47:01.921691 systemd[1]: Reached target network.target - Network. Nov 4 23:47:01.994175 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:47:01.997152 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 4 23:47:02.062775 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Nov 4 23:47:02.070726 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Nov 4 23:47:02.076679 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Nov 4 23:47:02.083829 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 4 23:47:02.084700 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 4 23:47:02.145308 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Nov 4 23:47:02.145350 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Nov 4 23:47:02.145488 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Nov 4 23:47:02.178465 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Nov 4 23:47:02.191097 kernel: cryptd: max_cpu_qlen set to 1000 Nov 4 23:47:02.194093 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Nov 4 23:47:02.195042 (udev-worker)[755]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Nov 4 23:47:02.195413 systemd-networkd[729]: eth0: Interface name change detected, renamed to ens192. Nov 4 23:47:02.204126 kernel: AES CTR mode by8 optimization enabled Nov 4 23:47:02.206063 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:47:02.206709 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 4 23:47:02.207252 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:47:02.210285 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:47:02.232783 systemd-networkd[729]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Nov 4 23:47:02.238328 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 4 23:47:02.238455 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 4 23:47:02.234079 systemd-networkd[729]: ens192: Link UP Nov 4 23:47:02.234082 systemd-networkd[729]: ens192: Gained carrier Nov 4 23:47:02.251638 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:47:02.285218 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 4 23:47:02.285584 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 23:47:02.285728 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 23:47:02.285925 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 23:47:02.286660 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 4 23:47:02.301651 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 4 23:47:03.166843 disk-uuid[788]: Warning: The kernel is still using the old partition table. Nov 4 23:47:03.166843 disk-uuid[788]: The new table will be used at the next reboot or after you Nov 4 23:47:03.166843 disk-uuid[788]: run partprobe(8) or kpartx(8) Nov 4 23:47:03.166843 disk-uuid[788]: The operation has completed successfully. Nov 4 23:47:03.174463 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 4 23:47:03.174528 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 4 23:47:03.175612 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Nov 4 23:47:03.194087 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (887) Nov 4 23:47:03.196600 kernel: BTRFS info (device sda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:47:03.196624 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:47:03.200090 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 4 23:47:03.200121 kernel: BTRFS info (device sda6): enabling free space tree Nov 4 23:47:03.204088 kernel: BTRFS info (device sda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:47:03.204686 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 4 23:47:03.205584 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 4 23:47:03.345200 ignition[906]: Ignition 2.22.0 Nov 4 23:47:03.345462 ignition[906]: Stage: fetch-offline Nov 4 23:47:03.345489 ignition[906]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:47:03.345495 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 4 23:47:03.345544 ignition[906]: parsed url from cmdline: "" Nov 4 23:47:03.345546 ignition[906]: no config URL provided Nov 4 23:47:03.345549 ignition[906]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 23:47:03.345554 ignition[906]: no config at "/usr/lib/ignition/user.ign" Nov 4 23:47:03.345915 ignition[906]: config successfully fetched Nov 4 23:47:03.345933 ignition[906]: parsing config with SHA512: d64eee0daf0a67804400c31ecd8e5cdcaa3a8ded8e59cd06e65d5fee0ff6bfa3d784c1324422522aa7f9369a4682addd09a0fe357d6356dd1dd50f15b73fdb88 Nov 4 23:47:03.349974 unknown[906]: fetched base config from "system" Nov 4 23:47:03.349985 unknown[906]: fetched user config from "vmware" Nov 4 23:47:03.350264 ignition[906]: fetch-offline: fetch-offline passed Nov 4 23:47:03.350302 ignition[906]: Ignition finished successfully Nov 4 23:47:03.351156 systemd[1]: Finished 
ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 23:47:03.351548 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 4 23:47:03.352094 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 4 23:47:03.373170 ignition[914]: Ignition 2.22.0 Nov 4 23:47:03.373183 ignition[914]: Stage: kargs Nov 4 23:47:03.373277 ignition[914]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:47:03.373283 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 4 23:47:03.373904 ignition[914]: kargs: kargs passed Nov 4 23:47:03.373940 ignition[914]: Ignition finished successfully Nov 4 23:47:03.375541 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 4 23:47:03.376440 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 4 23:47:03.399373 ignition[921]: Ignition 2.22.0 Nov 4 23:47:03.399383 ignition[921]: Stage: disks Nov 4 23:47:03.399471 ignition[921]: no configs at "/usr/lib/ignition/base.d" Nov 4 23:47:03.399476 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 4 23:47:03.400970 ignition[921]: disks: disks passed Nov 4 23:47:03.401151 ignition[921]: Ignition finished successfully Nov 4 23:47:03.402445 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 4 23:47:03.403051 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 4 23:47:03.403324 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 4 23:47:03.403565 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 23:47:03.403789 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 23:47:03.404011 systemd[1]: Reached target basic.target - Basic System. Nov 4 23:47:03.404803 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Nov 4 23:47:03.427772 systemd-fsck[929]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Nov 4 23:47:03.429049 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 4 23:47:03.429900 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 4 23:47:03.543082 kernel: EXT4-fs (sda9): mounted filesystem cfb29ed0-6faf-41a8-b421-3abc514e4975 r/w with ordered data mode. Quota mode: none. Nov 4 23:47:03.543396 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 4 23:47:03.543752 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 4 23:47:03.545323 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 23:47:03.546027 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 4 23:47:03.547304 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 4 23:47:03.547510 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 4 23:47:03.547533 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 23:47:03.552912 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 4 23:47:03.554049 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 4 23:47:03.561088 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (937) Nov 4 23:47:03.563090 kernel: BTRFS info (device sda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:47:03.565087 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:47:03.573088 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 4 23:47:03.573129 kernel: BTRFS info (device sda6): enabling free space tree Nov 4 23:47:03.574504 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 4 23:47:03.602386 initrd-setup-root[961]: cut: /sysroot/etc/passwd: No such file or directory Nov 4 23:47:03.605854 initrd-setup-root[968]: cut: /sysroot/etc/group: No such file or directory Nov 4 23:47:03.608491 initrd-setup-root[975]: cut: /sysroot/etc/shadow: No such file or directory Nov 4 23:47:03.610670 initrd-setup-root[982]: cut: /sysroot/etc/gshadow: No such file or directory Nov 4 23:47:03.695455 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 4 23:47:03.696595 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 4 23:47:03.698166 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 4 23:47:03.710459 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 4 23:47:03.713087 kernel: BTRFS info (device sda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:47:03.726397 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 4 23:47:03.738082 ignition[1050]: INFO : Ignition 2.22.0 Nov 4 23:47:03.738082 ignition[1050]: INFO : Stage: mount Nov 4 23:47:03.738082 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 23:47:03.738082 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 4 23:47:03.738990 ignition[1050]: INFO : mount: mount passed Nov 4 23:47:03.738990 ignition[1050]: INFO : Ignition finished successfully Nov 4 23:47:03.739799 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 4 23:47:03.740937 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 4 23:47:03.840192 systemd-networkd[729]: ens192: Gained IPv6LL Nov 4 23:47:04.544757 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 4 23:47:04.560817 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1061) Nov 4 23:47:04.560846 kernel: BTRFS info (device sda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:47:04.560856 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:47:04.566085 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 4 23:47:04.566112 kernel: BTRFS info (device sda6): enabling free space tree Nov 4 23:47:04.565980 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 4 23:47:04.586468 ignition[1077]: INFO : Ignition 2.22.0 Nov 4 23:47:04.586468 ignition[1077]: INFO : Stage: files Nov 4 23:47:04.586892 ignition[1077]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 23:47:04.586892 ignition[1077]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 4 23:47:04.587178 ignition[1077]: DEBUG : files: compiled without relabeling support, skipping Nov 4 23:47:04.587621 ignition[1077]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 4 23:47:04.587621 ignition[1077]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 4 23:47:04.590264 ignition[1077]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 4 23:47:04.590406 ignition[1077]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 4 23:47:04.590551 ignition[1077]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 4 23:47:04.590482 unknown[1077]: wrote ssh authorized keys file for user: core Nov 4 23:47:04.591749 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 4 23:47:04.591929 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 4 23:47:04.639468 
ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 4 23:47:04.719527 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 4 23:47:04.719527 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 4 23:47:04.719527 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 4 23:47:04.719527 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 4 23:47:04.719527 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 4 23:47:04.719527 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 23:47:04.719527 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 23:47:04.719527 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 23:47:04.719527 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 23:47:04.725562 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 23:47:04.725778 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 23:47:04.725778 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:47:04.727939 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:47:04.728170 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:47:04.728170 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 4 23:47:05.188408 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 4 23:47:05.377985 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:47:05.377985 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Nov 4 23:47:05.379178 ignition[1077]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Nov 4 23:47:05.379178 ignition[1077]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 4 23:47:05.379705 ignition[1077]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 23:47:05.380078 ignition[1077]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 23:47:05.380078 ignition[1077]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 4 23:47:05.380078 ignition[1077]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 4 23:47:05.380078 ignition[1077]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 4 23:47:05.380863 ignition[1077]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 4 23:47:05.380863 ignition[1077]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 4 23:47:05.380863 ignition[1077]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Nov 4 23:47:05.420938 ignition[1077]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 4 23:47:05.424539 ignition[1077]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 4 23:47:05.425132 ignition[1077]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 4 23:47:05.425132 ignition[1077]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 4 23:47:05.425132 ignition[1077]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 4 23:47:05.425132 ignition[1077]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 23:47:05.426770 ignition[1077]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 23:47:05.426770 ignition[1077]: INFO : files: files passed
Nov 4 23:47:05.426770 ignition[1077]: INFO : Ignition finished successfully
Nov 4 23:47:05.427325 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 4 23:47:05.428472 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 4 23:47:05.430176 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 4 23:47:05.442112 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 4 23:47:05.442185 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 4 23:47:05.445734 initrd-setup-root-after-ignition[1111]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:47:05.445734 initrd-setup-root-after-ignition[1111]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:47:05.446808 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:47:05.447727 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 23:47:05.448292 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 4 23:47:05.449010 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 4 23:47:05.485557 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 4 23:47:05.485646 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 4 23:47:05.485942 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 4 23:47:05.486063 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 4 23:47:05.486395 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 4 23:47:05.486931 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 4 23:47:05.510894 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 23:47:05.511777 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 4 23:47:05.530182 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 23:47:05.530365 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:47:05.530890 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:47:05.531220 systemd[1]: Stopped target timers.target - Timer Units.
Nov 4 23:47:05.531490 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 4 23:47:05.531688 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 23:47:05.532150 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 4 23:47:05.532426 systemd[1]: Stopped target basic.target - Basic System.
Nov 4 23:47:05.532672 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 4 23:47:05.532917 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 23:47:05.533261 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 4 23:47:05.533571 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 23:47:05.533859 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 4 23:47:05.534152 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 23:47:05.534430 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 4 23:47:05.534729 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 4 23:47:05.534982 systemd[1]: Stopped target swap.target - Swaps.
Nov 4 23:47:05.535250 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 4 23:47:05.535451 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 23:47:05.535846 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:47:05.536127 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:47:05.536435 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 4 23:47:05.536653 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:47:05.537001 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 4 23:47:05.537151 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 4 23:47:05.537443 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 4 23:47:05.537554 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 23:47:05.537817 systemd[1]: Stopped target paths.target - Path Units.
Nov 4 23:47:05.537963 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 4 23:47:05.542166 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:47:05.542425 systemd[1]: Stopped target slices.target - Slice Units.
Nov 4 23:47:05.542564 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 4 23:47:05.542701 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 4 23:47:05.542767 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 23:47:05.542906 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 4 23:47:05.542961 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 23:47:05.543165 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 4 23:47:05.543246 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 23:47:05.543419 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 4 23:47:05.543486 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 4 23:47:05.545630 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 4 23:47:05.547197 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 4 23:47:05.547481 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 4 23:47:05.547678 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:47:05.548034 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 4 23:47:05.548244 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:47:05.548581 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 4 23:47:05.548807 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 23:47:05.553951 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 4 23:47:05.556013 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 4 23:47:05.569367 ignition[1136]: INFO : Ignition 2.22.0
Nov 4 23:47:05.569367 ignition[1136]: INFO : Stage: umount
Nov 4 23:47:05.569367 ignition[1136]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:47:05.569367 ignition[1136]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 4 23:47:05.569367 ignition[1136]: INFO : umount: umount passed
Nov 4 23:47:05.569367 ignition[1136]: INFO : Ignition finished successfully
Nov 4 23:47:05.568988 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 4 23:47:05.571334 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 4 23:47:05.571438 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 4 23:47:05.571884 systemd[1]: Stopped target network.target - Network.
Nov 4 23:47:05.572004 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 4 23:47:05.572038 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 4 23:47:05.572198 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 4 23:47:05.572224 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 4 23:47:05.572348 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 4 23:47:05.572374 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 4 23:47:05.572526 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 4 23:47:05.572549 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 4 23:47:05.572761 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 4 23:47:05.573041 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 4 23:47:05.579151 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 4 23:47:05.579244 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 4 23:47:05.580561 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 4 23:47:05.580752 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 4 23:47:05.580778 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:47:05.581471 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 4 23:47:05.581577 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 4 23:47:05.581606 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 23:47:05.581750 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Nov 4 23:47:05.581772 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Nov 4 23:47:05.581913 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:47:05.585801 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 4 23:47:05.585868 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 4 23:47:05.586953 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 4 23:47:05.587016 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:47:05.587467 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 4 23:47:05.587496 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:47:05.592917 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 4 23:47:05.593194 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:47:05.593658 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 4 23:47:05.593689 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:47:05.593973 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 4 23:47:05.593993 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:47:05.594187 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 4 23:47:05.594217 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 23:47:05.594483 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 4 23:47:05.594508 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 4 23:47:05.594788 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 4 23:47:05.594814 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 23:47:05.595658 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 4 23:47:05.595767 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 4 23:47:05.595795 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:47:05.595925 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 4 23:47:05.595948 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:47:05.596083 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 4 23:47:05.596110 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 23:47:05.596230 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 4 23:47:05.596252 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:47:05.596371 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:47:05.596392 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:47:05.611641 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 4 23:47:05.611710 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 4 23:47:05.632504 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 4 23:47:05.632591 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 4 23:47:05.856358 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 4 23:47:05.856478 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 4 23:47:05.857033 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 4 23:47:05.857204 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 4 23:47:05.857252 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 4 23:47:05.858171 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 4 23:47:05.876488 systemd[1]: Switching root.
Nov 4 23:47:05.902442 systemd-journald[333]: Journal stopped
Nov 4 23:47:06.880475 systemd-journald[333]: Received SIGTERM from PID 1 (systemd).
Nov 4 23:47:06.880513 kernel: SELinux: policy capability network_peer_controls=1
Nov 4 23:47:06.880524 kernel: SELinux: policy capability open_perms=1
Nov 4 23:47:06.880531 kernel: SELinux: policy capability extended_socket_class=1
Nov 4 23:47:06.880538 kernel: SELinux: policy capability always_check_network=0
Nov 4 23:47:06.880546 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 4 23:47:06.880554 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 4 23:47:06.880561 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 4 23:47:06.880567 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 4 23:47:06.880575 kernel: SELinux: policy capability userspace_initial_context=0
Nov 4 23:47:06.880582 kernel: audit: type=1403 audit(1762300026.316:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 4 23:47:06.880590 systemd[1]: Successfully loaded SELinux policy in 65.210ms.
Nov 4 23:47:06.880600 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.090ms.
Nov 4 23:47:06.880610 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 23:47:06.880619 systemd[1]: Detected virtualization vmware.
Nov 4 23:47:06.880628 systemd[1]: Detected architecture x86-64.
Nov 4 23:47:06.880637 systemd[1]: Detected first boot.
Nov 4 23:47:06.880646 systemd[1]: Initializing machine ID from random generator.
Nov 4 23:47:06.880654 zram_generator::config[1180]: No configuration found.
Nov 4 23:47:06.880797 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Nov 4 23:47:06.880812 kernel: Guest personality initialized and is active
Nov 4 23:47:06.880819 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 4 23:47:06.880826 kernel: Initialized host personality
Nov 4 23:47:06.880834 kernel: NET: Registered PF_VSOCK protocol family
Nov 4 23:47:06.880842 systemd[1]: Populated /etc with preset unit settings.
Nov 4 23:47:06.880852 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Nov 4 23:47:06.880864 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Nov 4 23:47:06.880877 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 4 23:47:06.880885 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 4 23:47:06.880893 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 4 23:47:06.880902 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 4 23:47:06.880910 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 4 23:47:06.880922 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 4 23:47:06.880934 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 4 23:47:06.880945 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 4 23:47:06.880954 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 4 23:47:06.880962 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 4 23:47:06.880970 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 4 23:47:06.880979 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:47:06.880987 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:47:06.880997 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 4 23:47:06.881005 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 4 23:47:06.881013 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 4 23:47:06.881022 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 23:47:06.881031 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 4 23:47:06.881042 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:47:06.881054 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:47:06.881062 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 4 23:47:06.881080 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 4 23:47:06.881089 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 4 23:47:06.881097 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 4 23:47:06.881108 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:47:06.881120 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 23:47:06.881129 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 23:47:06.881138 systemd[1]: Reached target swap.target - Swaps.
Nov 4 23:47:06.881145 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 4 23:47:06.881154 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 4 23:47:06.881164 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 4 23:47:06.887321 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:47:06.887341 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:47:06.887350 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:47:06.887361 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 4 23:47:06.887369 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 4 23:47:06.887376 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 4 23:47:06.887384 systemd[1]: Mounting media.mount - External Media Directory...
Nov 4 23:47:06.887392 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:47:06.887400 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 4 23:47:06.887408 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 4 23:47:06.887417 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 4 23:47:06.887426 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 4 23:47:06.887433 systemd[1]: Reached target machines.target - Containers.
Nov 4 23:47:06.887441 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 4 23:47:06.887448 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Nov 4 23:47:06.887456 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 23:47:06.887463 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 4 23:47:06.887472 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 23:47:06.887480 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 23:47:06.887488 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 23:47:06.887495 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 4 23:47:06.887504 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 23:47:06.887511 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 4 23:47:06.887520 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 4 23:47:06.887528 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 4 23:47:06.887536 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 4 23:47:06.887543 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 4 23:47:06.887552 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:47:06.887559 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 23:47:06.887567 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 23:47:06.887576 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 23:47:06.887584 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 4 23:47:06.887591 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 4 23:47:06.887599 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 23:47:06.887607 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:47:06.887615 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 4 23:47:06.887624 kernel: fuse: init (API version 7.41)
Nov 4 23:47:06.887631 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 4 23:47:06.887639 systemd[1]: Mounted media.mount - External Media Directory.
Nov 4 23:47:06.887646 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 4 23:47:06.887654 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 4 23:47:06.887662 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 4 23:47:06.887669 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:47:06.887678 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 4 23:47:06.887686 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 4 23:47:06.887694 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 23:47:06.887701 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 23:47:06.887709 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 4 23:47:06.887716 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 4 23:47:06.887724 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 23:47:06.887733 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 23:47:06.887741 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 23:47:06.887749 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 23:47:06.887756 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 4 23:47:06.887764 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 4 23:47:06.887772 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 23:47:06.887780 kernel: ACPI: bus type drm_connector registered
Nov 4 23:47:06.887788 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 23:47:06.887796 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 23:47:06.887804 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 23:47:06.887814 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:47:06.887823 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:47:06.887831 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 4 23:47:06.887839 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 4 23:47:06.887847 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 4 23:47:06.887857 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 23:47:06.887865 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 4 23:47:06.887873 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 4 23:47:06.887881 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 23:47:06.887889 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 4 23:47:06.887919 systemd-journald[1268]: Collecting audit messages is disabled.
Nov 4 23:47:06.887937 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:47:06.887947 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 4 23:47:06.887955 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 23:47:06.887964 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 4 23:47:06.887972 systemd-journald[1268]: Journal started
Nov 4 23:47:06.887987 systemd-journald[1268]: Runtime Journal (/run/log/journal/aa34d0dd7935425a89987a7805324e13) is 4.8M, max 38.5M, 33.7M free.
Nov 4 23:47:06.665365 systemd[1]: Queued start job for default target multi-user.target.
Nov 4 23:47:06.672034 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 4 23:47:06.672321 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 4 23:47:06.888542 jq[1250]: true
Nov 4 23:47:06.900514 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 23:47:06.900551 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 4 23:47:06.900563 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 23:47:06.900360 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 4 23:47:06.900654 jq[1289]: true
Nov 4 23:47:06.910822 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 4 23:47:06.918849 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 4 23:47:06.924695 kernel: loop1: detected capacity change from 0 to 229808
Nov 4 23:47:06.923941 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:47:06.926018 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 4 23:47:06.926654 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 4 23:47:06.930248 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 4 23:47:06.937622 systemd-tmpfiles[1294]: ACLs are not supported, ignoring.
Nov 4 23:47:06.937633 systemd-tmpfiles[1294]: ACLs are not supported, ignoring.
Nov 4 23:47:06.944244 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 23:47:06.947186 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 4 23:47:06.952526 systemd-journald[1268]: Time spent on flushing to /var/log/journal/aa34d0dd7935425a89987a7805324e13 is 40.291ms for 1756 entries.
Nov 4 23:47:06.952526 systemd-journald[1268]: System Journal (/var/log/journal/aa34d0dd7935425a89987a7805324e13) is 8M, max 588.1M, 580.1M free.
Nov 4 23:47:06.999114 systemd-journald[1268]: Received client request to flush runtime journal.
Nov 4 23:47:06.999148 kernel: loop2: detected capacity change from 0 to 110984
Nov 4 23:47:06.958989 ignition[1298]: Ignition 2.22.0
Nov 4 23:47:06.954109 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 4 23:47:06.960318 ignition[1298]: deleting config from guestinfo properties
Nov 4 23:47:06.973480 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Nov 4 23:47:06.964484 ignition[1298]: Successfully deleted config
Nov 4 23:47:06.996115 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:47:07.001101 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 4 23:47:07.026102 kernel: loop3: detected capacity change from 0 to 128048
Nov 4 23:47:07.106580 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 4 23:47:07.108133 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 23:47:07.110179 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 23:47:07.126619 systemd-tmpfiles[1351]: ACLs are not supported, ignoring.
Nov 4 23:47:07.126633 systemd-tmpfiles[1351]: ACLs are not supported, ignoring.
Nov 4 23:47:07.129466 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:47:07.133095 kernel: loop4: detected capacity change from 0 to 2960 Nov 4 23:47:07.146762 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 4 23:47:07.176411 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 4 23:47:07.185092 kernel: loop5: detected capacity change from 0 to 229808 Nov 4 23:47:07.224914 systemd-resolved[1350]: Positive Trust Anchors: Nov 4 23:47:07.225146 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 23:47:07.225180 systemd-resolved[1350]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 23:47:07.225236 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 23:47:07.230091 kernel: loop6: detected capacity change from 0 to 110984 Nov 4 23:47:07.230405 systemd-resolved[1350]: Defaulting to hostname 'linux'. Nov 4 23:47:07.231359 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 23:47:07.231545 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 23:47:07.245109 kernel: loop7: detected capacity change from 0 to 128048 Nov 4 23:47:07.269103 kernel: loop1: detected capacity change from 0 to 2960 Nov 4 23:47:07.275840 (sd-merge)[1361]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-vmware.raw'. Nov 4 23:47:07.278411 (sd-merge)[1361]: Merged extensions into '/usr'. 
Nov 4 23:47:07.282221 systemd[1]: Reload requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)... Nov 4 23:47:07.282233 systemd[1]: Reloading... Nov 4 23:47:07.349110 zram_generator::config[1391]: No configuration found. Nov 4 23:47:07.441501 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 4 23:47:07.492248 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 4 23:47:07.492424 systemd[1]: Reloading finished in 209 ms. Nov 4 23:47:07.517691 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 4 23:47:07.522156 systemd[1]: Starting ensure-sysext.service... Nov 4 23:47:07.525158 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 23:47:07.537309 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 4 23:47:07.537332 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 4 23:47:07.537485 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 4 23:47:07.537657 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 4 23:47:07.538179 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 4 23:47:07.538346 systemd-tmpfiles[1447]: ACLs are not supported, ignoring. Nov 4 23:47:07.538383 systemd-tmpfiles[1447]: ACLs are not supported, ignoring. Nov 4 23:47:07.545015 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 4 23:47:07.548192 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Nov 4 23:47:07.553195 systemd[1]: Reload requested from client PID 1446 ('systemctl') (unit ensure-sysext.service)... Nov 4 23:47:07.553208 systemd[1]: Reloading... Nov 4 23:47:07.556446 systemd-tmpfiles[1447]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 23:47:07.556450 systemd-tmpfiles[1447]: Skipping /boot Nov 4 23:47:07.562479 systemd-tmpfiles[1447]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 23:47:07.562569 systemd-tmpfiles[1447]: Skipping /boot Nov 4 23:47:07.585791 systemd-udevd[1450]: Using default interface naming scheme 'v257'. Nov 4 23:47:07.619388 zram_generator::config[1478]: No configuration found. Nov 4 23:47:07.731107 kernel: mousedev: PS/2 mouse device common for all mice Nov 4 23:47:07.733129 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 4 23:47:07.746101 kernel: ACPI: button: Power Button [PWRF] Nov 4 23:47:07.763714 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 4 23:47:07.851627 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 4 23:47:07.852068 systemd[1]: Reloading finished in 298 ms. Nov 4 23:47:07.861848 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:47:07.868767 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:47:07.893116 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Nov 4 23:47:07.902817 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:47:07.905438 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:47:07.908305 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Nov 4 23:47:07.910910 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 4 23:47:07.912860 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:47:07.921490 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:47:07.924905 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 23:47:07.925161 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:47:07.925241 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:47:07.930498 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 4 23:47:07.953549 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 23:47:07.956707 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 4 23:47:07.956875 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:47:07.958528 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:47:07.958700 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:47:07.959312 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:47:07.959652 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:47:07.960049 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:47:07.960609 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Nov 4 23:47:07.961785 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 23:47:07.961936 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 23:47:07.964703 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:47:07.968458 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:47:07.971184 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:47:07.974180 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 23:47:07.974388 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:47:07.974475 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:47:07.974546 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:47:07.977860 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:47:07.982144 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 23:47:07.982396 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:47:07.982474 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Nov 4 23:47:07.982585 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:47:07.988633 systemd[1]: Finished ensure-sysext.service. Nov 4 23:47:07.992306 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 4 23:47:08.007002 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 4 23:47:08.020881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 4 23:47:08.025868 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 4 23:47:08.033411 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 4 23:47:08.042495 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 4 23:47:08.042851 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 4 23:47:08.046132 (udev-worker)[1529]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Nov 4 23:47:08.057322 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:47:08.057812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:47:08.058639 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:47:08.069087 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:47:08.076921 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:47:08.078169 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:47:08.078384 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 23:47:08.078777 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Nov 4 23:47:08.078956 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 23:47:08.080884 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 4 23:47:08.086190 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 23:47:08.086235 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 23:47:08.100778 augenrules[1630]: No rules Nov 4 23:47:08.105688 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:47:08.106584 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 23:47:08.198402 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 4 23:47:08.198635 systemd[1]: Reached target time-set.target - System Time Set. Nov 4 23:47:08.231811 systemd-networkd[1587]: lo: Link UP Nov 4 23:47:08.231817 systemd-networkd[1587]: lo: Gained carrier Nov 4 23:47:08.233507 systemd-networkd[1587]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Nov 4 23:47:08.233643 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 23:47:08.233841 systemd[1]: Reached target network.target - Network. Nov 4 23:47:08.236382 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 4 23:47:08.238327 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 4 23:47:08.238549 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 4 23:47:08.239745 systemd-networkd[1587]: ens192: Link UP Nov 4 23:47:08.240189 systemd-networkd[1587]: ens192: Gained carrier Nov 4 23:47:08.245419 systemd-timesyncd[1598]: Network configuration changed, trying to establish connection. 
Nov 4 23:47:08.251617 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 4 23:47:08.252275 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:47:08.281323 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 4 23:47:08.779989 ldconfig[1576]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 4 23:47:08.781807 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 4 23:47:08.783160 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 4 23:47:08.795569 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 4 23:47:08.795918 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 23:47:08.796098 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 4 23:47:08.796239 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 4 23:47:08.796364 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 4 23:47:08.796572 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 4 23:47:08.796728 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 4 23:47:08.796846 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 4 23:47:08.796961 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 4 23:47:08.796979 systemd[1]: Reached target paths.target - Path Units. Nov 4 23:47:08.797109 systemd[1]: Reached target timers.target - Timer Units. Nov 4 23:47:08.797824 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Nov 4 23:47:08.799433 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 4 23:47:08.801194 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 4 23:47:08.801403 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 4 23:47:08.801536 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 4 23:47:08.804395 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 4 23:47:08.805015 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 4 23:47:08.805593 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 4 23:47:08.806253 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 23:47:08.806366 systemd[1]: Reached target basic.target - Basic System. Nov 4 23:47:08.806506 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 4 23:47:08.806526 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 4 23:47:08.807580 systemd[1]: Starting containerd.service - containerd container runtime... Nov 4 23:47:08.808635 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 4 23:47:08.811240 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 4 23:47:08.815693 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 4 23:47:08.818262 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 4 23:47:08.818404 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 4 23:47:08.819648 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Nov 4 23:47:08.822858 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 4 23:47:08.829046 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 4 23:47:08.832520 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 4 23:47:08.833676 jq[1654]: false Nov 4 23:47:08.835624 google_oslogin_nss_cache[1656]: oslogin_cache_refresh[1656]: Refreshing passwd entry cache Nov 4 23:47:08.835626 oslogin_cache_refresh[1656]: Refreshing passwd entry cache Nov 4 23:47:08.837996 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 4 23:47:08.844217 google_oslogin_nss_cache[1656]: oslogin_cache_refresh[1656]: Failure getting users, quitting Nov 4 23:47:08.844210 oslogin_cache_refresh[1656]: Failure getting users, quitting Nov 4 23:47:08.844320 google_oslogin_nss_cache[1656]: oslogin_cache_refresh[1656]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 23:47:08.844320 google_oslogin_nss_cache[1656]: oslogin_cache_refresh[1656]: Refreshing group entry cache Nov 4 23:47:08.844223 oslogin_cache_refresh[1656]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 23:47:08.844256 oslogin_cache_refresh[1656]: Refreshing group entry cache Nov 4 23:47:08.845282 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 4 23:47:08.845442 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 4 23:47:08.845972 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 4 23:47:08.847216 systemd[1]: Starting update-engine.service - Update Engine... 
Nov 4 23:47:08.850313 google_oslogin_nss_cache[1656]: oslogin_cache_refresh[1656]: Failure getting groups, quitting Nov 4 23:47:08.850313 google_oslogin_nss_cache[1656]: oslogin_cache_refresh[1656]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 23:47:08.850306 oslogin_cache_refresh[1656]: Failure getting groups, quitting Nov 4 23:47:08.850316 oslogin_cache_refresh[1656]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 23:47:08.852158 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 4 23:47:08.854495 extend-filesystems[1655]: Found /dev/sda6 Nov 4 23:47:08.860155 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Nov 4 23:47:08.860705 extend-filesystems[1655]: Found /dev/sda9 Nov 4 23:47:08.861936 extend-filesystems[1655]: Checking size of /dev/sda9 Nov 4 23:47:08.866685 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 4 23:47:08.867226 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 4 23:47:08.867448 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 4 23:47:08.867647 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 4 23:47:08.867789 jq[1670]: true Nov 4 23:47:08.867991 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 4 23:47:08.868299 systemd[1]: motdgen.service: Deactivated successfully. Nov 4 23:47:08.868567 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 4 23:47:08.869359 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 4 23:47:08.870112 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 4 23:47:08.882198 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. 
Nov 4 23:47:08.888085 update_engine[1669]: I20251104 23:47:08.886888 1669 main.cc:92] Flatcar Update Engine starting Nov 4 23:47:08.889239 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Nov 4 23:47:08.900837 (ntainerd)[1684]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 4 23:47:08.904293 extend-filesystems[1655]: Resized partition /dev/sda9 Nov 4 23:47:08.911164 unknown[1698]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Nov 4 23:47:08.911721 unknown[1698]: Core dump limit set to -1 Nov 4 23:47:08.917819 jq[1681]: true Nov 4 23:47:08.926538 extend-filesystems[1707]: resize2fs 1.47.3 (8-Jul-2025) Nov 4 23:47:08.937892 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Nov 4 23:47:08.942962 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 1635323 blocks Nov 4 23:47:08.943034 kernel: EXT4-fs (sda9): resized filesystem to 1635323 Nov 4 23:47:08.943054 tar[1680]: linux-amd64/LICENSE Nov 4 23:47:08.950181 tar[1680]: linux-amd64/helm Nov 4 23:47:08.950957 extend-filesystems[1707]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 4 23:47:08.950957 extend-filesystems[1707]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 4 23:47:08.950957 extend-filesystems[1707]: The filesystem on /dev/sda9 is now 1635323 (4k) blocks long. Nov 4 23:47:08.952866 extend-filesystems[1655]: Resized filesystem in /dev/sda9 Nov 4 23:47:08.953264 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 4 23:47:08.953429 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 4 23:47:08.973335 dbus-daemon[1652]: [system] SELinux support is enabled Nov 4 23:47:08.973562 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Nov 4 23:47:08.975705 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 4 23:47:08.975724 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 4 23:47:08.975864 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 4 23:47:08.975875 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 4 23:47:08.986553 systemd[1]: Started update-engine.service - Update Engine. Nov 4 23:47:08.989475 update_engine[1669]: I20251104 23:47:08.988452 1669 update_check_scheduler.cc:74] Next update check in 9m8s Nov 4 23:47:08.994918 systemd-logind[1666]: Watching system buttons on /dev/input/event2 (Power Button) Nov 4 23:47:08.994934 systemd-logind[1666]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 4 23:47:08.996314 systemd-logind[1666]: New seat seat0. Nov 4 23:47:08.999996 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 4 23:47:09.000242 systemd[1]: Started systemd-logind.service - User Login Management. Nov 4 23:47:09.040338 bash[1731]: Updated "/home/core/.ssh/authorized_keys" Nov 4 23:47:09.040092 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 4 23:47:09.041899 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 4 23:47:09.177362 sshd_keygen[1682]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 4 23:47:09.206469 locksmithd[1725]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 4 23:47:09.225669 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Nov 4 23:47:09.229292 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 4 23:47:09.233660 containerd[1684]: time="2025-11-04T23:47:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 4 23:47:09.234929 containerd[1684]: time="2025-11-04T23:47:09.234902454Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 4 23:47:09.246444 systemd[1]: issuegen.service: Deactivated successfully. Nov 4 23:47:09.252231 containerd[1684]: time="2025-11-04T23:47:09.248140759Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.725µs" Nov 4 23:47:09.252231 containerd[1684]: time="2025-11-04T23:47:09.248160700Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 4 23:47:09.252231 containerd[1684]: time="2025-11-04T23:47:09.248173511Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 4 23:47:09.252231 containerd[1684]: time="2025-11-04T23:47:09.248312551Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 4 23:47:09.252231 containerd[1684]: time="2025-11-04T23:47:09.248324677Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 4 23:47:09.252231 containerd[1684]: time="2025-11-04T23:47:09.248340611Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 23:47:09.252231 containerd[1684]: time="2025-11-04T23:47:09.248375378Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 23:47:09.252231 containerd[1684]: 
time="2025-11-04T23:47:09.248382820Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 23:47:09.252231 containerd[1684]: time="2025-11-04T23:47:09.248501553Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 23:47:09.252231 containerd[1684]: time="2025-11-04T23:47:09.248510655Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 23:47:09.252231 containerd[1684]: time="2025-11-04T23:47:09.248517169Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 23:47:09.252231 containerd[1684]: time="2025-11-04T23:47:09.248521566Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 4 23:47:09.246599 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Nov 4 23:47:09.252576 containerd[1684]: time="2025-11-04T23:47:09.248564463Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 4 23:47:09.252576 containerd[1684]: time="2025-11-04T23:47:09.248695481Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 23:47:09.252576 containerd[1684]: time="2025-11-04T23:47:09.248713197Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 23:47:09.252576 containerd[1684]: time="2025-11-04T23:47:09.248719344Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 4 23:47:09.252576 containerd[1684]: time="2025-11-04T23:47:09.248899714Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 4 23:47:09.252576 containerd[1684]: time="2025-11-04T23:47:09.250108538Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 4 23:47:09.252576 containerd[1684]: time="2025-11-04T23:47:09.250171272Z" level=info msg="metadata content store policy set" policy=shared Nov 4 23:47:09.248494 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Nov 4 23:47:09.252738 containerd[1684]: time="2025-11-04T23:47:09.252697786Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 4 23:47:09.252753 containerd[1684]: time="2025-11-04T23:47:09.252738584Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 4 23:47:09.252772 containerd[1684]: time="2025-11-04T23:47:09.252753190Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 4 23:47:09.252772 containerd[1684]: time="2025-11-04T23:47:09.252761578Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 4 23:47:09.252772 containerd[1684]: time="2025-11-04T23:47:09.252769700Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 4 23:47:09.252812 containerd[1684]: time="2025-11-04T23:47:09.252776627Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 4 23:47:09.252826 containerd[1684]: time="2025-11-04T23:47:09.252809617Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 4 23:47:09.252826 containerd[1684]: time="2025-11-04T23:47:09.252818664Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 4 23:47:09.252855 containerd[1684]: time="2025-11-04T23:47:09.252825010Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 4 23:47:09.252855 containerd[1684]: time="2025-11-04T23:47:09.252831267Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 4 23:47:09.252855 containerd[1684]: time="2025-11-04T23:47:09.252836534Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 4 23:47:09.252855 containerd[1684]: time="2025-11-04T23:47:09.252847649Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 4 23:47:09.253475 containerd[1684]: time="2025-11-04T23:47:09.252922430Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 4 23:47:09.253475 containerd[1684]: time="2025-11-04T23:47:09.252939153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 4 23:47:09.253475 containerd[1684]: time="2025-11-04T23:47:09.252949530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 4 23:47:09.253475 containerd[1684]: time="2025-11-04T23:47:09.252958502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 4 23:47:09.253475 containerd[1684]: time="2025-11-04T23:47:09.252965195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 4 23:47:09.253475 containerd[1684]: time="2025-11-04T23:47:09.252971880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 4 23:47:09.253475 containerd[1684]: time="2025-11-04T23:47:09.252977643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 4 23:47:09.253475 containerd[1684]: time="2025-11-04T23:47:09.252983193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 4 23:47:09.253475 containerd[1684]: time="2025-11-04T23:47:09.252989475Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 4 23:47:09.253475 containerd[1684]: time="2025-11-04T23:47:09.252995177Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 4 23:47:09.253475 containerd[1684]: time="2025-11-04T23:47:09.253000992Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 4 23:47:09.253475 containerd[1684]: time="2025-11-04T23:47:09.253039026Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 4 23:47:09.253475 containerd[1684]: time="2025-11-04T23:47:09.253047539Z" level=info msg="Start snapshots syncer"
Nov 4 23:47:09.253475 containerd[1684]: time="2025-11-04T23:47:09.253067549Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 4 23:47:09.253674 containerd[1684]: time="2025-11-04T23:47:09.253262970Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 4 23:47:09.253674 containerd[1684]: time="2025-11-04T23:47:09.253297862Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 4 23:47:09.253757 containerd[1684]: time="2025-11-04T23:47:09.253444724Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 4 23:47:09.254066 containerd[1684]: time="2025-11-04T23:47:09.254051263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 4 23:47:09.254066 containerd[1684]: time="2025-11-04T23:47:09.254129610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 4 23:47:09.254066 containerd[1684]: time="2025-11-04T23:47:09.254141352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 4 23:47:09.254066 containerd[1684]: time="2025-11-04T23:47:09.254153684Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 4 23:47:09.254066 containerd[1684]: time="2025-11-04T23:47:09.254161777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 4 23:47:09.254066 containerd[1684]: time="2025-11-04T23:47:09.254167944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 4 23:47:09.254066 containerd[1684]: time="2025-11-04T23:47:09.254176100Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 4 23:47:09.254066 containerd[1684]: time="2025-11-04T23:47:09.254200051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 4 23:47:09.254326 containerd[1684]: time="2025-11-04T23:47:09.254211892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 4 23:47:09.254326 containerd[1684]: time="2025-11-04T23:47:09.254218768Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 4 23:47:09.254326 containerd[1684]: time="2025-11-04T23:47:09.254247679Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 4 23:47:09.254326 containerd[1684]: time="2025-11-04T23:47:09.254258934Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 4 23:47:09.254326 containerd[1684]: time="2025-11-04T23:47:09.254264528Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 4 23:47:09.254326 containerd[1684]: time="2025-11-04T23:47:09.254270368Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 4 23:47:09.254326 containerd[1684]: time="2025-11-04T23:47:09.254274686Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 4 23:47:09.254326 containerd[1684]: time="2025-11-04T23:47:09.254279979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 4 23:47:09.254326 containerd[1684]: time="2025-11-04T23:47:09.254285956Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 4 23:47:09.254326 containerd[1684]: time="2025-11-04T23:47:09.254325038Z" level=info msg="runtime interface created"
Nov 4 23:47:09.254457 containerd[1684]: time="2025-11-04T23:47:09.254329684Z" level=info msg="created NRI interface"
Nov 4 23:47:09.254457 containerd[1684]: time="2025-11-04T23:47:09.254334863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 4 23:47:09.254457 containerd[1684]: time="2025-11-04T23:47:09.254343815Z" level=info msg="Connect containerd service"
Nov 4 23:47:09.254457 containerd[1684]: time="2025-11-04T23:47:09.254363219Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 4 23:47:09.255377 containerd[1684]: time="2025-11-04T23:47:09.255230876Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 4 23:47:09.292526 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 4 23:47:09.297247 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 4 23:47:09.298567 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 4 23:47:09.299061 systemd[1]: Reached target getty.target - Login Prompts.
Nov 4 23:47:09.393241 containerd[1684]: time="2025-11-04T23:47:09.393205363Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 4 23:47:09.393313 containerd[1684]: time="2025-11-04T23:47:09.393251479Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 4 23:47:09.393313 containerd[1684]: time="2025-11-04T23:47:09.393270603Z" level=info msg="Start subscribing containerd event"
Nov 4 23:47:09.393313 containerd[1684]: time="2025-11-04T23:47:09.393287102Z" level=info msg="Start recovering state"
Nov 4 23:47:09.393355 containerd[1684]: time="2025-11-04T23:47:09.393339923Z" level=info msg="Start event monitor"
Nov 4 23:47:09.393355 containerd[1684]: time="2025-11-04T23:47:09.393347678Z" level=info msg="Start cni network conf syncer for default"
Nov 4 23:47:09.393355 containerd[1684]: time="2025-11-04T23:47:09.393352433Z" level=info msg="Start streaming server"
Nov 4 23:47:09.393405 containerd[1684]: time="2025-11-04T23:47:09.393359712Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 4 23:47:09.393405 containerd[1684]: time="2025-11-04T23:47:09.393364179Z" level=info msg="runtime interface starting up..."
Nov 4 23:47:09.393405 containerd[1684]: time="2025-11-04T23:47:09.393367257Z" level=info msg="starting plugins..."
Nov 4 23:47:09.393405 containerd[1684]: time="2025-11-04T23:47:09.393374268Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 4 23:47:09.393466 containerd[1684]: time="2025-11-04T23:47:09.393434025Z" level=info msg="containerd successfully booted in 0.160024s"
Nov 4 23:47:09.393577 systemd[1]: Started containerd.service - containerd container runtime.
Nov 4 23:47:09.396885 tar[1680]: linux-amd64/README.md
Nov 4 23:47:09.410125 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 4 23:47:09.856189 systemd-networkd[1587]: ens192: Gained IPv6LL
Nov 4 23:47:09.856596 systemd-timesyncd[1598]: Network configuration changed, trying to establish connection.
Nov 4 23:47:09.858158 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 4 23:47:09.858796 systemd[1]: Reached target network-online.target - Network is Online.
Nov 4 23:47:09.860513 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
Nov 4 23:47:09.861958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:47:09.868246 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 4 23:47:09.900397 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 4 23:47:09.927166 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 4 23:47:09.927418 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
Nov 4 23:47:09.928064 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 4 23:47:10.806628 systemd-timesyncd[1598]: Network configuration changed, trying to establish connection.
Nov 4 23:47:11.908845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:47:11.909696 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 4 23:47:11.910717 systemd[1]: Startup finished in 2.239s (kernel) + 4.971s (initrd) + 5.658s (userspace) = 12.869s.
Nov 4 23:47:11.922450 (kubelet)[1859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 23:47:12.297295 login[1812]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 4 23:47:12.299231 login[1813]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 4 23:47:12.303684 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 4 23:47:12.304693 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 4 23:47:12.313105 systemd-logind[1666]: New session 1 of user core.
Nov 4 23:47:12.319537 systemd-logind[1666]: New session 2 of user core.
Nov 4 23:47:12.324867 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 4 23:47:12.328259 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 4 23:47:12.338416 (systemd)[1870]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 4 23:47:12.341163 systemd-logind[1666]: New session c1 of user core.
Nov 4 23:47:12.703996 systemd[1870]: Queued start job for default target default.target.
Nov 4 23:47:12.719397 systemd[1870]: Created slice app.slice - User Application Slice.
Nov 4 23:47:12.719425 systemd[1870]: Reached target paths.target - Paths.
Nov 4 23:47:12.719464 systemd[1870]: Reached target timers.target - Timers.
Nov 4 23:47:12.720396 systemd[1870]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 4 23:47:12.729130 systemd[1870]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 4 23:47:12.729274 systemd[1870]: Reached target sockets.target - Sockets.
Nov 4 23:47:12.729381 systemd[1870]: Reached target basic.target - Basic System.
Nov 4 23:47:12.729460 systemd[1870]: Reached target default.target - Main User Target.
Nov 4 23:47:12.729487 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 4 23:47:12.729579 systemd[1870]: Startup finished in 382ms.
Nov 4 23:47:12.731884 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 4 23:47:12.734723 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 4 23:47:13.768444 kubelet[1859]: E1104 23:47:13.768404 1859 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 23:47:13.770104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 23:47:13.770229 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 23:47:13.770676 systemd[1]: kubelet.service: Consumed 788ms CPU time, 269.6M memory peak.
Nov 4 23:47:23.881932 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 4 23:47:23.883341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:47:24.522935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:47:24.525498 (kubelet)[1909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 23:47:24.610143 kubelet[1909]: E1104 23:47:24.610105 1909 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 23:47:24.612796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 23:47:24.612939 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 23:47:24.613318 systemd[1]: kubelet.service: Consumed 107ms CPU time, 110.6M memory peak.
Nov 4 23:47:34.631757 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 4 23:47:34.633184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:47:34.985030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:47:34.990366 (kubelet)[1924]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 23:47:35.014136 kubelet[1924]: E1104 23:47:35.014098 1924 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 23:47:35.015705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 23:47:35.015789 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 23:47:35.016256 systemd[1]: kubelet.service: Consumed 95ms CPU time, 108.1M memory peak.
Nov 4 23:47:39.103997 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 4 23:47:39.105275 systemd[1]: Started sshd@0-139.178.70.101:22-139.178.68.195:41156.service - OpenSSH per-connection server daemon (139.178.68.195:41156).
Nov 4 23:47:39.271324 sshd[1932]: Accepted publickey for core from 139.178.68.195 port 41156 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc
Nov 4 23:47:39.272216 sshd-session[1932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:47:39.275124 systemd-logind[1666]: New session 3 of user core.
Nov 4 23:47:39.283581 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 4 23:47:39.340360 systemd[1]: Started sshd@1-139.178.70.101:22-139.178.68.195:41168.service - OpenSSH per-connection server daemon (139.178.68.195:41168).
Nov 4 23:47:39.385181 sshd[1938]: Accepted publickey for core from 139.178.68.195 port 41168 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc
Nov 4 23:47:39.385512 sshd-session[1938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:47:39.388450 systemd-logind[1666]: New session 4 of user core.
Nov 4 23:47:39.396190 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 4 23:47:39.444566 sshd[1941]: Connection closed by 139.178.68.195 port 41168
Nov 4 23:47:39.445210 sshd-session[1938]: pam_unix(sshd:session): session closed for user core
Nov 4 23:47:39.451569 systemd[1]: sshd@1-139.178.70.101:22-139.178.68.195:41168.service: Deactivated successfully.
Nov 4 23:47:39.452682 systemd[1]: session-4.scope: Deactivated successfully.
Nov 4 23:47:39.453202 systemd-logind[1666]: Session 4 logged out. Waiting for processes to exit.
Nov 4 23:47:39.455107 systemd[1]: Started sshd@2-139.178.70.101:22-139.178.68.195:41172.service - OpenSSH per-connection server daemon (139.178.68.195:41172).
Nov 4 23:47:39.457510 systemd-logind[1666]: Removed session 4.
Nov 4 23:47:39.499598 sshd[1947]: Accepted publickey for core from 139.178.68.195 port 41172 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc
Nov 4 23:47:39.500351 sshd-session[1947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:47:39.504123 systemd-logind[1666]: New session 5 of user core.
Nov 4 23:47:39.513296 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 4 23:47:39.560478 sshd[1950]: Connection closed by 139.178.68.195 port 41172
Nov 4 23:47:39.560797 sshd-session[1947]: pam_unix(sshd:session): session closed for user core
Nov 4 23:47:39.570917 systemd[1]: sshd@2-139.178.70.101:22-139.178.68.195:41172.service: Deactivated successfully.
Nov 4 23:47:39.571957 systemd[1]: session-5.scope: Deactivated successfully.
Nov 4 23:47:39.572520 systemd-logind[1666]: Session 5 logged out. Waiting for processes to exit.
Nov 4 23:47:39.573423 systemd-logind[1666]: Removed session 5.
Nov 4 23:47:39.574276 systemd[1]: Started sshd@3-139.178.70.101:22-139.178.68.195:41178.service - OpenSSH per-connection server daemon (139.178.68.195:41178).
Nov 4 23:47:39.611602 sshd[1956]: Accepted publickey for core from 139.178.68.195 port 41178 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc
Nov 4 23:47:39.612375 sshd-session[1956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:47:39.615026 systemd-logind[1666]: New session 6 of user core.
Nov 4 23:47:39.625171 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 4 23:47:39.674603 sshd[1959]: Connection closed by 139.178.68.195 port 41178
Nov 4 23:47:39.675255 sshd-session[1956]: pam_unix(sshd:session): session closed for user core
Nov 4 23:47:39.684102 systemd[1]: sshd@3-139.178.70.101:22-139.178.68.195:41178.service: Deactivated successfully.
Nov 4 23:47:39.684920 systemd[1]: session-6.scope: Deactivated successfully.
Nov 4 23:47:39.685380 systemd-logind[1666]: Session 6 logged out. Waiting for processes to exit.
Nov 4 23:47:39.686553 systemd[1]: Started sshd@4-139.178.70.101:22-139.178.68.195:41192.service - OpenSSH per-connection server daemon (139.178.68.195:41192).
Nov 4 23:47:39.687214 systemd-logind[1666]: Removed session 6.
Nov 4 23:47:39.728928 sshd[1965]: Accepted publickey for core from 139.178.68.195 port 41192 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc
Nov 4 23:47:39.729715 sshd-session[1965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:47:39.732204 systemd-logind[1666]: New session 7 of user core.
Nov 4 23:47:39.739297 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 4 23:47:39.796678 sudo[1969]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 4 23:47:39.797027 sudo[1969]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 23:47:39.809347 sudo[1969]: pam_unix(sudo:session): session closed for user root
Nov 4 23:47:39.810566 sshd[1968]: Connection closed by 139.178.68.195 port 41192
Nov 4 23:47:39.810494 sshd-session[1965]: pam_unix(sshd:session): session closed for user core
Nov 4 23:47:39.815124 systemd[1]: sshd@4-139.178.70.101:22-139.178.68.195:41192.service: Deactivated successfully.
Nov 4 23:47:39.815996 systemd[1]: session-7.scope: Deactivated successfully.
Nov 4 23:47:39.816512 systemd-logind[1666]: Session 7 logged out. Waiting for processes to exit.
Nov 4 23:47:39.817985 systemd[1]: Started sshd@5-139.178.70.101:22-139.178.68.195:41202.service - OpenSSH per-connection server daemon (139.178.68.195:41202).
Nov 4 23:47:39.818752 systemd-logind[1666]: Removed session 7.
Nov 4 23:47:39.861850 sshd[1975]: Accepted publickey for core from 139.178.68.195 port 41202 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc
Nov 4 23:47:39.862592 sshd-session[1975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:47:39.865297 systemd-logind[1666]: New session 8 of user core.
Nov 4 23:47:39.875177 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 4 23:47:39.924280 sudo[1980]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 4 23:47:39.924451 sudo[1980]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 23:47:39.926817 sudo[1980]: pam_unix(sudo:session): session closed for user root
Nov 4 23:47:39.930984 sudo[1979]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 4 23:47:39.931367 sudo[1979]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 23:47:39.937599 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 23:47:39.962643 augenrules[2002]: No rules
Nov 4 23:47:39.963286 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 23:47:39.963505 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 23:47:39.964274 sudo[1979]: pam_unix(sudo:session): session closed for user root
Nov 4 23:47:39.965048 sshd[1978]: Connection closed by 139.178.68.195 port 41202
Nov 4 23:47:39.965862 sshd-session[1975]: pam_unix(sshd:session): session closed for user core
Nov 4 23:47:39.970737 systemd[1]: sshd@5-139.178.70.101:22-139.178.68.195:41202.service: Deactivated successfully.
Nov 4 23:47:39.971985 systemd[1]: session-8.scope: Deactivated successfully.
Nov 4 23:47:39.972998 systemd-logind[1666]: Session 8 logged out. Waiting for processes to exit.
Nov 4 23:47:39.973911 systemd-logind[1666]: Removed session 8.
Nov 4 23:47:39.974984 systemd[1]: Started sshd@6-139.178.70.101:22-139.178.68.195:41216.service - OpenSSH per-connection server daemon (139.178.68.195:41216).
Nov 4 23:47:40.012583 sshd[2011]: Accepted publickey for core from 139.178.68.195 port 41216 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc
Nov 4 23:47:40.013648 sshd-session[2011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:47:40.016224 systemd-logind[1666]: New session 9 of user core.
Nov 4 23:47:40.023379 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 4 23:47:40.072461 sudo[2015]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 4 23:47:40.072626 sudo[2015]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 23:47:40.432950 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 4 23:47:40.441383 (dockerd)[2033]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 4 23:47:40.700191 dockerd[2033]: time="2025-11-04T23:47:40.700107152Z" level=info msg="Starting up"
Nov 4 23:47:40.701004 dockerd[2033]: time="2025-11-04T23:47:40.700980881Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 4 23:47:40.708016 dockerd[2033]: time="2025-11-04T23:47:40.707991743Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 4 23:47:40.722730 systemd[1]: var-lib-docker-metacopy\x2dcheck4095859690-merged.mount: Deactivated successfully.
Nov 4 23:47:40.735785 dockerd[2033]: time="2025-11-04T23:47:40.735611437Z" level=info msg="Loading containers: start."
Nov 4 23:47:40.745087 kernel: Initializing XFRM netlink socket
Nov 4 23:47:40.922463 systemd-networkd[1587]: docker0: Link UP
Nov 4 23:47:40.923883 dockerd[2033]: time="2025-11-04T23:47:40.923859001Z" level=info msg="Loading containers: done."
Nov 4 23:47:40.931848 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3512986388-merged.mount: Deactivated successfully.
Nov 4 23:47:40.937269 dockerd[2033]: time="2025-11-04T23:47:40.937241847Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 4 23:47:40.937343 dockerd[2033]: time="2025-11-04T23:47:40.937300009Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 4 23:47:40.937377 dockerd[2033]: time="2025-11-04T23:47:40.937362254Z" level=info msg="Initializing buildkit"
Nov 4 23:49:19.219903 systemd-resolved[1350]: Clock change detected. Flushing caches.
Nov 4 23:49:19.220055 systemd-timesyncd[1598]: Contacted time server 108.61.215.221:123 (2.flatcar.pool.ntp.org).
Nov 4 23:49:19.220089 systemd-timesyncd[1598]: Initial clock synchronization to Tue 2025-11-04 23:49:19.219817 UTC.
Nov 4 23:49:19.248264 dockerd[2033]: time="2025-11-04T23:49:19.248196436Z" level=info msg="Completed buildkit initialization"
Nov 4 23:49:19.251044 dockerd[2033]: time="2025-11-04T23:49:19.251026056Z" level=info msg="Daemon has completed initialization"
Nov 4 23:49:19.251468 dockerd[2033]: time="2025-11-04T23:49:19.251119530Z" level=info msg="API listen on /run/docker.sock"
Nov 4 23:49:19.251258 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 4 23:49:20.222555 containerd[1684]: time="2025-11-04T23:49:20.222527904Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Nov 4 23:49:20.967579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194230480.mount: Deactivated successfully.
Nov 4 23:49:22.152343 containerd[1684]: time="2025-11-04T23:49:22.151824200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:49:22.153155 containerd[1684]: time="2025-11-04T23:49:22.153133308Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Nov 4 23:49:22.153487 containerd[1684]: time="2025-11-04T23:49:22.153473656Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:49:22.155571 containerd[1684]: time="2025-11-04T23:49:22.155547922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:49:22.156276 containerd[1684]: time="2025-11-04T23:49:22.156256993Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.933705408s"
Nov 4 23:49:22.156382 containerd[1684]: time="2025-11-04T23:49:22.156370536Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Nov 4 23:49:22.156961 containerd[1684]: time="2025-11-04T23:49:22.156944571Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Nov 4 23:49:23.403296 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 4 23:49:23.405271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:49:24.182827 containerd[1684]: time="2025-11-04T23:49:24.182780897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:49:24.188005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:49:24.189145 containerd[1684]: time="2025-11-04T23:49:24.189122453Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Nov 4 23:49:24.194408 containerd[1684]: time="2025-11-04T23:49:24.194383879Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:49:24.195707 (kubelet)[2313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 23:49:24.203914 containerd[1684]: time="2025-11-04T23:49:24.203860342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:49:24.204417 containerd[1684]: time="2025-11-04T23:49:24.204400900Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.047438011s"
Nov 4 23:49:24.204458 containerd[1684]: time="2025-11-04T23:49:24.204418778Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Nov 4 23:49:24.205084 containerd[1684]: time="2025-11-04T23:49:24.205070155Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Nov 4 23:49:24.224168 kubelet[2313]: E1104 23:49:24.224145 2313 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 23:49:24.225740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 23:49:24.225825 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 23:49:24.226178 systemd[1]: kubelet.service: Consumed 108ms CPU time, 109.8M memory peak.
Nov 4 23:49:25.440313 containerd[1684]: time="2025-11-04T23:49:25.440276494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:49:25.450434 containerd[1684]: time="2025-11-04T23:49:25.450405325Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Nov 4 23:49:25.460010 containerd[1684]: time="2025-11-04T23:49:25.459962117Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:49:25.476069 containerd[1684]: time="2025-11-04T23:49:25.476035870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:49:25.476821 containerd[1684]: time="2025-11-04T23:49:25.476793138Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.271705715s"
Nov 4 23:49:25.476821 containerd[1684]: time="2025-11-04T23:49:25.476818579Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Nov 4 23:49:25.477372 containerd[1684]: time="2025-11-04T23:49:25.477340278Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Nov 4 23:49:27.334955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1288784987.mount: Deactivated successfully.
Nov 4 23:49:27.787002 containerd[1684]: time="2025-11-04T23:49:27.786931264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:49:27.794329 containerd[1684]: time="2025-11-04T23:49:27.794305092Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Nov 4 23:49:27.806750 containerd[1684]: time="2025-11-04T23:49:27.806690162Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:49:27.815350 containerd[1684]: time="2025-11-04T23:49:27.815322714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:49:27.816174 containerd[1684]: time="2025-11-04T23:49:27.816143665Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.338782692s"
Nov 4 23:49:27.816174 containerd[1684]: time="2025-11-04T23:49:27.816169482Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Nov 4 23:49:27.816772 containerd[1684]: time="2025-11-04T23:49:27.816473787Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 4 23:49:28.492131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3269670189.mount: Deactivated successfully.
Nov 4 23:49:29.310283 containerd[1684]: time="2025-11-04T23:49:29.310187231Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Nov 4 23:49:29.310283 containerd[1684]: time="2025-11-04T23:49:29.310235112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:49:29.311042 containerd[1684]: time="2025-11-04T23:49:29.311022246Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:49:29.313153 containerd[1684]: time="2025-11-04T23:49:29.313134923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:49:29.313832 containerd[1684]: time="2025-11-04T23:49:29.313814732Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.497321442s" Nov 4 23:49:29.313869 containerd[1684]: time="2025-11-04T23:49:29.313833259Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 4 23:49:29.314490 containerd[1684]: time="2025-11-04T23:49:29.314476753Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 4 23:49:30.035646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261813540.mount: Deactivated successfully. Nov 4 23:49:30.037912 containerd[1684]: time="2025-11-04T23:49:30.037889002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:49:30.038474 containerd[1684]: time="2025-11-04T23:49:30.038456608Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 4 23:49:30.038747 containerd[1684]: time="2025-11-04T23:49:30.038732667Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:49:30.040081 containerd[1684]: time="2025-11-04T23:49:30.040062654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:49:30.040469 containerd[1684]: time="2025-11-04T23:49:30.040451194Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 725.961678ms" Nov 4 23:49:30.040499 containerd[1684]: time="2025-11-04T23:49:30.040472556Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 4 23:49:30.040895 containerd[1684]: time="2025-11-04T23:49:30.040882647Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 4 23:49:30.634083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount174923843.mount: Deactivated successfully. Nov 4 23:49:32.283073 containerd[1684]: time="2025-11-04T23:49:32.282365449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:49:32.285220 containerd[1684]: time="2025-11-04T23:49:32.285204635Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 4 23:49:32.290146 containerd[1684]: time="2025-11-04T23:49:32.290124774Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:49:32.295601 containerd[1684]: time="2025-11-04T23:49:32.295584615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:49:32.297715 containerd[1684]: time="2025-11-04T23:49:32.297699294Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size 
\"58938593\" in 2.256800705s" Nov 4 23:49:32.297794 containerd[1684]: time="2025-11-04T23:49:32.297781258Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 4 23:49:32.325859 update_engine[1669]: I20251104 23:49:32.325821 1669 update_attempter.cc:509] Updating boot flags... Nov 4 23:49:34.403935 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 4 23:49:34.407596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:49:34.884539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:49:34.889653 (kubelet)[2493]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:49:34.967254 kubelet[2493]: E1104 23:49:34.967227 2493 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:49:34.969020 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:49:34.969111 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:49:34.969335 systemd[1]: kubelet.service: Consumed 99ms CPU time, 110.2M memory peak. Nov 4 23:49:35.354066 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:49:35.354169 systemd[1]: kubelet.service: Consumed 99ms CPU time, 110.2M memory peak. Nov 4 23:49:35.355875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:49:35.372971 systemd[1]: Reload requested from client PID 2508 ('systemctl') (unit session-9.scope)... Nov 4 23:49:35.372984 systemd[1]: Reloading... 
Nov 4 23:49:35.446518 zram_generator::config[2551]: No configuration found. Nov 4 23:49:35.519139 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 4 23:49:35.589382 systemd[1]: Reloading finished in 216 ms. Nov 4 23:49:35.627463 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 23:49:35.627552 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 23:49:35.627728 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:49:35.628988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:49:36.128972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:49:36.131677 (kubelet)[2619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:49:36.165909 kubelet[2619]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:49:36.165909 kubelet[2619]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 23:49:36.165909 kubelet[2619]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 4 23:49:36.166634 kubelet[2619]: I1104 23:49:36.166381 2619 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:49:36.934517 kubelet[2619]: I1104 23:49:36.934009 2619 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 23:49:36.934517 kubelet[2619]: I1104 23:49:36.934027 2619 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:49:36.934517 kubelet[2619]: I1104 23:49:36.934150 2619 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:49:36.964551 kubelet[2619]: E1104 23:49:36.964525 2619 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 23:49:36.967134 kubelet[2619]: I1104 23:49:36.967053 2619 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:49:37.013954 kubelet[2619]: I1104 23:49:37.013936 2619 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:49:37.018751 kubelet[2619]: I1104 23:49:37.018734 2619 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 4 23:49:37.022444 kubelet[2619]: I1104 23:49:37.022415 2619 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:49:37.024853 kubelet[2619]: I1104 23:49:37.022441 2619 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:49:37.024941 kubelet[2619]: I1104 23:49:37.024855 2619 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 23:49:37.024941 
kubelet[2619]: I1104 23:49:37.024862 2619 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 23:49:37.025605 kubelet[2619]: I1104 23:49:37.025592 2619 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:49:37.028098 kubelet[2619]: I1104 23:49:37.028084 2619 kubelet.go:480] "Attempting to sync node with API server" Nov 4 23:49:37.028126 kubelet[2619]: I1104 23:49:37.028100 2619 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:49:37.028126 kubelet[2619]: I1104 23:49:37.028123 2619 kubelet.go:386] "Adding apiserver pod source" Nov 4 23:49:37.028157 kubelet[2619]: I1104 23:49:37.028135 2619 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:49:37.034809 kubelet[2619]: E1104 23:49:37.034677 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:49:37.038267 kubelet[2619]: E1104 23:49:37.038255 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:49:37.038367 kubelet[2619]: I1104 23:49:37.038358 2619 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:49:37.038691 kubelet[2619]: I1104 23:49:37.038682 2619 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 23:49:37.041505 kubelet[2619]: W1104 
23:49:37.041358 2619 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 4 23:49:37.046168 kubelet[2619]: I1104 23:49:37.046160 2619 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 23:49:37.046266 kubelet[2619]: I1104 23:49:37.046261 2619 server.go:1289] "Started kubelet" Nov 4 23:49:37.047389 kubelet[2619]: I1104 23:49:37.047380 2619 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:49:37.054544 kubelet[2619]: E1104 23:49:37.050060 2619 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.101:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1874f2ac8872c1ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 23:49:37.046225324 +0000 UTC m=+0.912484275,LastTimestamp:2025-11-04 23:49:37.046225324 +0000 UTC m=+0.912484275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 4 23:49:37.056506 kubelet[2619]: I1104 23:49:37.055625 2619 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 23:49:37.056506 kubelet[2619]: I1104 23:49:37.055800 2619 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 23:49:37.056608 kubelet[2619]: I1104 23:49:37.056588 2619 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:49:37.058447 kubelet[2619]: I1104 23:49:37.058117 2619 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 23:49:37.061571 kubelet[2619]: I1104 23:49:37.061562 2619 server.go:317] "Adding debug handlers to kubelet server" Nov 4 23:49:37.064935 kubelet[2619]: I1104 23:49:37.064563 2619 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 23:49:37.064935 kubelet[2619]: E1104 23:49:37.064679 2619 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 23:49:37.064935 kubelet[2619]: E1104 23:49:37.064923 2619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="200ms" Nov 4 23:49:37.065046 kubelet[2619]: I1104 23:49:37.065037 2619 reconciler.go:26] "Reconciler: start to sync state" Nov 4 23:49:37.066642 kubelet[2619]: I1104 23:49:37.066628 2619 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 23:49:37.066860 kubelet[2619]: E1104 23:49:37.066844 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:49:37.067056 kubelet[2619]: I1104 23:49:37.067042 2619 factory.go:223] Registration of the systemd container factory successfully Nov 4 23:49:37.067096 kubelet[2619]: I1104 23:49:37.067084 2619 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 23:49:37.068948 kubelet[2619]: I1104 23:49:37.068928 2619 factory.go:223] Registration of the containerd 
container factory successfully Nov 4 23:49:37.076804 kubelet[2619]: I1104 23:49:37.076739 2619 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 23:49:37.079282 kubelet[2619]: E1104 23:49:37.078875 2619 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 23:49:37.079282 kubelet[2619]: I1104 23:49:37.079117 2619 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 4 23:49:37.079282 kubelet[2619]: I1104 23:49:37.079124 2619 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 23:49:37.079282 kubelet[2619]: I1104 23:49:37.079139 2619 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 23:49:37.079282 kubelet[2619]: I1104 23:49:37.079143 2619 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 23:49:37.079282 kubelet[2619]: E1104 23:49:37.079164 2619 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 23:49:37.082031 kubelet[2619]: E1104 23:49:37.082017 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:49:37.088593 kubelet[2619]: I1104 23:49:37.088581 2619 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 23:49:37.088812 kubelet[2619]: I1104 23:49:37.088660 2619 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 23:49:37.088812 kubelet[2619]: I1104 23:49:37.088674 2619 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:49:37.090476 kubelet[2619]: I1104 
23:49:37.090468 2619 policy_none.go:49] "None policy: Start" Nov 4 23:49:37.090547 kubelet[2619]: I1104 23:49:37.090541 2619 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 23:49:37.090586 kubelet[2619]: I1104 23:49:37.090581 2619 state_mem.go:35] "Initializing new in-memory state store" Nov 4 23:49:37.094943 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 4 23:49:37.103462 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 4 23:49:37.106020 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 4 23:49:37.116264 kubelet[2619]: E1104 23:49:37.116172 2619 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 23:49:37.116504 kubelet[2619]: I1104 23:49:37.116372 2619 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 23:49:37.116504 kubelet[2619]: I1104 23:49:37.116385 2619 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 23:49:37.116647 kubelet[2619]: I1104 23:49:37.116640 2619 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 23:49:37.118540 kubelet[2619]: E1104 23:49:37.118530 2619 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 23:49:37.118603 kubelet[2619]: E1104 23:49:37.118597 2619 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 4 23:49:37.187720 systemd[1]: Created slice kubepods-burstable-pod42259a2cfd4ad5bb22df64ef808385eb.slice - libcontainer container kubepods-burstable-pod42259a2cfd4ad5bb22df64ef808385eb.slice. 
Nov 4 23:49:37.212744 kubelet[2619]: E1104 23:49:37.212701 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:49:37.215305 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 4 23:49:37.217995 kubelet[2619]: I1104 23:49:37.217772 2619 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 23:49:37.217995 kubelet[2619]: E1104 23:49:37.217976 2619 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Nov 4 23:49:37.224204 kubelet[2619]: E1104 23:49:37.224193 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:49:37.226582 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
Nov 4 23:49:37.227973 kubelet[2619]: E1104 23:49:37.227963 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:49:37.271551 kubelet[2619]: E1104 23:49:37.271521 2619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="400ms" Nov 4 23:49:37.367004 kubelet[2619]: I1104 23:49:37.366973 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42259a2cfd4ad5bb22df64ef808385eb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"42259a2cfd4ad5bb22df64ef808385eb\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:49:37.367004 kubelet[2619]: I1104 23:49:37.366999 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:49:37.367122 kubelet[2619]: I1104 23:49:37.367012 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:49:37.367122 kubelet[2619]: I1104 23:49:37.367033 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:49:37.367122 kubelet[2619]: I1104 23:49:37.367054 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:49:37.367122 kubelet[2619]: I1104 23:49:37.367065 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:49:37.367122 kubelet[2619]: I1104 23:49:37.367081 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 4 23:49:37.367212 kubelet[2619]: I1104 23:49:37.367093 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42259a2cfd4ad5bb22df64ef808385eb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"42259a2cfd4ad5bb22df64ef808385eb\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:49:37.367212 kubelet[2619]: I1104 23:49:37.367120 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/42259a2cfd4ad5bb22df64ef808385eb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"42259a2cfd4ad5bb22df64ef808385eb\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:49:37.419348 kubelet[2619]: I1104 23:49:37.419153 2619 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 23:49:37.419434 kubelet[2619]: E1104 23:49:37.419391 2619 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Nov 4 23:49:37.514518 containerd[1684]: time="2025-11-04T23:49:37.514315601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:42259a2cfd4ad5bb22df64ef808385eb,Namespace:kube-system,Attempt:0,}" Nov 4 23:49:37.545753 containerd[1684]: time="2025-11-04T23:49:37.545599091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 4 23:49:37.554715 containerd[1684]: time="2025-11-04T23:49:37.554696426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 4 23:49:37.620319 containerd[1684]: time="2025-11-04T23:49:37.620143622Z" level=info msg="connecting to shim 4784af45fefcc87c9537f61a0fcdf8433b3984c01075aa695892e9dc667052a4" address="unix:///run/containerd/s/aa10b61d5705205da5a596a7b64c2e599abbb1bc8f763b46cf3bca3d38ac4d5b" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:49:37.620768 containerd[1684]: time="2025-11-04T23:49:37.620750042Z" level=info msg="connecting to shim ece2ac670ed0fb91c0bdcb019a1c84de391437636d527568e45ff37a8b422037" address="unix:///run/containerd/s/d1c14302a60046b60af464e8268a52df4331e30da9276e19068596e53c06dd4b" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:49:37.627491 
containerd[1684]: time="2025-11-04T23:49:37.627468276Z" level=info msg="connecting to shim 1af7b511cefec4056a392323f5e10f71bc32ece475fd05ef359a587c56899eac" address="unix:///run/containerd/s/8134ab25df486347be42de7e6e8de675177dadb3da2424109c2e7b93555d906c" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:49:37.681387 kubelet[2619]: E1104 23:49:37.681350 2619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="800ms" Nov 4 23:49:37.740626 systemd[1]: Started cri-containerd-4784af45fefcc87c9537f61a0fcdf8433b3984c01075aa695892e9dc667052a4.scope - libcontainer container 4784af45fefcc87c9537f61a0fcdf8433b3984c01075aa695892e9dc667052a4. Nov 4 23:49:37.742205 systemd[1]: Started cri-containerd-ece2ac670ed0fb91c0bdcb019a1c84de391437636d527568e45ff37a8b422037.scope - libcontainer container ece2ac670ed0fb91c0bdcb019a1c84de391437636d527568e45ff37a8b422037. Nov 4 23:49:37.745249 systemd[1]: Started cri-containerd-1af7b511cefec4056a392323f5e10f71bc32ece475fd05ef359a587c56899eac.scope - libcontainer container 1af7b511cefec4056a392323f5e10f71bc32ece475fd05ef359a587c56899eac. 
Nov 4 23:49:37.792745 containerd[1684]: time="2025-11-04T23:49:37.792451620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:42259a2cfd4ad5bb22df64ef808385eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ece2ac670ed0fb91c0bdcb019a1c84de391437636d527568e45ff37a8b422037\"" Nov 4 23:49:37.796333 containerd[1684]: time="2025-11-04T23:49:37.796309333Z" level=info msg="CreateContainer within sandbox \"ece2ac670ed0fb91c0bdcb019a1c84de391437636d527568e45ff37a8b422037\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 23:49:37.800957 containerd[1684]: time="2025-11-04T23:49:37.800930773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"1af7b511cefec4056a392323f5e10f71bc32ece475fd05ef359a587c56899eac\"" Nov 4 23:49:37.803225 containerd[1684]: time="2025-11-04T23:49:37.803206028Z" level=info msg="Container 88b1ae0dff7be20f14925bc8837c28bfbb61806a03c08d519c0dc738e6e93b82: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:49:37.806852 containerd[1684]: time="2025-11-04T23:49:37.806505204Z" level=info msg="CreateContainer within sandbox \"1af7b511cefec4056a392323f5e10f71bc32ece475fd05ef359a587c56899eac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 23:49:37.810297 containerd[1684]: time="2025-11-04T23:49:37.810273685Z" level=info msg="Container 89dd43f61980c5cc19956e7029381532f61d2337506db8989caeba79fb0a4f1e: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:49:37.815420 containerd[1684]: time="2025-11-04T23:49:37.815393693Z" level=info msg="CreateContainer within sandbox \"ece2ac670ed0fb91c0bdcb019a1c84de391437636d527568e45ff37a8b422037\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"88b1ae0dff7be20f14925bc8837c28bfbb61806a03c08d519c0dc738e6e93b82\"" Nov 4 23:49:37.815738 containerd[1684]: 
time="2025-11-04T23:49:37.815575371Z" level=info msg="CreateContainer within sandbox \"1af7b511cefec4056a392323f5e10f71bc32ece475fd05ef359a587c56899eac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"89dd43f61980c5cc19956e7029381532f61d2337506db8989caeba79fb0a4f1e\"" Nov 4 23:49:37.816110 containerd[1684]: time="2025-11-04T23:49:37.816096005Z" level=info msg="StartContainer for \"88b1ae0dff7be20f14925bc8837c28bfbb61806a03c08d519c0dc738e6e93b82\"" Nov 4 23:49:37.816351 containerd[1684]: time="2025-11-04T23:49:37.816287357Z" level=info msg="StartContainer for \"89dd43f61980c5cc19956e7029381532f61d2337506db8989caeba79fb0a4f1e\"" Nov 4 23:49:37.816942 containerd[1684]: time="2025-11-04T23:49:37.816929747Z" level=info msg="connecting to shim 89dd43f61980c5cc19956e7029381532f61d2337506db8989caeba79fb0a4f1e" address="unix:///run/containerd/s/8134ab25df486347be42de7e6e8de675177dadb3da2424109c2e7b93555d906c" protocol=ttrpc version=3 Nov 4 23:49:37.817429 containerd[1684]: time="2025-11-04T23:49:37.817411614Z" level=info msg="connecting to shim 88b1ae0dff7be20f14925bc8837c28bfbb61806a03c08d519c0dc738e6e93b82" address="unix:///run/containerd/s/d1c14302a60046b60af464e8268a52df4331e30da9276e19068596e53c06dd4b" protocol=ttrpc version=3 Nov 4 23:49:37.820609 kubelet[2619]: I1104 23:49:37.820589 2619 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 23:49:37.821104 kubelet[2619]: E1104 23:49:37.820929 2619 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Nov 4 23:49:37.823151 containerd[1684]: time="2025-11-04T23:49:37.823128013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"4784af45fefcc87c9537f61a0fcdf8433b3984c01075aa695892e9dc667052a4\"" Nov 4 23:49:37.826820 containerd[1684]: time="2025-11-04T23:49:37.826743763Z" level=info msg="CreateContainer within sandbox \"4784af45fefcc87c9537f61a0fcdf8433b3984c01075aa695892e9dc667052a4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 23:49:37.834552 containerd[1684]: time="2025-11-04T23:49:37.834527936Z" level=info msg="Container 0bb3ffedc26c5b9cc08ce97c8d0580ec9b16903eeb68682631e2c4e8fab965e8: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:49:37.837116 containerd[1684]: time="2025-11-04T23:49:37.837056791Z" level=info msg="CreateContainer within sandbox \"4784af45fefcc87c9537f61a0fcdf8433b3984c01075aa695892e9dc667052a4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0bb3ffedc26c5b9cc08ce97c8d0580ec9b16903eeb68682631e2c4e8fab965e8\"" Nov 4 23:49:37.837410 containerd[1684]: time="2025-11-04T23:49:37.837375649Z" level=info msg="StartContainer for \"0bb3ffedc26c5b9cc08ce97c8d0580ec9b16903eeb68682631e2c4e8fab965e8\"" Nov 4 23:49:37.839022 containerd[1684]: time="2025-11-04T23:49:37.837959887Z" level=info msg="connecting to shim 0bb3ffedc26c5b9cc08ce97c8d0580ec9b16903eeb68682631e2c4e8fab965e8" address="unix:///run/containerd/s/aa10b61d5705205da5a596a7b64c2e599abbb1bc8f763b46cf3bca3d38ac4d5b" protocol=ttrpc version=3 Nov 4 23:49:37.838059 systemd[1]: Started cri-containerd-88b1ae0dff7be20f14925bc8837c28bfbb61806a03c08d519c0dc738e6e93b82.scope - libcontainer container 88b1ae0dff7be20f14925bc8837c28bfbb61806a03c08d519c0dc738e6e93b82. Nov 4 23:49:37.838893 systemd[1]: Started cri-containerd-89dd43f61980c5cc19956e7029381532f61d2337506db8989caeba79fb0a4f1e.scope - libcontainer container 89dd43f61980c5cc19956e7029381532f61d2337506db8989caeba79fb0a4f1e. 
Nov 4 23:49:37.863577 systemd[1]: Started cri-containerd-0bb3ffedc26c5b9cc08ce97c8d0580ec9b16903eeb68682631e2c4e8fab965e8.scope - libcontainer container 0bb3ffedc26c5b9cc08ce97c8d0580ec9b16903eeb68682631e2c4e8fab965e8. Nov 4 23:49:37.892510 containerd[1684]: time="2025-11-04T23:49:37.892451305Z" level=info msg="StartContainer for \"88b1ae0dff7be20f14925bc8837c28bfbb61806a03c08d519c0dc738e6e93b82\" returns successfully" Nov 4 23:49:37.900723 containerd[1684]: time="2025-11-04T23:49:37.900650587Z" level=info msg="StartContainer for \"89dd43f61980c5cc19956e7029381532f61d2337506db8989caeba79fb0a4f1e\" returns successfully" Nov 4 23:49:37.929815 containerd[1684]: time="2025-11-04T23:49:37.929751486Z" level=info msg="StartContainer for \"0bb3ffedc26c5b9cc08ce97c8d0580ec9b16903eeb68682631e2c4e8fab965e8\" returns successfully" Nov 4 23:49:37.930398 kubelet[2619]: E1104 23:49:37.930384 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:49:38.092858 kubelet[2619]: E1104 23:49:38.092297 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:49:38.094015 kubelet[2619]: E1104 23:49:38.094005 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:49:38.096487 kubelet[2619]: E1104 23:49:38.096467 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:49:38.315245 kubelet[2619]: E1104 23:49:38.315221 2619 reflector.go:200] "Failed to watch" err="failed to list 
*v1.RuntimeClass: Get \"https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:49:38.358153 kubelet[2619]: E1104 23:49:38.357994 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:49:38.623015 kubelet[2619]: I1104 23:49:38.622114 2619 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 23:49:39.097599 kubelet[2619]: E1104 23:49:39.097503 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:49:39.097849 kubelet[2619]: E1104 23:49:39.097793 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:49:39.317624 kubelet[2619]: E1104 23:49:39.317606 2619 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 4 23:49:39.419723 kubelet[2619]: I1104 23:49:39.419557 2619 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 23:49:39.465700 kubelet[2619]: I1104 23:49:39.465672 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 23:49:39.488275 kubelet[2619]: I1104 23:49:39.488120 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:49:39.496235 kubelet[2619]: E1104 23:49:39.496218 2619 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:49:39.519950 kubelet[2619]: E1104 23:49:39.519821 2619 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 4 23:49:39.519950 kubelet[2619]: I1104 23:49:39.519840 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:49:39.522285 kubelet[2619]: E1104 23:49:39.522187 2619 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:49:39.522285 kubelet[2619]: I1104 23:49:39.522201 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 23:49:39.524295 kubelet[2619]: E1104 23:49:39.524283 2619 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 4 23:49:40.037435 kubelet[2619]: I1104 23:49:40.037297 2619 apiserver.go:52] "Watching apiserver" Nov 4 23:49:40.066962 kubelet[2619]: I1104 23:49:40.066933 2619 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 23:49:41.106603 systemd[1]: Reload requested from client PID 2893 ('systemctl') (unit session-9.scope)... Nov 4 23:49:41.106787 systemd[1]: Reloading... Nov 4 23:49:41.168512 zram_generator::config[2940]: No configuration found. 
Nov 4 23:49:41.251450 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 4 23:49:41.329883 systemd[1]: Reloading finished in 222 ms. Nov 4 23:49:41.348877 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:49:41.362762 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 23:49:41.362939 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:49:41.362973 systemd[1]: kubelet.service: Consumed 1.073s CPU time, 127.5M memory peak. Nov 4 23:49:41.364837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:49:41.534738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:49:41.542772 (kubelet)[3005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:49:41.648466 kubelet[3005]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:49:41.648466 kubelet[3005]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 23:49:41.648466 kubelet[3005]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 4 23:49:41.648689 kubelet[3005]: I1104 23:49:41.648450 3005 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:49:41.674010 kubelet[3005]: I1104 23:49:41.673984 3005 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 23:49:41.674010 kubelet[3005]: I1104 23:49:41.674002 3005 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:49:41.674176 kubelet[3005]: I1104 23:49:41.674163 3005 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:49:41.676182 kubelet[3005]: I1104 23:49:41.675134 3005 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 23:49:41.677307 kubelet[3005]: I1104 23:49:41.677292 3005 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:49:41.699944 kubelet[3005]: I1104 23:49:41.699922 3005 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:49:41.702791 kubelet[3005]: I1104 23:49:41.702774 3005 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 4 23:49:41.702945 kubelet[3005]: I1104 23:49:41.702919 3005 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:49:41.703065 kubelet[3005]: I1104 23:49:41.702946 3005 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:49:41.704587 kubelet[3005]: I1104 23:49:41.704567 3005 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 23:49:41.704587 
kubelet[3005]: I1104 23:49:41.704586 3005 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 23:49:41.704654 kubelet[3005]: I1104 23:49:41.704629 3005 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:49:41.704803 kubelet[3005]: I1104 23:49:41.704787 3005 kubelet.go:480] "Attempting to sync node with API server" Nov 4 23:49:41.704838 kubelet[3005]: I1104 23:49:41.704810 3005 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:49:41.704838 kubelet[3005]: I1104 23:49:41.704829 3005 kubelet.go:386] "Adding apiserver pod source" Nov 4 23:49:41.705800 kubelet[3005]: I1104 23:49:41.704840 3005 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:49:41.711285 kubelet[3005]: I1104 23:49:41.710794 3005 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:49:41.711285 kubelet[3005]: I1104 23:49:41.711156 3005 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 23:49:41.715967 kubelet[3005]: I1104 23:49:41.715946 3005 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 23:49:41.716935 kubelet[3005]: I1104 23:49:41.715978 3005 server.go:1289] "Started kubelet" Nov 4 23:49:41.716935 kubelet[3005]: I1104 23:49:41.716114 3005 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:49:41.720944 kubelet[3005]: I1104 23:49:41.720842 3005 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:49:41.722861 kubelet[3005]: I1104 23:49:41.722798 3005 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 23:49:41.722963 kubelet[3005]: I1104 23:49:41.722951 3005 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 23:49:41.723701 kubelet[3005]: I1104 
23:49:41.723517 3005 server.go:317] "Adding debug handlers to kubelet server" Nov 4 23:49:41.724800 kubelet[3005]: I1104 23:49:41.724786 3005 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 23:49:41.726213 kubelet[3005]: I1104 23:49:41.726196 3005 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 23:49:41.726514 kubelet[3005]: I1104 23:49:41.726483 3005 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 23:49:41.726569 kubelet[3005]: I1104 23:49:41.726559 3005 reconciler.go:26] "Reconciler: start to sync state" Nov 4 23:49:41.728011 kubelet[3005]: I1104 23:49:41.727997 3005 factory.go:223] Registration of the systemd container factory successfully Nov 4 23:49:41.728087 kubelet[3005]: I1104 23:49:41.728073 3005 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 23:49:41.731132 kubelet[3005]: I1104 23:49:41.731026 3005 factory.go:223] Registration of the containerd container factory successfully Nov 4 23:49:41.733507 kubelet[3005]: I1104 23:49:41.733234 3005 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 23:49:41.733936 kubelet[3005]: I1104 23:49:41.733918 3005 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 4 23:49:41.733936 kubelet[3005]: I1104 23:49:41.733934 3005 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 23:49:41.733986 kubelet[3005]: I1104 23:49:41.733946 3005 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 4 23:49:41.733986 kubelet[3005]: I1104 23:49:41.733950 3005 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 23:49:41.733986 kubelet[3005]: E1104 23:49:41.733970 3005 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 23:49:41.771581 kubelet[3005]: I1104 23:49:41.771562 3005 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 23:49:41.771581 kubelet[3005]: I1104 23:49:41.771574 3005 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 23:49:41.771679 kubelet[3005]: I1104 23:49:41.771612 3005 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:49:41.771750 kubelet[3005]: I1104 23:49:41.771738 3005 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 23:49:41.771780 kubelet[3005]: I1104 23:49:41.771749 3005 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 23:49:41.771780 kubelet[3005]: I1104 23:49:41.771761 3005 policy_none.go:49] "None policy: Start" Nov 4 23:49:41.771780 kubelet[3005]: I1104 23:49:41.771767 3005 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 23:49:41.771780 kubelet[3005]: I1104 23:49:41.771774 3005 state_mem.go:35] "Initializing new in-memory state store" Nov 4 23:49:41.771854 kubelet[3005]: I1104 23:49:41.771844 3005 state_mem.go:75] "Updated machine memory state" Nov 4 23:49:41.775726 kubelet[3005]: E1104 23:49:41.775403 3005 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 23:49:41.775726 kubelet[3005]: I1104 23:49:41.775487 3005 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 23:49:41.775726 kubelet[3005]: I1104 23:49:41.775493 3005 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 23:49:41.776053 kubelet[3005]: I1104 23:49:41.775835 3005 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Nov 4 23:49:41.776513 kubelet[3005]: E1104 23:49:41.776493 3005 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 23:49:41.834736 kubelet[3005]: I1104 23:49:41.834713 3005 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 23:49:41.834931 kubelet[3005]: I1104 23:49:41.834879 3005 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 23:49:41.835514 kubelet[3005]: I1104 23:49:41.834988 3005 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 23:49:41.880119 kubelet[3005]: I1104 23:49:41.879931 3005 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 23:49:41.889043 kubelet[3005]: I1104 23:49:41.889022 3005 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 4 23:49:41.889220 kubelet[3005]: I1104 23:49:41.889183 3005 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 23:49:41.927883 kubelet[3005]: I1104 23:49:41.927858 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42259a2cfd4ad5bb22df64ef808385eb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"42259a2cfd4ad5bb22df64ef808385eb\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:49:41.927883 kubelet[3005]: I1104 23:49:41.927884 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42259a2cfd4ad5bb22df64ef808385eb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"42259a2cfd4ad5bb22df64ef808385eb\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:49:41.928107 kubelet[3005]: I1104 
23:49:41.927898 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:49:41.928107 kubelet[3005]: I1104 23:49:41.927909 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:49:41.928107 kubelet[3005]: I1104 23:49:41.927925 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:49:41.928107 kubelet[3005]: I1104 23:49:41.927947 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 4 23:49:41.928107 kubelet[3005]: I1104 23:49:41.927970 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42259a2cfd4ad5bb22df64ef808385eb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"42259a2cfd4ad5bb22df64ef808385eb\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:49:41.928337 kubelet[3005]: I1104 
23:49:41.928289 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:49:41.928337 kubelet[3005]: I1104 23:49:41.928314 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:49:42.706726 kubelet[3005]: I1104 23:49:42.706524 3005 apiserver.go:52] "Watching apiserver" Nov 4 23:49:42.726672 kubelet[3005]: I1104 23:49:42.726634 3005 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 23:49:42.795840 kubelet[3005]: I1104 23:49:42.795675 3005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.795657803 podStartE2EDuration="1.795657803s" podCreationTimestamp="2025-11-04 23:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:49:42.795106814 +0000 UTC m=+1.236082597" watchObservedRunningTime="2025-11-04 23:49:42.795657803 +0000 UTC m=+1.236633576" Nov 4 23:49:42.796365 kubelet[3005]: I1104 23:49:42.795760 3005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.795756173 podStartE2EDuration="1.795756173s" podCreationTimestamp="2025-11-04 23:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-04 23:49:42.790637027 +0000 UTC m=+1.231612809" watchObservedRunningTime="2025-11-04 23:49:42.795756173 +0000 UTC m=+1.236731948" Nov 4 23:49:42.800658 kubelet[3005]: I1104 23:49:42.800484 3005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.800469783 podStartE2EDuration="1.800469783s" podCreationTimestamp="2025-11-04 23:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:49:42.80022483 +0000 UTC m=+1.241200612" watchObservedRunningTime="2025-11-04 23:49:42.800469783 +0000 UTC m=+1.241445565" Nov 4 23:49:48.511513 kubelet[3005]: I1104 23:49:48.511479 3005 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 23:49:48.512531 containerd[1684]: time="2025-11-04T23:49:48.511867051Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 4 23:49:48.512690 kubelet[3005]: I1104 23:49:48.511962 3005 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 4 23:49:48.832276 systemd[1]: Created slice kubepods-besteffort-podf8a37f16_a0cd_46c1_9b98_a8b2d03d6b5a.slice - libcontainer container kubepods-besteffort-podf8a37f16_a0cd_46c1_9b98_a8b2d03d6b5a.slice. 
Nov 4 23:49:48.873409 kubelet[3005]: I1104 23:49:48.873380 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f8a37f16-a0cd-46c1-9b98-a8b2d03d6b5a-kube-proxy\") pod \"kube-proxy-57j9q\" (UID: \"f8a37f16-a0cd-46c1-9b98-a8b2d03d6b5a\") " pod="kube-system/kube-proxy-57j9q" Nov 4 23:49:48.873409 kubelet[3005]: I1104 23:49:48.873407 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8a37f16-a0cd-46c1-9b98-a8b2d03d6b5a-lib-modules\") pod \"kube-proxy-57j9q\" (UID: \"f8a37f16-a0cd-46c1-9b98-a8b2d03d6b5a\") " pod="kube-system/kube-proxy-57j9q" Nov 4 23:49:48.873754 kubelet[3005]: I1104 23:49:48.873419 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8a37f16-a0cd-46c1-9b98-a8b2d03d6b5a-xtables-lock\") pod \"kube-proxy-57j9q\" (UID: \"f8a37f16-a0cd-46c1-9b98-a8b2d03d6b5a\") " pod="kube-system/kube-proxy-57j9q" Nov 4 23:49:48.873754 kubelet[3005]: I1104 23:49:48.873429 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9jzk\" (UniqueName: \"kubernetes.io/projected/f8a37f16-a0cd-46c1-9b98-a8b2d03d6b5a-kube-api-access-v9jzk\") pod \"kube-proxy-57j9q\" (UID: \"f8a37f16-a0cd-46c1-9b98-a8b2d03d6b5a\") " pod="kube-system/kube-proxy-57j9q" Nov 4 23:49:48.977117 kubelet[3005]: E1104 23:49:48.977091 3005 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 4 23:49:48.977117 kubelet[3005]: E1104 23:49:48.977111 3005 projected.go:194] Error preparing data for projected volume kube-api-access-v9jzk for pod kube-system/kube-proxy-57j9q: configmap "kube-root-ca.crt" not found Nov 4 23:49:48.977221 kubelet[3005]: E1104 23:49:48.977150 3005 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8a37f16-a0cd-46c1-9b98-a8b2d03d6b5a-kube-api-access-v9jzk podName:f8a37f16-a0cd-46c1-9b98-a8b2d03d6b5a nodeName:}" failed. No retries permitted until 2025-11-04 23:49:49.477136422 +0000 UTC m=+7.918112191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v9jzk" (UniqueName: "kubernetes.io/projected/f8a37f16-a0cd-46c1-9b98-a8b2d03d6b5a-kube-api-access-v9jzk") pod "kube-proxy-57j9q" (UID: "f8a37f16-a0cd-46c1-9b98-a8b2d03d6b5a") : configmap "kube-root-ca.crt" not found Nov 4 23:49:49.701762 systemd[1]: Created slice kubepods-besteffort-poddbd5f535_37cb_49ea_b8fa_c852848084db.slice - libcontainer container kubepods-besteffort-poddbd5f535_37cb_49ea_b8fa_c852848084db.slice. Nov 4 23:49:49.740760 containerd[1684]: time="2025-11-04T23:49:49.740510150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-57j9q,Uid:f8a37f16-a0cd-46c1-9b98-a8b2d03d6b5a,Namespace:kube-system,Attempt:0,}" Nov 4 23:49:49.753263 containerd[1684]: time="2025-11-04T23:49:49.753184975Z" level=info msg="connecting to shim 2baeb9334467ddcbac00b928247c5bd6efae3bb85cc9d2e39de50176030e38b1" address="unix:///run/containerd/s/09b3a05ba23afaf2413d7925a26e989a1092c3d89a06cc209abd236d134ff5c9" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:49:49.776680 systemd[1]: Started cri-containerd-2baeb9334467ddcbac00b928247c5bd6efae3bb85cc9d2e39de50176030e38b1.scope - libcontainer container 2baeb9334467ddcbac00b928247c5bd6efae3bb85cc9d2e39de50176030e38b1. 
Nov 4 23:49:49.779872 kubelet[3005]: I1104 23:49:49.779829 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx7zm\" (UniqueName: \"kubernetes.io/projected/dbd5f535-37cb-49ea-b8fa-c852848084db-kube-api-access-xx7zm\") pod \"tigera-operator-7dcd859c48-z68fj\" (UID: \"dbd5f535-37cb-49ea-b8fa-c852848084db\") " pod="tigera-operator/tigera-operator-7dcd859c48-z68fj" Nov 4 23:49:49.779872 kubelet[3005]: I1104 23:49:49.779850 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dbd5f535-37cb-49ea-b8fa-c852848084db-var-lib-calico\") pod \"tigera-operator-7dcd859c48-z68fj\" (UID: \"dbd5f535-37cb-49ea-b8fa-c852848084db\") " pod="tigera-operator/tigera-operator-7dcd859c48-z68fj" Nov 4 23:49:49.801038 containerd[1684]: time="2025-11-04T23:49:49.801016789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-57j9q,Uid:f8a37f16-a0cd-46c1-9b98-a8b2d03d6b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2baeb9334467ddcbac00b928247c5bd6efae3bb85cc9d2e39de50176030e38b1\"" Nov 4 23:49:49.803826 containerd[1684]: time="2025-11-04T23:49:49.803794316Z" level=info msg="CreateContainer within sandbox \"2baeb9334467ddcbac00b928247c5bd6efae3bb85cc9d2e39de50176030e38b1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 23:49:49.810603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount780526381.mount: Deactivated successfully. 
Nov 4 23:49:49.810812 containerd[1684]: time="2025-11-04T23:49:49.810703897Z" level=info msg="Container 0e19462201790096964bf16707a890674e2343b5f06de3d98a8e5e70db2e1211: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:49:49.815186 containerd[1684]: time="2025-11-04T23:49:49.815166246Z" level=info msg="CreateContainer within sandbox \"2baeb9334467ddcbac00b928247c5bd6efae3bb85cc9d2e39de50176030e38b1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0e19462201790096964bf16707a890674e2343b5f06de3d98a8e5e70db2e1211\"" Nov 4 23:49:49.815723 containerd[1684]: time="2025-11-04T23:49:49.815702781Z" level=info msg="StartContainer for \"0e19462201790096964bf16707a890674e2343b5f06de3d98a8e5e70db2e1211\"" Nov 4 23:49:49.816553 containerd[1684]: time="2025-11-04T23:49:49.816512405Z" level=info msg="connecting to shim 0e19462201790096964bf16707a890674e2343b5f06de3d98a8e5e70db2e1211" address="unix:///run/containerd/s/09b3a05ba23afaf2413d7925a26e989a1092c3d89a06cc209abd236d134ff5c9" protocol=ttrpc version=3 Nov 4 23:49:49.833618 systemd[1]: Started cri-containerd-0e19462201790096964bf16707a890674e2343b5f06de3d98a8e5e70db2e1211.scope - libcontainer container 0e19462201790096964bf16707a890674e2343b5f06de3d98a8e5e70db2e1211. 
Nov 4 23:49:49.858623 containerd[1684]: time="2025-11-04T23:49:49.858572979Z" level=info msg="StartContainer for \"0e19462201790096964bf16707a890674e2343b5f06de3d98a8e5e70db2e1211\" returns successfully" Nov 4 23:49:50.004346 containerd[1684]: time="2025-11-04T23:49:50.003962781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-z68fj,Uid:dbd5f535-37cb-49ea-b8fa-c852848084db,Namespace:tigera-operator,Attempt:0,}" Nov 4 23:49:50.014419 containerd[1684]: time="2025-11-04T23:49:50.014329345Z" level=info msg="connecting to shim 4ba4cef980ada35e3c287680f5fe82aa74aee6a237dc63b32ad61979bf798f5e" address="unix:///run/containerd/s/f8fb0ebeb41b76e727c2ef9c61361ae0822ecd67623e0ba055f42eecce033a64" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:49:50.034592 systemd[1]: Started cri-containerd-4ba4cef980ada35e3c287680f5fe82aa74aee6a237dc63b32ad61979bf798f5e.scope - libcontainer container 4ba4cef980ada35e3c287680f5fe82aa74aee6a237dc63b32ad61979bf798f5e. Nov 4 23:49:50.067722 containerd[1684]: time="2025-11-04T23:49:50.067694279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-z68fj,Uid:dbd5f535-37cb-49ea-b8fa-c852848084db,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4ba4cef980ada35e3c287680f5fe82aa74aee6a237dc63b32ad61979bf798f5e\"" Nov 4 23:49:50.069769 containerd[1684]: time="2025-11-04T23:49:50.069752004Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 4 23:49:50.800925 kubelet[3005]: I1104 23:49:50.800887 3005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-57j9q" podStartSLOduration=2.800866785 podStartE2EDuration="2.800866785s" podCreationTimestamp="2025-11-04 23:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:49:50.79803325 +0000 UTC m=+9.239009032" watchObservedRunningTime="2025-11-04 23:49:50.800866785 
+0000 UTC m=+9.241842559" Nov 4 23:49:51.491162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1938246670.mount: Deactivated successfully. Nov 4 23:49:51.998916 containerd[1684]: time="2025-11-04T23:49:51.998887896Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:49:52.012748 containerd[1684]: time="2025-11-04T23:49:52.012609981Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:49:52.033895 containerd[1684]: time="2025-11-04T23:49:52.033862937Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 4 23:49:52.067565 containerd[1684]: time="2025-11-04T23:49:52.067535325Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:49:52.068209 containerd[1684]: time="2025-11-04T23:49:52.068191655Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.998420837s" Nov 4 23:49:52.068239 containerd[1684]: time="2025-11-04T23:49:52.068211049Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 4 23:49:52.082714 containerd[1684]: time="2025-11-04T23:49:52.082684882Z" level=info msg="CreateContainer within sandbox \"4ba4cef980ada35e3c287680f5fe82aa74aee6a237dc63b32ad61979bf798f5e\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 4 23:49:52.104517 containerd[1684]: time="2025-11-04T23:49:52.104271463Z" level=info msg="Container 925a11492d749b8120846b088bb34bc323b61abd50ad82f406c380c061e6ad80: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:49:52.126299 containerd[1684]: time="2025-11-04T23:49:52.126269845Z" level=info msg="CreateContainer within sandbox \"4ba4cef980ada35e3c287680f5fe82aa74aee6a237dc63b32ad61979bf798f5e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"925a11492d749b8120846b088bb34bc323b61abd50ad82f406c380c061e6ad80\"" Nov 4 23:49:52.126590 containerd[1684]: time="2025-11-04T23:49:52.126574860Z" level=info msg="StartContainer for \"925a11492d749b8120846b088bb34bc323b61abd50ad82f406c380c061e6ad80\"" Nov 4 23:49:52.127035 containerd[1684]: time="2025-11-04T23:49:52.127020134Z" level=info msg="connecting to shim 925a11492d749b8120846b088bb34bc323b61abd50ad82f406c380c061e6ad80" address="unix:///run/containerd/s/f8fb0ebeb41b76e727c2ef9c61361ae0822ecd67623e0ba055f42eecce033a64" protocol=ttrpc version=3 Nov 4 23:49:52.142589 systemd[1]: Started cri-containerd-925a11492d749b8120846b088bb34bc323b61abd50ad82f406c380c061e6ad80.scope - libcontainer container 925a11492d749b8120846b088bb34bc323b61abd50ad82f406c380c061e6ad80. Nov 4 23:49:52.168515 containerd[1684]: time="2025-11-04T23:49:52.167614447Z" level=info msg="StartContainer for \"925a11492d749b8120846b088bb34bc323b61abd50ad82f406c380c061e6ad80\" returns successfully" Nov 4 23:49:57.260245 sudo[2015]: pam_unix(sudo:session): session closed for user root Nov 4 23:49:57.263531 sshd[2014]: Connection closed by 139.178.68.195 port 41216 Nov 4 23:49:57.264949 sshd-session[2011]: pam_unix(sshd:session): session closed for user core Nov 4 23:49:57.268075 systemd[1]: sshd@6-139.178.70.101:22-139.178.68.195:41216.service: Deactivated successfully. Nov 4 23:49:57.270368 systemd[1]: session-9.scope: Deactivated successfully. 
Nov 4 23:49:57.271027 systemd[1]: session-9.scope: Consumed 4.088s CPU time, 153.9M memory peak. Nov 4 23:49:57.273294 systemd-logind[1666]: Session 9 logged out. Waiting for processes to exit. Nov 4 23:49:57.274836 systemd-logind[1666]: Removed session 9. Nov 4 23:50:01.552641 kubelet[3005]: I1104 23:50:01.551910 3005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-z68fj" podStartSLOduration=10.551775112 podStartE2EDuration="12.551896954s" podCreationTimestamp="2025-11-04 23:49:49 +0000 UTC" firstStartedPulling="2025-11-04 23:49:50.068587917 +0000 UTC m=+8.509563686" lastFinishedPulling="2025-11-04 23:49:52.068709757 +0000 UTC m=+10.509685528" observedRunningTime="2025-11-04 23:49:52.795665969 +0000 UTC m=+11.236641749" watchObservedRunningTime="2025-11-04 23:50:01.551896954 +0000 UTC m=+19.992872736" Nov 4 23:50:01.573803 systemd[1]: Created slice kubepods-besteffort-pod78fdd449_951d_4bc7_b7b9_771211a0f6ff.slice - libcontainer container kubepods-besteffort-pod78fdd449_951d_4bc7_b7b9_771211a0f6ff.slice. 
Nov 4 23:50:01.652179 kubelet[3005]: I1104 23:50:01.652095 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/78fdd449-951d-4bc7-b7b9-771211a0f6ff-typha-certs\") pod \"calico-typha-fc58769b6-zqlkt\" (UID: \"78fdd449-951d-4bc7-b7b9-771211a0f6ff\") " pod="calico-system/calico-typha-fc58769b6-zqlkt" Nov 4 23:50:01.652179 kubelet[3005]: I1104 23:50:01.652125 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdncp\" (UniqueName: \"kubernetes.io/projected/78fdd449-951d-4bc7-b7b9-771211a0f6ff-kube-api-access-mdncp\") pod \"calico-typha-fc58769b6-zqlkt\" (UID: \"78fdd449-951d-4bc7-b7b9-771211a0f6ff\") " pod="calico-system/calico-typha-fc58769b6-zqlkt" Nov 4 23:50:01.652179 kubelet[3005]: I1104 23:50:01.652141 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78fdd449-951d-4bc7-b7b9-771211a0f6ff-tigera-ca-bundle\") pod \"calico-typha-fc58769b6-zqlkt\" (UID: \"78fdd449-951d-4bc7-b7b9-771211a0f6ff\") " pod="calico-system/calico-typha-fc58769b6-zqlkt" Nov 4 23:50:01.912025 containerd[1684]: time="2025-11-04T23:50:01.911977930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fc58769b6-zqlkt,Uid:78fdd449-951d-4bc7-b7b9-771211a0f6ff,Namespace:calico-system,Attempt:0,}" Nov 4 23:50:01.970923 systemd[1]: Created slice kubepods-besteffort-podda2cccd5_0ad0_40b4_a153_ca39265c98ab.slice - libcontainer container kubepods-besteffort-podda2cccd5_0ad0_40b4_a153_ca39265c98ab.slice. 
Nov 4 23:50:02.055095 kubelet[3005]: I1104 23:50:02.055059 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/da2cccd5-0ad0-40b4-a153-ca39265c98ab-cni-log-dir\") pod \"calico-node-r8mrt\" (UID: \"da2cccd5-0ad0-40b4-a153-ca39265c98ab\") " pod="calico-system/calico-node-r8mrt" Nov 4 23:50:02.055095 kubelet[3005]: I1104 23:50:02.055091 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/da2cccd5-0ad0-40b4-a153-ca39265c98ab-var-run-calico\") pod \"calico-node-r8mrt\" (UID: \"da2cccd5-0ad0-40b4-a153-ca39265c98ab\") " pod="calico-system/calico-node-r8mrt" Nov 4 23:50:02.055205 kubelet[3005]: I1104 23:50:02.055113 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da2cccd5-0ad0-40b4-a153-ca39265c98ab-tigera-ca-bundle\") pod \"calico-node-r8mrt\" (UID: \"da2cccd5-0ad0-40b4-a153-ca39265c98ab\") " pod="calico-system/calico-node-r8mrt" Nov 4 23:50:02.055205 kubelet[3005]: I1104 23:50:02.055130 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/da2cccd5-0ad0-40b4-a153-ca39265c98ab-flexvol-driver-host\") pod \"calico-node-r8mrt\" (UID: \"da2cccd5-0ad0-40b4-a153-ca39265c98ab\") " pod="calico-system/calico-node-r8mrt" Nov 4 23:50:02.055205 kubelet[3005]: I1104 23:50:02.055153 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/da2cccd5-0ad0-40b4-a153-ca39265c98ab-node-certs\") pod \"calico-node-r8mrt\" (UID: \"da2cccd5-0ad0-40b4-a153-ca39265c98ab\") " pod="calico-system/calico-node-r8mrt" Nov 4 23:50:02.055205 kubelet[3005]: I1104 23:50:02.055162 3005 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da2cccd5-0ad0-40b4-a153-ca39265c98ab-xtables-lock\") pod \"calico-node-r8mrt\" (UID: \"da2cccd5-0ad0-40b4-a153-ca39265c98ab\") " pod="calico-system/calico-node-r8mrt" Nov 4 23:50:02.055205 kubelet[3005]: I1104 23:50:02.055182 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7gdr\" (UniqueName: \"kubernetes.io/projected/da2cccd5-0ad0-40b4-a153-ca39265c98ab-kube-api-access-x7gdr\") pod \"calico-node-r8mrt\" (UID: \"da2cccd5-0ad0-40b4-a153-ca39265c98ab\") " pod="calico-system/calico-node-r8mrt" Nov 4 23:50:02.055289 kubelet[3005]: I1104 23:50:02.055193 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/da2cccd5-0ad0-40b4-a153-ca39265c98ab-cni-net-dir\") pod \"calico-node-r8mrt\" (UID: \"da2cccd5-0ad0-40b4-a153-ca39265c98ab\") " pod="calico-system/calico-node-r8mrt" Nov 4 23:50:02.055289 kubelet[3005]: I1104 23:50:02.055203 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da2cccd5-0ad0-40b4-a153-ca39265c98ab-lib-modules\") pod \"calico-node-r8mrt\" (UID: \"da2cccd5-0ad0-40b4-a153-ca39265c98ab\") " pod="calico-system/calico-node-r8mrt" Nov 4 23:50:02.055289 kubelet[3005]: I1104 23:50:02.055211 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/da2cccd5-0ad0-40b4-a153-ca39265c98ab-var-lib-calico\") pod \"calico-node-r8mrt\" (UID: \"da2cccd5-0ad0-40b4-a153-ca39265c98ab\") " pod="calico-system/calico-node-r8mrt" Nov 4 23:50:02.055289 kubelet[3005]: I1104 23:50:02.055220 3005 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/da2cccd5-0ad0-40b4-a153-ca39265c98ab-cni-bin-dir\") pod \"calico-node-r8mrt\" (UID: \"da2cccd5-0ad0-40b4-a153-ca39265c98ab\") " pod="calico-system/calico-node-r8mrt" Nov 4 23:50:02.055289 kubelet[3005]: I1104 23:50:02.055229 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/da2cccd5-0ad0-40b4-a153-ca39265c98ab-policysync\") pod \"calico-node-r8mrt\" (UID: \"da2cccd5-0ad0-40b4-a153-ca39265c98ab\") " pod="calico-system/calico-node-r8mrt" Nov 4 23:50:02.118291 containerd[1684]: time="2025-11-04T23:50:02.118163381Z" level=info msg="connecting to shim a69c0820516ebb5046a58e87c5e1cfd20bade12cd41c70442df06383bf464386" address="unix:///run/containerd/s/b7f9c376032ed2318d805c2cfe90690a3c9ec996ab42b8f7245fd3af17712440" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:50:02.146796 systemd[1]: Started cri-containerd-a69c0820516ebb5046a58e87c5e1cfd20bade12cd41c70442df06383bf464386.scope - libcontainer container a69c0820516ebb5046a58e87c5e1cfd20bade12cd41c70442df06383bf464386. 
Nov 4 23:50:02.167560 kubelet[3005]: E1104 23:50:02.166688 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:50:02.183375 kubelet[3005]: E1104 23:50:02.183252 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.183375 kubelet[3005]: W1104 23:50:02.183274 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.189980 kubelet[3005]: E1104 23:50:02.189947 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.206761 containerd[1684]: time="2025-11-04T23:50:02.206741796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fc58769b6-zqlkt,Uid:78fdd449-951d-4bc7-b7b9-771211a0f6ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"a69c0820516ebb5046a58e87c5e1cfd20bade12cd41c70442df06383bf464386\"" Nov 4 23:50:02.207641 containerd[1684]: time="2025-11-04T23:50:02.207625500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 4 23:50:02.231626 kubelet[3005]: E1104 23:50:02.231602 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.231747 kubelet[3005]: W1104 23:50:02.231708 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.231747 kubelet[3005]: E1104 23:50:02.231726 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.246732 kubelet[3005]: E1104 23:50:02.246645 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.246732 kubelet[3005]: W1104 23:50:02.246665 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.246732 kubelet[3005]: E1104 23:50:02.246681 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.246962 kubelet[3005]: E1104 23:50:02.246908 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.246962 kubelet[3005]: W1104 23:50:02.246915 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.246962 kubelet[3005]: E1104 23:50:02.246921 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.247152 kubelet[3005]: E1104 23:50:02.247146 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.247195 kubelet[3005]: W1104 23:50:02.247189 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.247231 kubelet[3005]: E1104 23:50:02.247226 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.253645 kubelet[3005]: E1104 23:50:02.253539 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.253645 kubelet[3005]: W1104 23:50:02.253553 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.253645 kubelet[3005]: E1104 23:50:02.253567 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.253794 kubelet[3005]: E1104 23:50:02.253787 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.253840 kubelet[3005]: W1104 23:50:02.253834 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.253916 kubelet[3005]: E1104 23:50:02.253870 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.253974 kubelet[3005]: E1104 23:50:02.253969 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.254050 kubelet[3005]: W1104 23:50:02.254002 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.254050 kubelet[3005]: E1104 23:50:02.254009 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.254130 kubelet[3005]: E1104 23:50:02.254124 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.254228 kubelet[3005]: W1104 23:50:02.254162 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.254228 kubelet[3005]: E1104 23:50:02.254171 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.254310 kubelet[3005]: E1104 23:50:02.254304 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.254395 kubelet[3005]: W1104 23:50:02.254339 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.254395 kubelet[3005]: E1104 23:50:02.254346 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.254465 kubelet[3005]: E1104 23:50:02.254459 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.254577 kubelet[3005]: W1104 23:50:02.254507 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.254577 kubelet[3005]: E1104 23:50:02.254516 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.254656 kubelet[3005]: E1104 23:50:02.254651 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.254689 kubelet[3005]: W1104 23:50:02.254684 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.254724 kubelet[3005]: E1104 23:50:02.254719 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.254890 kubelet[3005]: E1104 23:50:02.254836 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.254890 kubelet[3005]: W1104 23:50:02.254843 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.254890 kubelet[3005]: E1104 23:50:02.254849 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.254983 kubelet[3005]: E1104 23:50:02.254978 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.255015 kubelet[3005]: W1104 23:50:02.255010 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.255044 kubelet[3005]: E1104 23:50:02.255039 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.255210 kubelet[3005]: E1104 23:50:02.255160 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.255210 kubelet[3005]: W1104 23:50:02.255168 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.255210 kubelet[3005]: E1104 23:50:02.255172 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.255391 kubelet[3005]: E1104 23:50:02.255329 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.255391 kubelet[3005]: W1104 23:50:02.255337 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.255391 kubelet[3005]: E1104 23:50:02.255345 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.255489 kubelet[3005]: E1104 23:50:02.255483 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.255614 kubelet[3005]: W1104 23:50:02.255558 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.255614 kubelet[3005]: E1104 23:50:02.255568 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.255695 kubelet[3005]: E1104 23:50:02.255689 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.255731 kubelet[3005]: W1104 23:50:02.255726 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.255805 kubelet[3005]: E1104 23:50:02.255758 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.255880 kubelet[3005]: E1104 23:50:02.255874 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.255988 kubelet[3005]: W1104 23:50:02.255940 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.255988 kubelet[3005]: E1104 23:50:02.255948 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.256113 kubelet[3005]: E1104 23:50:02.256027 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.256113 kubelet[3005]: W1104 23:50:02.256031 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.256113 kubelet[3005]: E1104 23:50:02.256036 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.256191 kubelet[3005]: E1104 23:50:02.256186 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.256269 kubelet[3005]: W1104 23:50:02.256222 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.256269 kubelet[3005]: E1104 23:50:02.256229 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.256349 kubelet[3005]: E1104 23:50:02.256343 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.256381 kubelet[3005]: W1104 23:50:02.256376 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.256417 kubelet[3005]: E1104 23:50:02.256408 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.256601 kubelet[3005]: E1104 23:50:02.256596 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.256724 kubelet[3005]: W1104 23:50:02.256644 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.256724 kubelet[3005]: E1104 23:50:02.256652 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.256724 kubelet[3005]: I1104 23:50:02.256670 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87667173-4fdd-43b0-b698-59acc1ea7515-kubelet-dir\") pod \"csi-node-driver-j2cxk\" (UID: \"87667173-4fdd-43b0-b698-59acc1ea7515\") " pod="calico-system/csi-node-driver-j2cxk" Nov 4 23:50:02.256868 kubelet[3005]: E1104 23:50:02.256825 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.256868 kubelet[3005]: W1104 23:50:02.256832 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.256868 kubelet[3005]: E1104 23:50:02.256838 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.256868 kubelet[3005]: I1104 23:50:02.256851 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/87667173-4fdd-43b0-b698-59acc1ea7515-varrun\") pod \"csi-node-driver-j2cxk\" (UID: \"87667173-4fdd-43b0-b698-59acc1ea7515\") " pod="calico-system/csi-node-driver-j2cxk" Nov 4 23:50:02.256947 kubelet[3005]: E1104 23:50:02.256940 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.256987 kubelet[3005]: W1104 23:50:02.256947 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.256987 kubelet[3005]: E1104 23:50:02.256955 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.257071 kubelet[3005]: E1104 23:50:02.257042 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.257071 kubelet[3005]: W1104 23:50:02.257048 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.257071 kubelet[3005]: E1104 23:50:02.257053 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.257146 kubelet[3005]: E1104 23:50:02.257136 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.257146 kubelet[3005]: W1104 23:50:02.257143 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.257242 kubelet[3005]: E1104 23:50:02.257148 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.257242 kubelet[3005]: I1104 23:50:02.257161 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/87667173-4fdd-43b0-b698-59acc1ea7515-registration-dir\") pod \"csi-node-driver-j2cxk\" (UID: \"87667173-4fdd-43b0-b698-59acc1ea7515\") " pod="calico-system/csi-node-driver-j2cxk" Nov 4 23:50:02.257360 kubelet[3005]: E1104 23:50:02.257303 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.257360 kubelet[3005]: W1104 23:50:02.257309 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.257360 kubelet[3005]: E1104 23:50:02.257319 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.257505 kubelet[3005]: E1104 23:50:02.257459 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.257505 kubelet[3005]: W1104 23:50:02.257464 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.257505 kubelet[3005]: E1104 23:50:02.257469 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.257654 kubelet[3005]: E1104 23:50:02.257648 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.257726 kubelet[3005]: W1104 23:50:02.257685 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.257726 kubelet[3005]: E1104 23:50:02.257693 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.257726 kubelet[3005]: I1104 23:50:02.257707 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8ng9\" (UniqueName: \"kubernetes.io/projected/87667173-4fdd-43b0-b698-59acc1ea7515-kube-api-access-t8ng9\") pod \"csi-node-driver-j2cxk\" (UID: \"87667173-4fdd-43b0-b698-59acc1ea7515\") " pod="calico-system/csi-node-driver-j2cxk" Nov 4 23:50:02.257784 kubelet[3005]: E1104 23:50:02.257773 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.257784 kubelet[3005]: W1104 23:50:02.257778 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.257784 kubelet[3005]: E1104 23:50:02.257784 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.257856 kubelet[3005]: E1104 23:50:02.257845 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.257856 kubelet[3005]: W1104 23:50:02.257853 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.257932 kubelet[3005]: E1104 23:50:02.257858 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.257956 kubelet[3005]: E1104 23:50:02.257938 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.257956 kubelet[3005]: W1104 23:50:02.257942 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.257956 kubelet[3005]: E1104 23:50:02.257946 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.257956 kubelet[3005]: I1104 23:50:02.257955 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/87667173-4fdd-43b0-b698-59acc1ea7515-socket-dir\") pod \"csi-node-driver-j2cxk\" (UID: \"87667173-4fdd-43b0-b698-59acc1ea7515\") " pod="calico-system/csi-node-driver-j2cxk" Nov 4 23:50:02.258034 kubelet[3005]: E1104 23:50:02.258024 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.258034 kubelet[3005]: W1104 23:50:02.258032 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.258124 kubelet[3005]: E1104 23:50:02.258038 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.258124 kubelet[3005]: E1104 23:50:02.258114 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.258124 kubelet[3005]: W1104 23:50:02.258118 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.258124 kubelet[3005]: E1104 23:50:02.258123 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.258223 kubelet[3005]: E1104 23:50:02.258199 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.258223 kubelet[3005]: W1104 23:50:02.258204 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.258223 kubelet[3005]: E1104 23:50:02.258209 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.258324 kubelet[3005]: E1104 23:50:02.258280 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.258324 kubelet[3005]: W1104 23:50:02.258284 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.258324 kubelet[3005]: E1104 23:50:02.258288 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.273023 containerd[1684]: time="2025-11-04T23:50:02.272994843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r8mrt,Uid:da2cccd5-0ad0-40b4-a153-ca39265c98ab,Namespace:calico-system,Attempt:0,}" Nov 4 23:50:02.358304 kubelet[3005]: E1104 23:50:02.358275 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.358304 kubelet[3005]: W1104 23:50:02.358291 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.358304 kubelet[3005]: E1104 23:50:02.358306 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.358690 kubelet[3005]: E1104 23:50:02.358418 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.358690 kubelet[3005]: W1104 23:50:02.358423 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.358690 kubelet[3005]: E1104 23:50:02.358428 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.358690 kubelet[3005]: E1104 23:50:02.358544 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.358690 kubelet[3005]: W1104 23:50:02.358548 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.358690 kubelet[3005]: E1104 23:50:02.358554 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.358690 kubelet[3005]: E1104 23:50:02.358661 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.358690 kubelet[3005]: W1104 23:50:02.358666 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.358690 kubelet[3005]: E1104 23:50:02.358671 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.359023 kubelet[3005]: E1104 23:50:02.358774 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.359023 kubelet[3005]: W1104 23:50:02.358779 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.359023 kubelet[3005]: E1104 23:50:02.358783 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.359023 kubelet[3005]: E1104 23:50:02.358912 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.359023 kubelet[3005]: W1104 23:50:02.358918 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.359023 kubelet[3005]: E1104 23:50:02.358924 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.359023 kubelet[3005]: E1104 23:50:02.358996 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.359023 kubelet[3005]: W1104 23:50:02.359002 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.359023 kubelet[3005]: E1104 23:50:02.359007 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.359400 kubelet[3005]: E1104 23:50:02.359085 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.359400 kubelet[3005]: W1104 23:50:02.359089 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.359400 kubelet[3005]: E1104 23:50:02.359093 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.359400 kubelet[3005]: E1104 23:50:02.359236 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.359400 kubelet[3005]: W1104 23:50:02.359242 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.359400 kubelet[3005]: E1104 23:50:02.359247 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.359400 kubelet[3005]: E1104 23:50:02.359310 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.359400 kubelet[3005]: W1104 23:50:02.359314 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.359400 kubelet[3005]: E1104 23:50:02.359319 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.359400 kubelet[3005]: E1104 23:50:02.359381 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.359757 kubelet[3005]: W1104 23:50:02.359387 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.359757 kubelet[3005]: E1104 23:50:02.359394 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.359757 kubelet[3005]: E1104 23:50:02.359554 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.359757 kubelet[3005]: W1104 23:50:02.359561 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.359757 kubelet[3005]: E1104 23:50:02.359570 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.359757 kubelet[3005]: E1104 23:50:02.359713 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.359757 kubelet[3005]: W1104 23:50:02.359720 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.359757 kubelet[3005]: E1104 23:50:02.359727 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.359896 kubelet[3005]: E1104 23:50:02.359837 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.359896 kubelet[3005]: W1104 23:50:02.359843 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.359896 kubelet[3005]: E1104 23:50:02.359849 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.359985 kubelet[3005]: E1104 23:50:02.359974 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.359985 kubelet[3005]: W1104 23:50:02.359982 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.360041 kubelet[3005]: E1104 23:50:02.359988 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.360089 kubelet[3005]: E1104 23:50:02.360077 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.360089 kubelet[3005]: W1104 23:50:02.360085 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.360125 kubelet[3005]: E1104 23:50:02.360091 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.360178 kubelet[3005]: E1104 23:50:02.360167 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.360178 kubelet[3005]: W1104 23:50:02.360175 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.360221 kubelet[3005]: E1104 23:50:02.360182 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.360258 kubelet[3005]: E1104 23:50:02.360247 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.360258 kubelet[3005]: W1104 23:50:02.360255 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.360297 kubelet[3005]: E1104 23:50:02.360259 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.360331 kubelet[3005]: E1104 23:50:02.360321 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.360331 kubelet[3005]: W1104 23:50:02.360329 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.360396 kubelet[3005]: E1104 23:50:02.360334 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.360413 kubelet[3005]: E1104 23:50:02.360408 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.360428 kubelet[3005]: W1104 23:50:02.360413 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.360428 kubelet[3005]: E1104 23:50:02.360418 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.360514 kubelet[3005]: E1104 23:50:02.360487 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.360514 kubelet[3005]: W1104 23:50:02.360503 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.360514 kubelet[3005]: E1104 23:50:02.360510 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.360584 kubelet[3005]: E1104 23:50:02.360574 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.360584 kubelet[3005]: W1104 23:50:02.360580 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.360621 kubelet[3005]: E1104 23:50:02.360585 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.360651 kubelet[3005]: E1104 23:50:02.360644 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.360651 kubelet[3005]: W1104 23:50:02.360648 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.360690 kubelet[3005]: E1104 23:50:02.360653 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.360740 kubelet[3005]: E1104 23:50:02.360729 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.360740 kubelet[3005]: W1104 23:50:02.360738 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.360776 kubelet[3005]: E1104 23:50:02.360744 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.369613 kubelet[3005]: E1104 23:50:02.369589 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.369613 kubelet[3005]: W1104 23:50:02.369605 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.369613 kubelet[3005]: E1104 23:50:02.369617 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:02.372755 kubelet[3005]: E1104 23:50:02.372741 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:02.372755 kubelet[3005]: W1104 23:50:02.372750 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:02.372806 kubelet[3005]: E1104 23:50:02.372758 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:02.606446 containerd[1684]: time="2025-11-04T23:50:02.606123391Z" level=info msg="connecting to shim eb01ab0558eba8fe173e8fa4ca3cd80d997c07f2f942a01fd159388e394f0ebe" address="unix:///run/containerd/s/bf3ad9936cefa3dcdd7abafb6ea8e8b7f7b6fdef117a0ede880bc243c8a78895" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:50:02.630657 systemd[1]: Started cri-containerd-eb01ab0558eba8fe173e8fa4ca3cd80d997c07f2f942a01fd159388e394f0ebe.scope - libcontainer container eb01ab0558eba8fe173e8fa4ca3cd80d997c07f2f942a01fd159388e394f0ebe. 
Nov 4 23:50:02.661462 containerd[1684]: time="2025-11-04T23:50:02.661434990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r8mrt,Uid:da2cccd5-0ad0-40b4-a153-ca39265c98ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb01ab0558eba8fe173e8fa4ca3cd80d997c07f2f942a01fd159388e394f0ebe\"" Nov 4 23:50:03.735143 kubelet[3005]: E1104 23:50:03.734733 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:50:03.912032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831276631.mount: Deactivated successfully. Nov 4 23:50:05.260888 containerd[1684]: time="2025-11-04T23:50:05.260731436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:50:05.277450 containerd[1684]: time="2025-11-04T23:50:05.265332715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 4 23:50:05.277686 containerd[1684]: time="2025-11-04T23:50:05.272635971Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:50:05.280695 containerd[1684]: time="2025-11-04T23:50:05.280051218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:50:05.280695 containerd[1684]: time="2025-11-04T23:50:05.280393931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.072664516s" Nov 4 23:50:05.280695 containerd[1684]: time="2025-11-04T23:50:05.280409849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 4 23:50:05.282108 containerd[1684]: time="2025-11-04T23:50:05.282065590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 4 23:50:05.301442 containerd[1684]: time="2025-11-04T23:50:05.301336796Z" level=info msg="CreateContainer within sandbox \"a69c0820516ebb5046a58e87c5e1cfd20bade12cd41c70442df06383bf464386\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 4 23:50:05.326238 containerd[1684]: time="2025-11-04T23:50:05.326199066Z" level=info msg="Container 5b0a88c4583824d329ce0d333188ce890f0b30b15d3e1b475e2a65b0a1178005: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:50:05.328188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount755562694.mount: Deactivated successfully. 
Nov 4 23:50:05.344730 containerd[1684]: time="2025-11-04T23:50:05.344658779Z" level=info msg="CreateContainer within sandbox \"a69c0820516ebb5046a58e87c5e1cfd20bade12cd41c70442df06383bf464386\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5b0a88c4583824d329ce0d333188ce890f0b30b15d3e1b475e2a65b0a1178005\"" Nov 4 23:50:05.346216 containerd[1684]: time="2025-11-04T23:50:05.345112673Z" level=info msg="StartContainer for \"5b0a88c4583824d329ce0d333188ce890f0b30b15d3e1b475e2a65b0a1178005\"" Nov 4 23:50:05.346609 containerd[1684]: time="2025-11-04T23:50:05.346589646Z" level=info msg="connecting to shim 5b0a88c4583824d329ce0d333188ce890f0b30b15d3e1b475e2a65b0a1178005" address="unix:///run/containerd/s/b7f9c376032ed2318d805c2cfe90690a3c9ec996ab42b8f7245fd3af17712440" protocol=ttrpc version=3 Nov 4 23:50:05.394657 systemd[1]: Started cri-containerd-5b0a88c4583824d329ce0d333188ce890f0b30b15d3e1b475e2a65b0a1178005.scope - libcontainer container 5b0a88c4583824d329ce0d333188ce890f0b30b15d3e1b475e2a65b0a1178005. 
Nov 4 23:50:05.461895 containerd[1684]: time="2025-11-04T23:50:05.461874885Z" level=info msg="StartContainer for \"5b0a88c4583824d329ce0d333188ce890f0b30b15d3e1b475e2a65b0a1178005\" returns successfully" Nov 4 23:50:05.738523 kubelet[3005]: E1104 23:50:05.738470 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:50:05.823178 kubelet[3005]: I1104 23:50:05.823092 3005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fc58769b6-zqlkt" podStartSLOduration=1.7487302279999999 podStartE2EDuration="4.823078212s" podCreationTimestamp="2025-11-04 23:50:01 +0000 UTC" firstStartedPulling="2025-11-04 23:50:02.207483106 +0000 UTC m=+20.648458876" lastFinishedPulling="2025-11-04 23:50:05.281831088 +0000 UTC m=+23.722806860" observedRunningTime="2025-11-04 23:50:05.822109848 +0000 UTC m=+24.263085630" watchObservedRunningTime="2025-11-04 23:50:05.823078212 +0000 UTC m=+24.264053996" Nov 4 23:50:05.877570 kubelet[3005]: E1104 23:50:05.877547 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:05.877570 kubelet[3005]: W1104 23:50:05.877564 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:05.877689 kubelet[3005]: E1104 23:50:05.877583 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:06.821029 kubelet[3005]: I1104 23:50:06.820979 3005 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 23:50:06.885300 kubelet[3005]: E1104 23:50:06.885279 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.885300 kubelet[3005]: W1104 23:50:06.885296 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.885419 kubelet[3005]: E1104 23:50:06.885312 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:06.886426 kubelet[3005]: E1104 23:50:06.886366 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.886426 kubelet[3005]: W1104 23:50:06.886369 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.886426 kubelet[3005]: E1104 23:50:06.886374 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:06.886597 kubelet[3005]: E1104 23:50:06.886441 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.886597 kubelet[3005]: W1104 23:50:06.886445 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.886597 kubelet[3005]: E1104 23:50:06.886449 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:06.894861 kubelet[3005]: E1104 23:50:06.894810 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.894861 kubelet[3005]: W1104 23:50:06.894830 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.894861 kubelet[3005]: E1104 23:50:06.894844 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:06.895124 kubelet[3005]: E1104 23:50:06.895091 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.895124 kubelet[3005]: W1104 23:50:06.895097 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.895124 kubelet[3005]: E1104 23:50:06.895103 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:06.895226 kubelet[3005]: E1104 23:50:06.895215 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.895226 kubelet[3005]: W1104 23:50:06.895222 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.895280 kubelet[3005]: E1104 23:50:06.895228 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:06.895308 kubelet[3005]: E1104 23:50:06.895302 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.895308 kubelet[3005]: W1104 23:50:06.895307 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.895367 kubelet[3005]: E1104 23:50:06.895311 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:06.895399 kubelet[3005]: E1104 23:50:06.895378 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.895399 kubelet[3005]: W1104 23:50:06.895382 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.895399 kubelet[3005]: E1104 23:50:06.895387 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:06.895481 kubelet[3005]: E1104 23:50:06.895471 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.895481 kubelet[3005]: W1104 23:50:06.895477 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.895589 kubelet[3005]: E1104 23:50:06.895483 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:06.895705 kubelet[3005]: E1104 23:50:06.895665 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.895705 kubelet[3005]: W1104 23:50:06.895671 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.895705 kubelet[3005]: E1104 23:50:06.895677 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:06.895770 kubelet[3005]: E1104 23:50:06.895764 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.895770 kubelet[3005]: W1104 23:50:06.895769 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.895825 kubelet[3005]: E1104 23:50:06.895774 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:06.895876 kubelet[3005]: E1104 23:50:06.895867 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.895912 kubelet[3005]: W1104 23:50:06.895875 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.895912 kubelet[3005]: E1104 23:50:06.895881 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:06.895952 kubelet[3005]: E1104 23:50:06.895948 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.895988 kubelet[3005]: W1104 23:50:06.895956 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.895988 kubelet[3005]: E1104 23:50:06.895963 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:06.896042 kubelet[3005]: E1104 23:50:06.896038 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.896042 kubelet[3005]: W1104 23:50:06.896042 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.896088 kubelet[3005]: E1104 23:50:06.896046 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:06.896502 kubelet[3005]: E1104 23:50:06.896431 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.896502 kubelet[3005]: W1104 23:50:06.896468 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.896502 kubelet[3005]: E1104 23:50:06.896475 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:06.896779 kubelet[3005]: E1104 23:50:06.896773 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.896819 kubelet[3005]: W1104 23:50:06.896810 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.896860 kubelet[3005]: E1104 23:50:06.896854 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:06.897017 kubelet[3005]: E1104 23:50:06.896971 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.897017 kubelet[3005]: W1104 23:50:06.896977 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.897017 kubelet[3005]: E1104 23:50:06.896982 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:06.897118 kubelet[3005]: E1104 23:50:06.897113 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.903969 kubelet[3005]: W1104 23:50:06.897221 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.903969 kubelet[3005]: E1104 23:50:06.897248 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:06.903969 kubelet[3005]: E1104 23:50:06.897408 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.903969 kubelet[3005]: W1104 23:50:06.897413 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.903969 kubelet[3005]: E1104 23:50:06.897418 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:06.903969 kubelet[3005]: E1104 23:50:06.897852 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.903969 kubelet[3005]: W1104 23:50:06.897857 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.903969 kubelet[3005]: E1104 23:50:06.897867 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:50:06.903969 kubelet[3005]: E1104 23:50:06.898770 3005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:50:06.903969 kubelet[3005]: W1104 23:50:06.898776 3005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:50:06.904159 kubelet[3005]: E1104 23:50:06.898786 3005 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:50:07.160608 containerd[1684]: time="2025-11-04T23:50:07.160553843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:50:07.173443 containerd[1684]: time="2025-11-04T23:50:07.173418580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 4 23:50:07.183982 containerd[1684]: time="2025-11-04T23:50:07.183959446Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:50:07.197928 containerd[1684]: time="2025-11-04T23:50:07.197846488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:50:07.198907 containerd[1684]: time="2025-11-04T23:50:07.198880599Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.916767901s" Nov 4 23:50:07.198907 containerd[1684]: time="2025-11-04T23:50:07.198905514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 4 23:50:07.218840 containerd[1684]: time="2025-11-04T23:50:07.218810960Z" level=info msg="CreateContainer within sandbox \"eb01ab0558eba8fe173e8fa4ca3cd80d997c07f2f942a01fd159388e394f0ebe\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 4 23:50:07.311928 containerd[1684]: time="2025-11-04T23:50:07.311869356Z" level=info msg="Container a3f1acd1f1ef5682a90fd785f0141d87671a7eccae30ea1b33d3829019bd2bbb: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:50:07.381320 containerd[1684]: time="2025-11-04T23:50:07.381286760Z" level=info msg="CreateContainer within sandbox \"eb01ab0558eba8fe173e8fa4ca3cd80d997c07f2f942a01fd159388e394f0ebe\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a3f1acd1f1ef5682a90fd785f0141d87671a7eccae30ea1b33d3829019bd2bbb\"" Nov 4 23:50:07.381907 containerd[1684]: time="2025-11-04T23:50:07.381865545Z" level=info msg="StartContainer for \"a3f1acd1f1ef5682a90fd785f0141d87671a7eccae30ea1b33d3829019bd2bbb\"" Nov 4 23:50:07.383319 containerd[1684]: time="2025-11-04T23:50:07.383288842Z" level=info msg="connecting to shim a3f1acd1f1ef5682a90fd785f0141d87671a7eccae30ea1b33d3829019bd2bbb" address="unix:///run/containerd/s/bf3ad9936cefa3dcdd7abafb6ea8e8b7f7b6fdef117a0ede880bc243c8a78895" protocol=ttrpc version=3 Nov 4 23:50:07.405618 systemd[1]: Started cri-containerd-a3f1acd1f1ef5682a90fd785f0141d87671a7eccae30ea1b33d3829019bd2bbb.scope - libcontainer container a3f1acd1f1ef5682a90fd785f0141d87671a7eccae30ea1b33d3829019bd2bbb. 
Nov 4 23:50:07.458948 containerd[1684]: time="2025-11-04T23:50:07.458703670Z" level=info msg="StartContainer for \"a3f1acd1f1ef5682a90fd785f0141d87671a7eccae30ea1b33d3829019bd2bbb\" returns successfully" Nov 4 23:50:07.459544 systemd[1]: cri-containerd-a3f1acd1f1ef5682a90fd785f0141d87671a7eccae30ea1b33d3829019bd2bbb.scope: Deactivated successfully. Nov 4 23:50:07.478333 containerd[1684]: time="2025-11-04T23:50:07.478165082Z" level=info msg="received exit event container_id:\"a3f1acd1f1ef5682a90fd785f0141d87671a7eccae30ea1b33d3829019bd2bbb\" id:\"a3f1acd1f1ef5682a90fd785f0141d87671a7eccae30ea1b33d3829019bd2bbb\" pid:3710 exited_at:{seconds:1762300207 nanos:464074326}" Nov 4 23:50:07.506851 containerd[1684]: time="2025-11-04T23:50:07.506812263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3f1acd1f1ef5682a90fd785f0141d87671a7eccae30ea1b33d3829019bd2bbb\" id:\"a3f1acd1f1ef5682a90fd785f0141d87671a7eccae30ea1b33d3829019bd2bbb\" pid:3710 exited_at:{seconds:1762300207 nanos:464074326}" Nov 4 23:50:07.515758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3f1acd1f1ef5682a90fd785f0141d87671a7eccae30ea1b33d3829019bd2bbb-rootfs.mount: Deactivated successfully. 
Nov 4 23:50:07.735644 kubelet[3005]: E1104 23:50:07.735260 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:50:09.736186 kubelet[3005]: E1104 23:50:09.735590 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:50:09.820556 containerd[1684]: time="2025-11-04T23:50:09.820529117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 4 23:50:11.735101 kubelet[3005]: E1104 23:50:11.734894 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:50:13.736026 kubelet[3005]: E1104 23:50:13.734701 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:50:14.120174 containerd[1684]: time="2025-11-04T23:50:14.120078467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:50:14.120653 containerd[1684]: time="2025-11-04T23:50:14.120635903Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 4 23:50:14.121516 containerd[1684]: time="2025-11-04T23:50:14.120940925Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:50:14.121897 containerd[1684]: time="2025-11-04T23:50:14.121877957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:50:14.122306 containerd[1684]: time="2025-11-04T23:50:14.122292669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.301739779s" Nov 4 23:50:14.122358 containerd[1684]: time="2025-11-04T23:50:14.122350096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 4 23:50:14.124673 containerd[1684]: time="2025-11-04T23:50:14.124443403Z" level=info msg="CreateContainer within sandbox \"eb01ab0558eba8fe173e8fa4ca3cd80d997c07f2f942a01fd159388e394f0ebe\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 4 23:50:14.175178 containerd[1684]: time="2025-11-04T23:50:14.175150741Z" level=info msg="Container 5ecf4eee56f561ae438dc8ad70d3c7279cde38d4672ea0c6d89993b3a34588d6: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:50:14.177780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3469232856.mount: Deactivated successfully. 
Nov 4 23:50:14.263674 containerd[1684]: time="2025-11-04T23:50:14.263636382Z" level=info msg="CreateContainer within sandbox \"eb01ab0558eba8fe173e8fa4ca3cd80d997c07f2f942a01fd159388e394f0ebe\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5ecf4eee56f561ae438dc8ad70d3c7279cde38d4672ea0c6d89993b3a34588d6\"" Nov 4 23:50:14.264292 containerd[1684]: time="2025-11-04T23:50:14.264176649Z" level=info msg="StartContainer for \"5ecf4eee56f561ae438dc8ad70d3c7279cde38d4672ea0c6d89993b3a34588d6\"" Nov 4 23:50:14.265267 containerd[1684]: time="2025-11-04T23:50:14.265243798Z" level=info msg="connecting to shim 5ecf4eee56f561ae438dc8ad70d3c7279cde38d4672ea0c6d89993b3a34588d6" address="unix:///run/containerd/s/bf3ad9936cefa3dcdd7abafb6ea8e8b7f7b6fdef117a0ede880bc243c8a78895" protocol=ttrpc version=3 Nov 4 23:50:14.288624 systemd[1]: Started cri-containerd-5ecf4eee56f561ae438dc8ad70d3c7279cde38d4672ea0c6d89993b3a34588d6.scope - libcontainer container 5ecf4eee56f561ae438dc8ad70d3c7279cde38d4672ea0c6d89993b3a34588d6. Nov 4 23:50:14.325240 containerd[1684]: time="2025-11-04T23:50:14.325213492Z" level=info msg="StartContainer for \"5ecf4eee56f561ae438dc8ad70d3c7279cde38d4672ea0c6d89993b3a34588d6\" returns successfully" Nov 4 23:50:15.507259 systemd[1]: cri-containerd-5ecf4eee56f561ae438dc8ad70d3c7279cde38d4672ea0c6d89993b3a34588d6.scope: Deactivated successfully. Nov 4 23:50:15.507460 systemd[1]: cri-containerd-5ecf4eee56f561ae438dc8ad70d3c7279cde38d4672ea0c6d89993b3a34588d6.scope: Consumed 312ms CPU time, 158.1M memory peak, 1.4M read from disk, 171.3M written to disk. 
Nov 4 23:50:15.519046 containerd[1684]: time="2025-11-04T23:50:15.519019118Z" level=info msg="received exit event container_id:\"5ecf4eee56f561ae438dc8ad70d3c7279cde38d4672ea0c6d89993b3a34588d6\" id:\"5ecf4eee56f561ae438dc8ad70d3c7279cde38d4672ea0c6d89993b3a34588d6\" pid:3769 exited_at:{seconds:1762300215 nanos:518434677}" Nov 4 23:50:15.519237 containerd[1684]: time="2025-11-04T23:50:15.519213107Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ecf4eee56f561ae438dc8ad70d3c7279cde38d4672ea0c6d89993b3a34588d6\" id:\"5ecf4eee56f561ae438dc8ad70d3c7279cde38d4672ea0c6d89993b3a34588d6\" pid:3769 exited_at:{seconds:1762300215 nanos:518434677}" Nov 4 23:50:15.568939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ecf4eee56f561ae438dc8ad70d3c7279cde38d4672ea0c6d89993b3a34588d6-rootfs.mount: Deactivated successfully. Nov 4 23:50:15.595115 kubelet[3005]: I1104 23:50:15.595048 3005 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 4 23:50:15.742125 systemd[1]: Created slice kubepods-besteffort-pod011689d6_d825_4490_866e_209734536e09.slice - libcontainer container kubepods-besteffort-pod011689d6_d825_4490_866e_209734536e09.slice. Nov 4 23:50:15.749074 systemd[1]: Created slice kubepods-besteffort-pod22a0a317_e976_4828_b61a_a22d937f284c.slice - libcontainer container kubepods-besteffort-pod22a0a317_e976_4828_b61a_a22d937f284c.slice. Nov 4 23:50:15.755944 systemd[1]: Created slice kubepods-besteffort-podbb817c56_dcfd_42dd_9fb6_688549d80317.slice - libcontainer container kubepods-besteffort-podbb817c56_dcfd_42dd_9fb6_688549d80317.slice. Nov 4 23:50:15.760201 systemd[1]: Created slice kubepods-besteffort-pod87667173_4fdd_43b0_b698_59acc1ea7515.slice - libcontainer container kubepods-besteffort-pod87667173_4fdd_43b0_b698_59acc1ea7515.slice. 
Nov 4 23:50:15.767707 systemd[1]: Created slice kubepods-burstable-pod91558b45_d6b2_43bf_b363_90b9cd5da166.slice - libcontainer container kubepods-burstable-pod91558b45_d6b2_43bf_b363_90b9cd5da166.slice. Nov 4 23:50:15.772911 containerd[1684]: time="2025-11-04T23:50:15.772883242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j2cxk,Uid:87667173-4fdd-43b0-b698-59acc1ea7515,Namespace:calico-system,Attempt:0,}" Nov 4 23:50:15.774854 systemd[1]: Created slice kubepods-burstable-podcb2b84b4_36c2_466a_bbb6_32feb10e4093.slice - libcontainer container kubepods-burstable-podcb2b84b4_36c2_466a_bbb6_32feb10e4093.slice. Nov 4 23:50:15.781545 kubelet[3005]: I1104 23:50:15.781525 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhvcj\" (UniqueName: \"kubernetes.io/projected/91558b45-d6b2-43bf-b363-90b9cd5da166-kube-api-access-nhvcj\") pod \"coredns-674b8bbfcf-fx4jk\" (UID: \"91558b45-d6b2-43bf-b363-90b9cd5da166\") " pod="kube-system/coredns-674b8bbfcf-fx4jk" Nov 4 23:50:15.781656 kubelet[3005]: I1104 23:50:15.781646 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v69cm\" (UniqueName: \"kubernetes.io/projected/22a0a317-e976-4828-b61a-a22d937f284c-kube-api-access-v69cm\") pod \"calico-kube-controllers-548bc79dd9-9vmzn\" (UID: \"22a0a317-e976-4828-b61a-a22d937f284c\") " pod="calico-system/calico-kube-controllers-548bc79dd9-9vmzn" Nov 4 23:50:15.781709 kubelet[3005]: I1104 23:50:15.781702 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bb817c56-dcfd-42dd-9fb6-688549d80317-calico-apiserver-certs\") pod \"calico-apiserver-556c6f6458-cpm97\" (UID: \"bb817c56-dcfd-42dd-9fb6-688549d80317\") " pod="calico-apiserver/calico-apiserver-556c6f6458-cpm97" Nov 4 23:50:15.781751 kubelet[3005]: I1104 23:50:15.781745 
3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4j4f\" (UniqueName: \"kubernetes.io/projected/bb817c56-dcfd-42dd-9fb6-688549d80317-kube-api-access-w4j4f\") pod \"calico-apiserver-556c6f6458-cpm97\" (UID: \"bb817c56-dcfd-42dd-9fb6-688549d80317\") " pod="calico-apiserver/calico-apiserver-556c6f6458-cpm97" Nov 4 23:50:15.781793 kubelet[3005]: I1104 23:50:15.781787 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91558b45-d6b2-43bf-b363-90b9cd5da166-config-volume\") pod \"coredns-674b8bbfcf-fx4jk\" (UID: \"91558b45-d6b2-43bf-b363-90b9cd5da166\") " pod="kube-system/coredns-674b8bbfcf-fx4jk" Nov 4 23:50:15.781839 kubelet[3005]: I1104 23:50:15.781833 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9da2e8c6-5e46-4a22-9ad0-99b948b30cea-goldmane-key-pair\") pod \"goldmane-666569f655-x2sxv\" (UID: \"9da2e8c6-5e46-4a22-9ad0-99b948b30cea\") " pod="calico-system/goldmane-666569f655-x2sxv" Nov 4 23:50:15.781964 kubelet[3005]: I1104 23:50:15.781884 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9da2e8c6-5e46-4a22-9ad0-99b948b30cea-goldmane-ca-bundle\") pod \"goldmane-666569f655-x2sxv\" (UID: \"9da2e8c6-5e46-4a22-9ad0-99b948b30cea\") " pod="calico-system/goldmane-666569f655-x2sxv" Nov 4 23:50:15.782018 kubelet[3005]: I1104 23:50:15.782005 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrpcb\" (UniqueName: \"kubernetes.io/projected/cb2b84b4-36c2-466a-bbb6-32feb10e4093-kube-api-access-nrpcb\") pod \"coredns-674b8bbfcf-nb4vh\" (UID: \"cb2b84b4-36c2-466a-bbb6-32feb10e4093\") " pod="kube-system/coredns-674b8bbfcf-nb4vh" 
Nov 4 23:50:15.782219 kubelet[3005]: I1104 23:50:15.782104 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/011689d6-d825-4490-866e-209734536e09-calico-apiserver-certs\") pod \"calico-apiserver-556c6f6458-pstkj\" (UID: \"011689d6-d825-4490-866e-209734536e09\") " pod="calico-apiserver/calico-apiserver-556c6f6458-pstkj" Nov 4 23:50:15.782295 kubelet[3005]: I1104 23:50:15.782279 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kjc5\" (UniqueName: \"kubernetes.io/projected/011689d6-d825-4490-866e-209734536e09-kube-api-access-8kjc5\") pod \"calico-apiserver-556c6f6458-pstkj\" (UID: \"011689d6-d825-4490-866e-209734536e09\") " pod="calico-apiserver/calico-apiserver-556c6f6458-pstkj" Nov 4 23:50:15.783680 kubelet[3005]: I1104 23:50:15.783670 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22a0a317-e976-4828-b61a-a22d937f284c-tigera-ca-bundle\") pod \"calico-kube-controllers-548bc79dd9-9vmzn\" (UID: \"22a0a317-e976-4828-b61a-a22d937f284c\") " pod="calico-system/calico-kube-controllers-548bc79dd9-9vmzn" Nov 4 23:50:15.783735 kubelet[3005]: I1104 23:50:15.783727 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9da2e8c6-5e46-4a22-9ad0-99b948b30cea-config\") pod \"goldmane-666569f655-x2sxv\" (UID: \"9da2e8c6-5e46-4a22-9ad0-99b948b30cea\") " pod="calico-system/goldmane-666569f655-x2sxv" Nov 4 23:50:15.783791 kubelet[3005]: I1104 23:50:15.783777 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzm8g\" (UniqueName: \"kubernetes.io/projected/9da2e8c6-5e46-4a22-9ad0-99b948b30cea-kube-api-access-pzm8g\") pod 
\"goldmane-666569f655-x2sxv\" (UID: \"9da2e8c6-5e46-4a22-9ad0-99b948b30cea\") " pod="calico-system/goldmane-666569f655-x2sxv" Nov 4 23:50:15.783838 kubelet[3005]: I1104 23:50:15.783830 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb2b84b4-36c2-466a-bbb6-32feb10e4093-config-volume\") pod \"coredns-674b8bbfcf-nb4vh\" (UID: \"cb2b84b4-36c2-466a-bbb6-32feb10e4093\") " pod="kube-system/coredns-674b8bbfcf-nb4vh" Nov 4 23:50:15.786593 systemd[1]: Created slice kubepods-besteffort-pod9da2e8c6_5e46_4a22_9ad0_99b948b30cea.slice - libcontainer container kubepods-besteffort-pod9da2e8c6_5e46_4a22_9ad0_99b948b30cea.slice. Nov 4 23:50:15.802280 systemd[1]: Created slice kubepods-besteffort-pod0836ae4f_732e_4b0c_8bb2_f4c010292701.slice - libcontainer container kubepods-besteffort-pod0836ae4f_732e_4b0c_8bb2_f4c010292701.slice. Nov 4 23:50:15.812084 systemd[1]: Created slice kubepods-besteffort-pod003bf0e1_d270_4d41_a398_c04be88d91c0.slice - libcontainer container kubepods-besteffort-pod003bf0e1_d270_4d41_a398_c04be88d91c0.slice. 
Nov 4 23:50:15.853046 containerd[1684]: time="2025-11-04T23:50:15.853023911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 4 23:50:15.885021 kubelet[3005]: I1104 23:50:15.884951 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/003bf0e1-d270-4d41-a398-c04be88d91c0-calico-apiserver-certs\") pod \"calico-apiserver-6d9799dc6f-nkdjs\" (UID: \"003bf0e1-d270-4d41-a398-c04be88d91c0\") " pod="calico-apiserver/calico-apiserver-6d9799dc6f-nkdjs" Nov 4 23:50:15.885021 kubelet[3005]: I1104 23:50:15.884994 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0836ae4f-732e-4b0c-8bb2-f4c010292701-whisker-backend-key-pair\") pod \"whisker-86d7444576-b5cr7\" (UID: \"0836ae4f-732e-4b0c-8bb2-f4c010292701\") " pod="calico-system/whisker-86d7444576-b5cr7" Nov 4 23:50:15.885021 kubelet[3005]: I1104 23:50:15.885005 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh6d9\" (UniqueName: \"kubernetes.io/projected/0836ae4f-732e-4b0c-8bb2-f4c010292701-kube-api-access-fh6d9\") pod \"whisker-86d7444576-b5cr7\" (UID: \"0836ae4f-732e-4b0c-8bb2-f4c010292701\") " pod="calico-system/whisker-86d7444576-b5cr7" Nov 4 23:50:15.885720 kubelet[3005]: I1104 23:50:15.885615 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpwdl\" (UniqueName: \"kubernetes.io/projected/003bf0e1-d270-4d41-a398-c04be88d91c0-kube-api-access-lpwdl\") pod \"calico-apiserver-6d9799dc6f-nkdjs\" (UID: \"003bf0e1-d270-4d41-a398-c04be88d91c0\") " pod="calico-apiserver/calico-apiserver-6d9799dc6f-nkdjs" Nov 4 23:50:15.885720 kubelet[3005]: I1104 23:50:15.885637 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0836ae4f-732e-4b0c-8bb2-f4c010292701-whisker-ca-bundle\") pod \"whisker-86d7444576-b5cr7\" (UID: \"0836ae4f-732e-4b0c-8bb2-f4c010292701\") " pod="calico-system/whisker-86d7444576-b5cr7" Nov 4 23:50:16.002591 containerd[1684]: time="2025-11-04T23:50:16.002554915Z" level=error msg="Failed to destroy network for sandbox \"c02800b2e5d3a8c1480940af9fc6c57015b2b9618afa8ef686a719d7d2ff2e0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.003251 containerd[1684]: time="2025-11-04T23:50:16.003225874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j2cxk,Uid:87667173-4fdd-43b0-b698-59acc1ea7515,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c02800b2e5d3a8c1480940af9fc6c57015b2b9618afa8ef686a719d7d2ff2e0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.004818 kubelet[3005]: E1104 23:50:16.003534 3005 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c02800b2e5d3a8c1480940af9fc6c57015b2b9618afa8ef686a719d7d2ff2e0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.004818 kubelet[3005]: E1104 23:50:16.003577 3005 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c02800b2e5d3a8c1480940af9fc6c57015b2b9618afa8ef686a719d7d2ff2e0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j2cxk" Nov 4 23:50:16.004818 kubelet[3005]: E1104 23:50:16.003596 3005 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c02800b2e5d3a8c1480940af9fc6c57015b2b9618afa8ef686a719d7d2ff2e0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j2cxk" Nov 4 23:50:16.004906 kubelet[3005]: E1104 23:50:16.003630 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j2cxk_calico-system(87667173-4fdd-43b0-b698-59acc1ea7515)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j2cxk_calico-system(87667173-4fdd-43b0-b698-59acc1ea7515)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c02800b2e5d3a8c1480940af9fc6c57015b2b9618afa8ef686a719d7d2ff2e0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:50:16.046093 containerd[1684]: time="2025-11-04T23:50:16.045856311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556c6f6458-pstkj,Uid:011689d6-d825-4490-866e-209734536e09,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:50:16.053480 containerd[1684]: time="2025-11-04T23:50:16.053449627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-548bc79dd9-9vmzn,Uid:22a0a317-e976-4828-b61a-a22d937f284c,Namespace:calico-system,Attempt:0,}" Nov 4 23:50:16.059413 containerd[1684]: 
time="2025-11-04T23:50:16.059388039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556c6f6458-cpm97,Uid:bb817c56-dcfd-42dd-9fb6-688549d80317,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:50:16.072131 containerd[1684]: time="2025-11-04T23:50:16.072088656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fx4jk,Uid:91558b45-d6b2-43bf-b363-90b9cd5da166,Namespace:kube-system,Attempt:0,}" Nov 4 23:50:16.085655 containerd[1684]: time="2025-11-04T23:50:16.085588171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nb4vh,Uid:cb2b84b4-36c2-466a-bbb6-32feb10e4093,Namespace:kube-system,Attempt:0,}" Nov 4 23:50:16.093889 containerd[1684]: time="2025-11-04T23:50:16.093816708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x2sxv,Uid:9da2e8c6-5e46-4a22-9ad0-99b948b30cea,Namespace:calico-system,Attempt:0,}" Nov 4 23:50:16.110567 containerd[1684]: time="2025-11-04T23:50:16.110542644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86d7444576-b5cr7,Uid:0836ae4f-732e-4b0c-8bb2-f4c010292701,Namespace:calico-system,Attempt:0,}" Nov 4 23:50:16.115592 containerd[1684]: time="2025-11-04T23:50:16.115562851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9799dc6f-nkdjs,Uid:003bf0e1-d270-4d41-a398-c04be88d91c0,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:50:16.120136 containerd[1684]: time="2025-11-04T23:50:16.120065162Z" level=error msg="Failed to destroy network for sandbox \"05fdbba35012e5cd959c722efb2c6e31f3d408ff5bf6782b17e73a35763e668b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.121840 containerd[1684]: time="2025-11-04T23:50:16.121758651Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-556c6f6458-pstkj,Uid:011689d6-d825-4490-866e-209734536e09,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"05fdbba35012e5cd959c722efb2c6e31f3d408ff5bf6782b17e73a35763e668b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.122738 kubelet[3005]: E1104 23:50:16.121937 3005 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05fdbba35012e5cd959c722efb2c6e31f3d408ff5bf6782b17e73a35763e668b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.122738 kubelet[3005]: E1104 23:50:16.121982 3005 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05fdbba35012e5cd959c722efb2c6e31f3d408ff5bf6782b17e73a35763e668b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556c6f6458-pstkj" Nov 4 23:50:16.122738 kubelet[3005]: E1104 23:50:16.122000 3005 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05fdbba35012e5cd959c722efb2c6e31f3d408ff5bf6782b17e73a35763e668b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556c6f6458-pstkj" Nov 4 23:50:16.122811 kubelet[3005]: E1104 23:50:16.122037 3005 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-556c6f6458-pstkj_calico-apiserver(011689d6-d825-4490-866e-209734536e09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-556c6f6458-pstkj_calico-apiserver(011689d6-d825-4490-866e-209734536e09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05fdbba35012e5cd959c722efb2c6e31f3d408ff5bf6782b17e73a35763e668b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-556c6f6458-pstkj" podUID="011689d6-d825-4490-866e-209734536e09" Nov 4 23:50:16.166209 containerd[1684]: time="2025-11-04T23:50:16.166164448Z" level=error msg="Failed to destroy network for sandbox \"5895ecb8465d89997c44d9195351d7a5fed08596a81be5a566f6df55f20b9cd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.166646 containerd[1684]: time="2025-11-04T23:50:16.166622915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-548bc79dd9-9vmzn,Uid:22a0a317-e976-4828-b61a-a22d937f284c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5895ecb8465d89997c44d9195351d7a5fed08596a81be5a566f6df55f20b9cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.167270 kubelet[3005]: E1104 23:50:16.166956 3005 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5895ecb8465d89997c44d9195351d7a5fed08596a81be5a566f6df55f20b9cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.167270 kubelet[3005]: E1104 23:50:16.167020 3005 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5895ecb8465d89997c44d9195351d7a5fed08596a81be5a566f6df55f20b9cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-548bc79dd9-9vmzn" Nov 4 23:50:16.167270 kubelet[3005]: E1104 23:50:16.167035 3005 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5895ecb8465d89997c44d9195351d7a5fed08596a81be5a566f6df55f20b9cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-548bc79dd9-9vmzn" Nov 4 23:50:16.167474 kubelet[3005]: E1104 23:50:16.167084 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-548bc79dd9-9vmzn_calico-system(22a0a317-e976-4828-b61a-a22d937f284c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-548bc79dd9-9vmzn_calico-system(22a0a317-e976-4828-b61a-a22d937f284c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5895ecb8465d89997c44d9195351d7a5fed08596a81be5a566f6df55f20b9cd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-548bc79dd9-9vmzn" podUID="22a0a317-e976-4828-b61a-a22d937f284c" Nov 4 23:50:16.180955 containerd[1684]: time="2025-11-04T23:50:16.180928494Z" level=error msg="Failed to destroy network for sandbox \"1c2ff1eab07feaedff220d076e8bc1e4da5690def2baf94ad6d2bfe9fcd6791e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.181755 containerd[1684]: time="2025-11-04T23:50:16.181699860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556c6f6458-cpm97,Uid:bb817c56-dcfd-42dd-9fb6-688549d80317,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c2ff1eab07feaedff220d076e8bc1e4da5690def2baf94ad6d2bfe9fcd6791e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.182739 kubelet[3005]: E1104 23:50:16.182557 3005 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c2ff1eab07feaedff220d076e8bc1e4da5690def2baf94ad6d2bfe9fcd6791e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.182739 kubelet[3005]: E1104 23:50:16.182691 3005 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c2ff1eab07feaedff220d076e8bc1e4da5690def2baf94ad6d2bfe9fcd6791e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-556c6f6458-cpm97" Nov 4 23:50:16.182739 kubelet[3005]: E1104 23:50:16.182708 3005 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c2ff1eab07feaedff220d076e8bc1e4da5690def2baf94ad6d2bfe9fcd6791e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556c6f6458-cpm97" Nov 4 23:50:16.183067 containerd[1684]: time="2025-11-04T23:50:16.182891075Z" level=error msg="Failed to destroy network for sandbox \"46ed7df12693883d16024ee2cdade3f90d9298d6784abda5e4c728f6e11bef8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.183101 kubelet[3005]: E1104 23:50:16.182929 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-556c6f6458-cpm97_calico-apiserver(bb817c56-dcfd-42dd-9fb6-688549d80317)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-556c6f6458-cpm97_calico-apiserver(bb817c56-dcfd-42dd-9fb6-688549d80317)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c2ff1eab07feaedff220d076e8bc1e4da5690def2baf94ad6d2bfe9fcd6791e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-556c6f6458-cpm97" podUID="bb817c56-dcfd-42dd-9fb6-688549d80317" Nov 4 23:50:16.184736 containerd[1684]: time="2025-11-04T23:50:16.184667609Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-fx4jk,Uid:91558b45-d6b2-43bf-b363-90b9cd5da166,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"46ed7df12693883d16024ee2cdade3f90d9298d6784abda5e4c728f6e11bef8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.184905 kubelet[3005]: E1104 23:50:16.184873 3005 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46ed7df12693883d16024ee2cdade3f90d9298d6784abda5e4c728f6e11bef8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.185920 kubelet[3005]: E1104 23:50:16.184981 3005 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46ed7df12693883d16024ee2cdade3f90d9298d6784abda5e4c728f6e11bef8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fx4jk" Nov 4 23:50:16.186323 kubelet[3005]: E1104 23:50:16.184995 3005 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46ed7df12693883d16024ee2cdade3f90d9298d6784abda5e4c728f6e11bef8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fx4jk" Nov 4 23:50:16.186323 kubelet[3005]: E1104 23:50:16.186172 3005 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fx4jk_kube-system(91558b45-d6b2-43bf-b363-90b9cd5da166)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fx4jk_kube-system(91558b45-d6b2-43bf-b363-90b9cd5da166)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46ed7df12693883d16024ee2cdade3f90d9298d6784abda5e4c728f6e11bef8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fx4jk" podUID="91558b45-d6b2-43bf-b363-90b9cd5da166" Nov 4 23:50:16.198585 containerd[1684]: time="2025-11-04T23:50:16.198543990Z" level=error msg="Failed to destroy network for sandbox \"96044bf9b629d3a32c1601d1e41e12cb66ba67d32413858cf4f71db634993c23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.198980 containerd[1684]: time="2025-11-04T23:50:16.198946309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86d7444576-b5cr7,Uid:0836ae4f-732e-4b0c-8bb2-f4c010292701,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"96044bf9b629d3a32c1601d1e41e12cb66ba67d32413858cf4f71db634993c23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.199164 kubelet[3005]: E1104 23:50:16.199136 3005 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96044bf9b629d3a32c1601d1e41e12cb66ba67d32413858cf4f71db634993c23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.199211 kubelet[3005]: E1104 23:50:16.199180 3005 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96044bf9b629d3a32c1601d1e41e12cb66ba67d32413858cf4f71db634993c23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86d7444576-b5cr7" Nov 4 23:50:16.199211 kubelet[3005]: E1104 23:50:16.199194 3005 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96044bf9b629d3a32c1601d1e41e12cb66ba67d32413858cf4f71db634993c23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86d7444576-b5cr7" Nov 4 23:50:16.199362 kubelet[3005]: E1104 23:50:16.199242 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-86d7444576-b5cr7_calico-system(0836ae4f-732e-4b0c-8bb2-f4c010292701)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-86d7444576-b5cr7_calico-system(0836ae4f-732e-4b0c-8bb2-f4c010292701)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96044bf9b629d3a32c1601d1e41e12cb66ba67d32413858cf4f71db634993c23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-86d7444576-b5cr7" podUID="0836ae4f-732e-4b0c-8bb2-f4c010292701" Nov 4 23:50:16.227221 containerd[1684]: time="2025-11-04T23:50:16.227156989Z" level=error msg="Failed to destroy 
network for sandbox \"8b15abd08fafbfb9155ce587cbc4fad3217176a20ead41ce1115e314b3c5b768\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.228973 containerd[1684]: time="2025-11-04T23:50:16.228634007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9799dc6f-nkdjs,Uid:003bf0e1-d270-4d41-a398-c04be88d91c0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b15abd08fafbfb9155ce587cbc4fad3217176a20ead41ce1115e314b3c5b768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.229050 kubelet[3005]: E1104 23:50:16.228763 3005 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b15abd08fafbfb9155ce587cbc4fad3217176a20ead41ce1115e314b3c5b768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.229050 kubelet[3005]: E1104 23:50:16.228800 3005 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b15abd08fafbfb9155ce587cbc4fad3217176a20ead41ce1115e314b3c5b768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d9799dc6f-nkdjs" Nov 4 23:50:16.229050 kubelet[3005]: E1104 23:50:16.228817 3005 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"8b15abd08fafbfb9155ce587cbc4fad3217176a20ead41ce1115e314b3c5b768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d9799dc6f-nkdjs" Nov 4 23:50:16.229129 kubelet[3005]: E1104 23:50:16.228864 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d9799dc6f-nkdjs_calico-apiserver(003bf0e1-d270-4d41-a398-c04be88d91c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d9799dc6f-nkdjs_calico-apiserver(003bf0e1-d270-4d41-a398-c04be88d91c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b15abd08fafbfb9155ce587cbc4fad3217176a20ead41ce1115e314b3c5b768\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d9799dc6f-nkdjs" podUID="003bf0e1-d270-4d41-a398-c04be88d91c0" Nov 4 23:50:16.234101 containerd[1684]: time="2025-11-04T23:50:16.234068209Z" level=error msg="Failed to destroy network for sandbox \"685fd0dda9ef456d00d49964b962bd9534fcc9a0c82d6cd94fb0086e97e7225d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.234216 containerd[1684]: time="2025-11-04T23:50:16.234090877Z" level=error msg="Failed to destroy network for sandbox \"756b08f4dd612a1bab5dcaa88622a2e6c12a126862b2b82d3864290815615fb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.234628 containerd[1684]: 
time="2025-11-04T23:50:16.234608359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nb4vh,Uid:cb2b84b4-36c2-466a-bbb6-32feb10e4093,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"756b08f4dd612a1bab5dcaa88622a2e6c12a126862b2b82d3864290815615fb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.234920 containerd[1684]: time="2025-11-04T23:50:16.234872207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x2sxv,Uid:9da2e8c6-5e46-4a22-9ad0-99b948b30cea,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"685fd0dda9ef456d00d49964b962bd9534fcc9a0c82d6cd94fb0086e97e7225d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.235053 kubelet[3005]: E1104 23:50:16.234951 3005 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"685fd0dda9ef456d00d49964b962bd9534fcc9a0c82d6cd94fb0086e97e7225d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:50:16.235053 kubelet[3005]: E1104 23:50:16.234979 3005 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"756b08f4dd612a1bab5dcaa88622a2e6c12a126862b2b82d3864290815615fb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 
4 23:50:16.235053 kubelet[3005]: E1104 23:50:16.235002 3005 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"756b08f4dd612a1bab5dcaa88622a2e6c12a126862b2b82d3864290815615fb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nb4vh" Nov 4 23:50:16.235053 kubelet[3005]: E1104 23:50:16.234989 3005 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"685fd0dda9ef456d00d49964b962bd9534fcc9a0c82d6cd94fb0086e97e7225d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x2sxv" Nov 4 23:50:16.235142 kubelet[3005]: E1104 23:50:16.235016 3005 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"756b08f4dd612a1bab5dcaa88622a2e6c12a126862b2b82d3864290815615fb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nb4vh" Nov 4 23:50:16.235142 kubelet[3005]: E1104 23:50:16.235019 3005 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"685fd0dda9ef456d00d49964b962bd9534fcc9a0c82d6cd94fb0086e97e7225d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x2sxv" Nov 4 23:50:16.235142 
kubelet[3005]: E1104 23:50:16.235045 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-x2sxv_calico-system(9da2e8c6-5e46-4a22-9ad0-99b948b30cea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-x2sxv_calico-system(9da2e8c6-5e46-4a22-9ad0-99b948b30cea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"685fd0dda9ef456d00d49964b962bd9534fcc9a0c82d6cd94fb0086e97e7225d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-x2sxv" podUID="9da2e8c6-5e46-4a22-9ad0-99b948b30cea" Nov 4 23:50:16.235669 kubelet[3005]: E1104 23:50:16.235273 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nb4vh_kube-system(cb2b84b4-36c2-466a-bbb6-32feb10e4093)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nb4vh_kube-system(cb2b84b4-36c2-466a-bbb6-32feb10e4093)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"756b08f4dd612a1bab5dcaa88622a2e6c12a126862b2b82d3864290815615fb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nb4vh" podUID="cb2b84b4-36c2-466a-bbb6-32feb10e4093" Nov 4 23:50:16.571628 systemd[1]: run-netns-cni\x2d09712501\x2d69ec\x2ddee7\x2d37b6\x2d20094da2aa9e.mount: Deactivated successfully. Nov 4 23:50:21.191890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount124197223.mount: Deactivated successfully. 
Nov 4 23:50:21.371228 containerd[1684]: time="2025-11-04T23:50:21.358075441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:50:21.376985 containerd[1684]: time="2025-11-04T23:50:21.376343177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 4 23:50:21.377398 containerd[1684]: time="2025-11-04T23:50:21.377383333Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:50:21.378585 containerd[1684]: time="2025-11-04T23:50:21.378569651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:50:21.380564 containerd[1684]: time="2025-11-04T23:50:21.380536993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.52568568s" Nov 4 23:50:21.380564 containerd[1684]: time="2025-11-04T23:50:21.380561120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 4 23:50:21.400801 containerd[1684]: time="2025-11-04T23:50:21.400779863Z" level=info msg="CreateContainer within sandbox \"eb01ab0558eba8fe173e8fa4ca3cd80d997c07f2f942a01fd159388e394f0ebe\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 4 23:50:21.464818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount854948278.mount: Deactivated 
successfully. Nov 4 23:50:21.465400 containerd[1684]: time="2025-11-04T23:50:21.464868776Z" level=info msg="Container 1587cf69f3d687559b45fb0478326d84937e839c640d1ba52d2a281401b8b76b: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:50:21.497453 containerd[1684]: time="2025-11-04T23:50:21.497416545Z" level=info msg="CreateContainer within sandbox \"eb01ab0558eba8fe173e8fa4ca3cd80d997c07f2f942a01fd159388e394f0ebe\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1587cf69f3d687559b45fb0478326d84937e839c640d1ba52d2a281401b8b76b\"" Nov 4 23:50:21.498369 containerd[1684]: time="2025-11-04T23:50:21.498038493Z" level=info msg="StartContainer for \"1587cf69f3d687559b45fb0478326d84937e839c640d1ba52d2a281401b8b76b\"" Nov 4 23:50:21.502327 containerd[1684]: time="2025-11-04T23:50:21.502302727Z" level=info msg="connecting to shim 1587cf69f3d687559b45fb0478326d84937e839c640d1ba52d2a281401b8b76b" address="unix:///run/containerd/s/bf3ad9936cefa3dcdd7abafb6ea8e8b7f7b6fdef117a0ede880bc243c8a78895" protocol=ttrpc version=3 Nov 4 23:50:21.566616 systemd[1]: Started cri-containerd-1587cf69f3d687559b45fb0478326d84937e839c640d1ba52d2a281401b8b76b.scope - libcontainer container 1587cf69f3d687559b45fb0478326d84937e839c640d1ba52d2a281401b8b76b. Nov 4 23:50:21.619937 containerd[1684]: time="2025-11-04T23:50:21.619912577Z" level=info msg="StartContainer for \"1587cf69f3d687559b45fb0478326d84937e839c640d1ba52d2a281401b8b76b\" returns successfully" Nov 4 23:50:22.712662 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 4 23:50:22.718164 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 4 23:50:22.731696 containerd[1684]: time="2025-11-04T23:50:22.731628610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1587cf69f3d687559b45fb0478326d84937e839c640d1ba52d2a281401b8b76b\" id:\"a88a879a8e641c7babc334062ee7b792d795ed5c6803ce8aa279dd7a253b521b\" pid:4103 exit_status:1 exited_at:{seconds:1762300222 nanos:727649200}" Nov 4 23:50:23.005255 containerd[1684]: time="2025-11-04T23:50:23.005147290Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1587cf69f3d687559b45fb0478326d84937e839c640d1ba52d2a281401b8b76b\" id:\"03b333c8ff720ea5205bbdf81a64fbf08472eb93908190496b32178b0a0665ce\" pid:4140 exit_status:1 exited_at:{seconds:1762300223 nanos:4797468}" Nov 4 23:50:23.557091 kubelet[3005]: I1104 23:50:23.554983 3005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r8mrt" podStartSLOduration=3.832915088 podStartE2EDuration="22.551859887s" podCreationTimestamp="2025-11-04 23:50:01 +0000 UTC" firstStartedPulling="2025-11-04 23:50:02.662062509 +0000 UTC m=+21.103038281" lastFinishedPulling="2025-11-04 23:50:21.381007308 +0000 UTC m=+39.821983080" observedRunningTime="2025-11-04 23:50:21.897382247 +0000 UTC m=+40.338358028" watchObservedRunningTime="2025-11-04 23:50:23.551859887 +0000 UTC m=+41.992835663" Nov 4 23:50:23.753517 kubelet[3005]: I1104 23:50:23.753418 3005 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh6d9\" (UniqueName: \"kubernetes.io/projected/0836ae4f-732e-4b0c-8bb2-f4c010292701-kube-api-access-fh6d9\") pod \"0836ae4f-732e-4b0c-8bb2-f4c010292701\" (UID: \"0836ae4f-732e-4b0c-8bb2-f4c010292701\") " Nov 4 23:50:23.753517 kubelet[3005]: I1104 23:50:23.753480 3005 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0836ae4f-732e-4b0c-8bb2-f4c010292701-whisker-ca-bundle\") pod \"0836ae4f-732e-4b0c-8bb2-f4c010292701\" (UID: 
\"0836ae4f-732e-4b0c-8bb2-f4c010292701\") " Nov 4 23:50:23.760859 kubelet[3005]: I1104 23:50:23.753643 3005 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0836ae4f-732e-4b0c-8bb2-f4c010292701-whisker-backend-key-pair\") pod \"0836ae4f-732e-4b0c-8bb2-f4c010292701\" (UID: \"0836ae4f-732e-4b0c-8bb2-f4c010292701\") " Nov 4 23:50:23.769111 kubelet[3005]: I1104 23:50:23.769051 3005 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0836ae4f-732e-4b0c-8bb2-f4c010292701-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0836ae4f-732e-4b0c-8bb2-f4c010292701" (UID: "0836ae4f-732e-4b0c-8bb2-f4c010292701"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 23:50:23.783024 systemd[1]: var-lib-kubelet-pods-0836ae4f\x2d732e\x2d4b0c\x2d8bb2\x2df4c010292701-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfh6d9.mount: Deactivated successfully. Nov 4 23:50:23.783285 systemd[1]: var-lib-kubelet-pods-0836ae4f\x2d732e\x2d4b0c\x2d8bb2\x2df4c010292701-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 4 23:50:23.784587 kubelet[3005]: I1104 23:50:23.783891 3005 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0836ae4f-732e-4b0c-8bb2-f4c010292701-kube-api-access-fh6d9" (OuterVolumeSpecName: "kube-api-access-fh6d9") pod "0836ae4f-732e-4b0c-8bb2-f4c010292701" (UID: "0836ae4f-732e-4b0c-8bb2-f4c010292701"). InnerVolumeSpecName "kube-api-access-fh6d9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:50:23.784587 kubelet[3005]: I1104 23:50:23.784550 3005 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0836ae4f-732e-4b0c-8bb2-f4c010292701-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0836ae4f-732e-4b0c-8bb2-f4c010292701" (UID: "0836ae4f-732e-4b0c-8bb2-f4c010292701"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 23:50:23.854695 kubelet[3005]: I1104 23:50:23.854602 3005 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0836ae4f-732e-4b0c-8bb2-f4c010292701-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 4 23:50:23.854695 kubelet[3005]: I1104 23:50:23.854624 3005 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0836ae4f-732e-4b0c-8bb2-f4c010292701-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 4 23:50:23.854695 kubelet[3005]: I1104 23:50:23.854633 3005 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fh6d9\" (UniqueName: \"kubernetes.io/projected/0836ae4f-732e-4b0c-8bb2-f4c010292701-kube-api-access-fh6d9\") on node \"localhost\" DevicePath \"\"" Nov 4 23:50:23.881086 systemd[1]: Removed slice kubepods-besteffort-pod0836ae4f_732e_4b0c_8bb2_f4c010292701.slice - libcontainer container kubepods-besteffort-pod0836ae4f_732e_4b0c_8bb2_f4c010292701.slice. Nov 4 23:50:23.976820 systemd[1]: Created slice kubepods-besteffort-pod40c5727f_c324_4fe2_b1aa_9b89dbc8158a.slice - libcontainer container kubepods-besteffort-pod40c5727f_c324_4fe2_b1aa_9b89dbc8158a.slice. 
Nov 4 23:50:24.055914 kubelet[3005]: I1104 23:50:24.055806 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40c5727f-c324-4fe2-b1aa-9b89dbc8158a-whisker-ca-bundle\") pod \"whisker-554c89f774-pdktj\" (UID: \"40c5727f-c324-4fe2-b1aa-9b89dbc8158a\") " pod="calico-system/whisker-554c89f774-pdktj" Nov 4 23:50:24.055914 kubelet[3005]: I1104 23:50:24.055881 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nrz4\" (UniqueName: \"kubernetes.io/projected/40c5727f-c324-4fe2-b1aa-9b89dbc8158a-kube-api-access-8nrz4\") pod \"whisker-554c89f774-pdktj\" (UID: \"40c5727f-c324-4fe2-b1aa-9b89dbc8158a\") " pod="calico-system/whisker-554c89f774-pdktj" Nov 4 23:50:24.056090 kubelet[3005]: I1104 23:50:24.055942 3005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40c5727f-c324-4fe2-b1aa-9b89dbc8158a-whisker-backend-key-pair\") pod \"whisker-554c89f774-pdktj\" (UID: \"40c5727f-c324-4fe2-b1aa-9b89dbc8158a\") " pod="calico-system/whisker-554c89f774-pdktj" Nov 4 23:50:24.279034 containerd[1684]: time="2025-11-04T23:50:24.278998343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-554c89f774-pdktj,Uid:40c5727f-c324-4fe2-b1aa-9b89dbc8158a,Namespace:calico-system,Attempt:0,}" Nov 4 23:50:25.491185 systemd-networkd[1587]: cali9eb175ecddb: Link UP Nov 4 23:50:25.491291 systemd-networkd[1587]: cali9eb175ecddb: Gained carrier Nov 4 23:50:25.506725 containerd[1684]: 2025-11-04 23:50:24.315 [INFO][4175] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:50:25.506725 containerd[1684]: 2025-11-04 23:50:24.413 [INFO][4175] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--554c89f774--pdktj-eth0 
whisker-554c89f774- calico-system 40c5727f-c324-4fe2-b1aa-9b89dbc8158a 907 0 2025-11-04 23:50:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:554c89f774 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-554c89f774-pdktj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9eb175ecddb [] [] }} ContainerID="f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" Namespace="calico-system" Pod="whisker-554c89f774-pdktj" WorkloadEndpoint="localhost-k8s-whisker--554c89f774--pdktj-" Nov 4 23:50:25.506725 containerd[1684]: 2025-11-04 23:50:24.413 [INFO][4175] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" Namespace="calico-system" Pod="whisker-554c89f774-pdktj" WorkloadEndpoint="localhost-k8s-whisker--554c89f774--pdktj-eth0" Nov 4 23:50:25.506725 containerd[1684]: 2025-11-04 23:50:25.404 [INFO][4195] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" HandleID="k8s-pod-network.f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" Workload="localhost-k8s-whisker--554c89f774--pdktj-eth0" Nov 4 23:50:25.511951 containerd[1684]: 2025-11-04 23:50:25.406 [INFO][4195] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" HandleID="k8s-pod-network.f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" Workload="localhost-k8s-whisker--554c89f774--pdktj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000341680), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-554c89f774-pdktj", "timestamp":"2025-11-04 23:50:25.40443726 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:50:25.511951 containerd[1684]: 2025-11-04 23:50:25.406 [INFO][4195] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:50:25.511951 containerd[1684]: 2025-11-04 23:50:25.407 [INFO][4195] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:50:25.511951 containerd[1684]: 2025-11-04 23:50:25.407 [INFO][4195] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:50:25.511951 containerd[1684]: 2025-11-04 23:50:25.423 [INFO][4195] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" host="localhost" Nov 4 23:50:25.511951 containerd[1684]: 2025-11-04 23:50:25.463 [INFO][4195] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:50:25.511951 containerd[1684]: 2025-11-04 23:50:25.466 [INFO][4195] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:50:25.511951 containerd[1684]: 2025-11-04 23:50:25.467 [INFO][4195] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:25.511951 containerd[1684]: 2025-11-04 23:50:25.468 [INFO][4195] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:25.511951 containerd[1684]: 2025-11-04 23:50:25.468 [INFO][4195] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" host="localhost" Nov 4 23:50:25.520595 containerd[1684]: 2025-11-04 23:50:25.469 [INFO][4195] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14 Nov 4 23:50:25.520595 containerd[1684]: 
2025-11-04 23:50:25.471 [INFO][4195] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" host="localhost" Nov 4 23:50:25.520595 containerd[1684]: 2025-11-04 23:50:25.474 [INFO][4195] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" host="localhost" Nov 4 23:50:25.520595 containerd[1684]: 2025-11-04 23:50:25.474 [INFO][4195] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" host="localhost" Nov 4 23:50:25.520595 containerd[1684]: 2025-11-04 23:50:25.474 [INFO][4195] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:50:25.520595 containerd[1684]: 2025-11-04 23:50:25.474 [INFO][4195] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" HandleID="k8s-pod-network.f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" Workload="localhost-k8s-whisker--554c89f774--pdktj-eth0" Nov 4 23:50:25.525025 containerd[1684]: 2025-11-04 23:50:25.478 [INFO][4175] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" Namespace="calico-system" Pod="whisker-554c89f774-pdktj" WorkloadEndpoint="localhost-k8s-whisker--554c89f774--pdktj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--554c89f774--pdktj-eth0", GenerateName:"whisker-554c89f774-", Namespace:"calico-system", SelfLink:"", UID:"40c5727f-c324-4fe2-b1aa-9b89dbc8158a", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 
23, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"554c89f774", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-554c89f774-pdktj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9eb175ecddb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:25.525025 containerd[1684]: 2025-11-04 23:50:25.478 [INFO][4175] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" Namespace="calico-system" Pod="whisker-554c89f774-pdktj" WorkloadEndpoint="localhost-k8s-whisker--554c89f774--pdktj-eth0" Nov 4 23:50:25.525100 containerd[1684]: 2025-11-04 23:50:25.478 [INFO][4175] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9eb175ecddb ContainerID="f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" Namespace="calico-system" Pod="whisker-554c89f774-pdktj" WorkloadEndpoint="localhost-k8s-whisker--554c89f774--pdktj-eth0" Nov 4 23:50:25.525100 containerd[1684]: 2025-11-04 23:50:25.488 [INFO][4175] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" Namespace="calico-system" Pod="whisker-554c89f774-pdktj" 
WorkloadEndpoint="localhost-k8s-whisker--554c89f774--pdktj-eth0" Nov 4 23:50:25.525134 containerd[1684]: 2025-11-04 23:50:25.491 [INFO][4175] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" Namespace="calico-system" Pod="whisker-554c89f774-pdktj" WorkloadEndpoint="localhost-k8s-whisker--554c89f774--pdktj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--554c89f774--pdktj-eth0", GenerateName:"whisker-554c89f774-", Namespace:"calico-system", SelfLink:"", UID:"40c5727f-c324-4fe2-b1aa-9b89dbc8158a", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"554c89f774", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14", Pod:"whisker-554c89f774-pdktj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9eb175ecddb", MAC:"16:c1:d6:9d:ae:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:25.528240 containerd[1684]: 2025-11-04 23:50:25.500 [INFO][4175] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" Namespace="calico-system" Pod="whisker-554c89f774-pdktj" WorkloadEndpoint="localhost-k8s-whisker--554c89f774--pdktj-eth0" Nov 4 23:50:25.649470 containerd[1684]: time="2025-11-04T23:50:25.648995278Z" level=info msg="connecting to shim f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14" address="unix:///run/containerd/s/09f236b3f3d5d30b28e90a6ffcc63b9cbbc43d94295606c340cfc48e09049f4f" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:50:25.686834 systemd[1]: Started cri-containerd-f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14.scope - libcontainer container f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14. Nov 4 23:50:25.724576 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:50:25.740005 kubelet[3005]: I1104 23:50:25.738999 3005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0836ae4f-732e-4b0c-8bb2-f4c010292701" path="/var/lib/kubelet/pods/0836ae4f-732e-4b0c-8bb2-f4c010292701/volumes" Nov 4 23:50:25.799952 containerd[1684]: time="2025-11-04T23:50:25.799260289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-554c89f774-pdktj,Uid:40c5727f-c324-4fe2-b1aa-9b89dbc8158a,Namespace:calico-system,Attempt:0,} returns sandbox id \"f99b217b2add5ae885594179a0ef043934c7b531bc335c199587611651eaee14\"" Nov 4 23:50:25.856886 containerd[1684]: time="2025-11-04T23:50:25.856854697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:50:26.213287 containerd[1684]: time="2025-11-04T23:50:26.213225507Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:26.217316 containerd[1684]: time="2025-11-04T23:50:26.217227600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:50:26.217316 containerd[1684]: time="2025-11-04T23:50:26.217277125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:50:26.247919 kubelet[3005]: E1104 23:50:26.217409 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:50:26.254707 kubelet[3005]: E1104 23:50:26.254660 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:50:26.261833 kubelet[3005]: E1104 23:50:26.261759 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f306d436bc0240a0a022e65131a74f29,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8nrz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-554c89f774-pdktj_calico-system(40c5727f-c324-4fe2-b1aa-9b89dbc8158a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:26.264749 containerd[1684]: time="2025-11-04T23:50:26.264636127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 
23:50:26.582579 containerd[1684]: time="2025-11-04T23:50:26.582192749Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:26.591357 containerd[1684]: time="2025-11-04T23:50:26.591294015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:50:26.591652 containerd[1684]: time="2025-11-04T23:50:26.591377241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:50:26.591717 kubelet[3005]: E1104 23:50:26.591530 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:50:26.591717 kubelet[3005]: E1104 23:50:26.591569 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:50:26.591851 kubelet[3005]: E1104 23:50:26.591651 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nrz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-554c89f774-pdktj_calico-system(40c5727f-c324-4fe2-b1aa-9b89dbc8158a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:26.592985 kubelet[3005]: E1104 23:50:26.592950 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-554c89f774-pdktj" podUID="40c5727f-c324-4fe2-b1aa-9b89dbc8158a" Nov 4 23:50:26.876696 kubelet[3005]: E1104 23:50:26.876577 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-554c89f774-pdktj" podUID="40c5727f-c324-4fe2-b1aa-9b89dbc8158a" Nov 4 23:50:27.202464 systemd-networkd[1587]: cali9eb175ecddb: Gained IPv6LL Nov 4 23:50:27.736132 containerd[1684]: time="2025-11-04T23:50:27.736047331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556c6f6458-cpm97,Uid:bb817c56-dcfd-42dd-9fb6-688549d80317,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:50:27.736573 containerd[1684]: time="2025-11-04T23:50:27.736042107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fx4jk,Uid:91558b45-d6b2-43bf-b363-90b9cd5da166,Namespace:kube-system,Attempt:0,}" Nov 4 23:50:27.882678 systemd-networkd[1587]: cali02ac55677e4: Link UP Nov 4 23:50:27.883530 systemd-networkd[1587]: cali02ac55677e4: Gained carrier Nov 4 23:50:27.898048 containerd[1684]: 2025-11-04 23:50:27.788 [INFO][4387] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:50:27.898048 containerd[1684]: 2025-11-04 23:50:27.799 [INFO][4387] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--fx4jk-eth0 coredns-674b8bbfcf- kube-system 91558b45-d6b2-43bf-b363-90b9cd5da166 838 0 2025-11-04 23:49:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-fx4jk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali02ac55677e4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" Namespace="kube-system" Pod="coredns-674b8bbfcf-fx4jk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fx4jk-" Nov 4 23:50:27.898048 containerd[1684]: 2025-11-04 23:50:27.799 [INFO][4387] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" Namespace="kube-system" Pod="coredns-674b8bbfcf-fx4jk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fx4jk-eth0" Nov 4 23:50:27.898048 containerd[1684]: 2025-11-04 23:50:27.841 [INFO][4413] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" HandleID="k8s-pod-network.f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" Workload="localhost-k8s-coredns--674b8bbfcf--fx4jk-eth0" Nov 4 23:50:27.898218 containerd[1684]: 2025-11-04 23:50:27.841 [INFO][4413] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" HandleID="k8s-pod-network.f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" Workload="localhost-k8s-coredns--674b8bbfcf--fx4jk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-fx4jk", "timestamp":"2025-11-04 23:50:27.84172395 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:50:27.898218 containerd[1684]: 2025-11-04 23:50:27.841 [INFO][4413] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:50:27.898218 containerd[1684]: 2025-11-04 23:50:27.841 [INFO][4413] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:50:27.898218 containerd[1684]: 2025-11-04 23:50:27.841 [INFO][4413] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:50:27.898218 containerd[1684]: 2025-11-04 23:50:27.847 [INFO][4413] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" host="localhost" Nov 4 23:50:27.898218 containerd[1684]: 2025-11-04 23:50:27.855 [INFO][4413] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:50:27.898218 containerd[1684]: 2025-11-04 23:50:27.861 [INFO][4413] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:50:27.898218 containerd[1684]: 2025-11-04 23:50:27.863 [INFO][4413] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:27.898218 containerd[1684]: 2025-11-04 23:50:27.865 [INFO][4413] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:27.898218 containerd[1684]: 2025-11-04 23:50:27.865 [INFO][4413] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" host="localhost" Nov 4 23:50:27.898435 containerd[1684]: 2025-11-04 23:50:27.866 [INFO][4413] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be Nov 4 23:50:27.898435 containerd[1684]: 2025-11-04 23:50:27.869 [INFO][4413] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" host="localhost" Nov 4 23:50:27.898435 containerd[1684]: 2025-11-04 23:50:27.874 [INFO][4413] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" host="localhost" Nov 4 23:50:27.898435 containerd[1684]: 2025-11-04 23:50:27.874 [INFO][4413] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" host="localhost" Nov 4 23:50:27.898435 containerd[1684]: 2025-11-04 23:50:27.874 [INFO][4413] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:50:27.898435 containerd[1684]: 2025-11-04 23:50:27.874 [INFO][4413] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" HandleID="k8s-pod-network.f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" Workload="localhost-k8s-coredns--674b8bbfcf--fx4jk-eth0" Nov 4 23:50:27.898798 containerd[1684]: 2025-11-04 23:50:27.879 [INFO][4387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" Namespace="kube-system" Pod="coredns-674b8bbfcf-fx4jk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fx4jk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fx4jk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"91558b45-d6b2-43bf-b363-90b9cd5da166", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 49, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-fx4jk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02ac55677e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:27.899221 containerd[1684]: 2025-11-04 23:50:27.879 [INFO][4387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" Namespace="kube-system" Pod="coredns-674b8bbfcf-fx4jk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fx4jk-eth0" Nov 4 23:50:27.899221 containerd[1684]: 2025-11-04 23:50:27.879 [INFO][4387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02ac55677e4 ContainerID="f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" Namespace="kube-system" Pod="coredns-674b8bbfcf-fx4jk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fx4jk-eth0" Nov 4 23:50:27.899221 containerd[1684]: 2025-11-04 23:50:27.884 [INFO][4387] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" Namespace="kube-system" Pod="coredns-674b8bbfcf-fx4jk" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fx4jk-eth0" Nov 4 23:50:27.899304 containerd[1684]: 2025-11-04 23:50:27.884 [INFO][4387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" Namespace="kube-system" Pod="coredns-674b8bbfcf-fx4jk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fx4jk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fx4jk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"91558b45-d6b2-43bf-b363-90b9cd5da166", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 49, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be", Pod:"coredns-674b8bbfcf-fx4jk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02ac55677e4", MAC:"32:46:6e:4c:0d:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:27.899304 containerd[1684]: 2025-11-04 23:50:27.891 [INFO][4387] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" Namespace="kube-system" Pod="coredns-674b8bbfcf-fx4jk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fx4jk-eth0" Nov 4 23:50:27.923570 containerd[1684]: time="2025-11-04T23:50:27.922119492Z" level=info msg="connecting to shim f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be" address="unix:///run/containerd/s/1565e78f941139b71c2c887a60ef87764751e9216ba063ee83a2c9eb71e5d2f5" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:50:27.952478 systemd[1]: Started cri-containerd-f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be.scope - libcontainer container f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be. 
Nov 4 23:50:27.973280 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:50:27.990488 systemd-networkd[1587]: caliac21f2e0ddb: Link UP Nov 4 23:50:27.990670 systemd-networkd[1587]: caliac21f2e0ddb: Gained carrier Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.779 [INFO][4383] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.799 [INFO][4383] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--556c6f6458--cpm97-eth0 calico-apiserver-556c6f6458- calico-apiserver bb817c56-dcfd-42dd-9fb6-688549d80317 837 0 2025-11-04 23:49:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:556c6f6458 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-556c6f6458-cpm97 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliac21f2e0ddb [] [] }} ContainerID="2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" Namespace="calico-apiserver" Pod="calico-apiserver-556c6f6458-cpm97" WorkloadEndpoint="localhost-k8s-calico--apiserver--556c6f6458--cpm97-" Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.799 [INFO][4383] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" Namespace="calico-apiserver" Pod="calico-apiserver-556c6f6458-cpm97" WorkloadEndpoint="localhost-k8s-calico--apiserver--556c6f6458--cpm97-eth0" Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.840 [INFO][4408] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" HandleID="k8s-pod-network.2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" Workload="localhost-k8s-calico--apiserver--556c6f6458--cpm97-eth0" Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.842 [INFO][4408] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" HandleID="k8s-pod-network.2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" Workload="localhost-k8s-calico--apiserver--556c6f6458--cpm97-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f080), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-556c6f6458-cpm97", "timestamp":"2025-11-04 23:50:27.84021403 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.842 [INFO][4408] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.874 [INFO][4408] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.874 [INFO][4408] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.950 [INFO][4408] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" host="localhost" Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.960 [INFO][4408] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.968 [INFO][4408] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.970 [INFO][4408] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.974 [INFO][4408] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.974 [INFO][4408] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" host="localhost" Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.976 [INFO][4408] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10 Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.980 [INFO][4408] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" host="localhost" Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.985 [INFO][4408] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" host="localhost" Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.985 [INFO][4408] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" host="localhost" Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.985 [INFO][4408] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:50:28.002303 containerd[1684]: 2025-11-04 23:50:27.985 [INFO][4408] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" HandleID="k8s-pod-network.2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" Workload="localhost-k8s-calico--apiserver--556c6f6458--cpm97-eth0" Nov 4 23:50:28.003303 containerd[1684]: 2025-11-04 23:50:27.987 [INFO][4383] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" Namespace="calico-apiserver" Pod="calico-apiserver-556c6f6458-cpm97" WorkloadEndpoint="localhost-k8s-calico--apiserver--556c6f6458--cpm97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--556c6f6458--cpm97-eth0", GenerateName:"calico-apiserver-556c6f6458-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb817c56-dcfd-42dd-9fb6-688549d80317", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 49, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556c6f6458", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-556c6f6458-cpm97", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac21f2e0ddb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:28.003303 containerd[1684]: 2025-11-04 23:50:27.987 [INFO][4383] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" Namespace="calico-apiserver" Pod="calico-apiserver-556c6f6458-cpm97" WorkloadEndpoint="localhost-k8s-calico--apiserver--556c6f6458--cpm97-eth0" Nov 4 23:50:28.003303 containerd[1684]: 2025-11-04 23:50:27.987 [INFO][4383] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac21f2e0ddb ContainerID="2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" Namespace="calico-apiserver" Pod="calico-apiserver-556c6f6458-cpm97" WorkloadEndpoint="localhost-k8s-calico--apiserver--556c6f6458--cpm97-eth0" Nov 4 23:50:28.003303 containerd[1684]: 2025-11-04 23:50:27.990 [INFO][4383] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" Namespace="calico-apiserver" Pod="calico-apiserver-556c6f6458-cpm97" WorkloadEndpoint="localhost-k8s-calico--apiserver--556c6f6458--cpm97-eth0" Nov 4 23:50:28.003303 containerd[1684]: 2025-11-04 23:50:27.991 [INFO][4383] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" Namespace="calico-apiserver" Pod="calico-apiserver-556c6f6458-cpm97" WorkloadEndpoint="localhost-k8s-calico--apiserver--556c6f6458--cpm97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--556c6f6458--cpm97-eth0", GenerateName:"calico-apiserver-556c6f6458-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb817c56-dcfd-42dd-9fb6-688549d80317", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 49, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556c6f6458", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10", Pod:"calico-apiserver-556c6f6458-cpm97", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac21f2e0ddb", MAC:"e6:0e:b0:6e:a9:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:28.003303 containerd[1684]: 2025-11-04 23:50:28.000 [INFO][4383] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" Namespace="calico-apiserver" Pod="calico-apiserver-556c6f6458-cpm97" WorkloadEndpoint="localhost-k8s-calico--apiserver--556c6f6458--cpm97-eth0" Nov 4 23:50:28.027901 containerd[1684]: time="2025-11-04T23:50:28.027810613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fx4jk,Uid:91558b45-d6b2-43bf-b363-90b9cd5da166,Namespace:kube-system,Attempt:0,} returns sandbox id \"f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be\"" Nov 4 23:50:28.073034 containerd[1684]: time="2025-11-04T23:50:28.072708793Z" level=info msg="connecting to shim 2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10" address="unix:///run/containerd/s/160689fb1c7f0d6229a57f523c058454cdf747822cf8c5ae2ce198dc4a88c30c" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:50:28.091620 systemd[1]: Started cri-containerd-2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10.scope - libcontainer container 2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10. 
Nov 4 23:50:28.102599 containerd[1684]: time="2025-11-04T23:50:28.102575759Z" level=info msg="CreateContainer within sandbox \"f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:50:28.103475 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:50:28.118794 containerd[1684]: time="2025-11-04T23:50:28.118764071Z" level=info msg="Container e399a7e9547685ffce88a6b78315a4519ebfd412d1cc7ec05a0a9f334d247790: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:50:28.124543 containerd[1684]: time="2025-11-04T23:50:28.124408543Z" level=info msg="CreateContainer within sandbox \"f386870c5deaadc3cd6e208397a69b78ae003a2f6406dd323fc60fad4cf105be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e399a7e9547685ffce88a6b78315a4519ebfd412d1cc7ec05a0a9f334d247790\"" Nov 4 23:50:28.125981 containerd[1684]: time="2025-11-04T23:50:28.125798639Z" level=info msg="StartContainer for \"e399a7e9547685ffce88a6b78315a4519ebfd412d1cc7ec05a0a9f334d247790\"" Nov 4 23:50:28.130080 containerd[1684]: time="2025-11-04T23:50:28.130048954Z" level=info msg="connecting to shim e399a7e9547685ffce88a6b78315a4519ebfd412d1cc7ec05a0a9f334d247790" address="unix:///run/containerd/s/1565e78f941139b71c2c887a60ef87764751e9216ba063ee83a2c9eb71e5d2f5" protocol=ttrpc version=3 Nov 4 23:50:28.145011 containerd[1684]: time="2025-11-04T23:50:28.144987991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556c6f6458-cpm97,Uid:bb817c56-dcfd-42dd-9fb6-688549d80317,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2cd71c6bf98c373fce5909eb492fcf19c8d7b069f20a6d6bd6d919a93f972d10\"" Nov 4 23:50:28.147280 containerd[1684]: time="2025-11-04T23:50:28.146371007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:50:28.147778 systemd[1]: Started 
cri-containerd-e399a7e9547685ffce88a6b78315a4519ebfd412d1cc7ec05a0a9f334d247790.scope - libcontainer container e399a7e9547685ffce88a6b78315a4519ebfd412d1cc7ec05a0a9f334d247790. Nov 4 23:50:28.210563 containerd[1684]: time="2025-11-04T23:50:28.210484582Z" level=info msg="StartContainer for \"e399a7e9547685ffce88a6b78315a4519ebfd412d1cc7ec05a0a9f334d247790\" returns successfully" Nov 4 23:50:28.324781 kubelet[3005]: I1104 23:50:28.324668 3005 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 23:50:28.497191 containerd[1684]: time="2025-11-04T23:50:28.497002624Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:28.504774 containerd[1684]: time="2025-11-04T23:50:28.504561104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:50:28.504925 containerd[1684]: time="2025-11-04T23:50:28.504663968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:50:28.505070 kubelet[3005]: E1104 23:50:28.505040 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:50:28.505114 kubelet[3005]: E1104 23:50:28.505081 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:50:28.543672 kubelet[3005]: E1104 23:50:28.543205 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4j4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556c6f6458-cpm97_calico-apiserver(bb817c56-dcfd-42dd-9fb6-688549d80317): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:28.544442 kubelet[3005]: E1104 23:50:28.544407 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556c6f6458-cpm97" podUID="bb817c56-dcfd-42dd-9fb6-688549d80317" Nov 4 23:50:28.735866 containerd[1684]: time="2025-11-04T23:50:28.735595710Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-j2cxk,Uid:87667173-4fdd-43b0-b698-59acc1ea7515,Namespace:calico-system,Attempt:0,}" Nov 4 23:50:28.736063 containerd[1684]: time="2025-11-04T23:50:28.736051835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nb4vh,Uid:cb2b84b4-36c2-466a-bbb6-32feb10e4093,Namespace:kube-system,Attempt:0,}" Nov 4 23:50:28.890038 systemd-networkd[1587]: calic2f99651841: Link UP Nov 4 23:50:28.890952 systemd-networkd[1587]: calic2f99651841: Gained carrier Nov 4 23:50:28.895652 kubelet[3005]: E1104 23:50:28.895611 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556c6f6458-cpm97" podUID="bb817c56-dcfd-42dd-9fb6-688549d80317" Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.764 [INFO][4576] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.779 [INFO][4576] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--j2cxk-eth0 csi-node-driver- calico-system 87667173-4fdd-43b0-b698-59acc1ea7515 725 0 2025-11-04 23:50:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-j2cxk eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] calic2f99651841 [] [] }} ContainerID="ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" Namespace="calico-system" Pod="csi-node-driver-j2cxk" WorkloadEndpoint="localhost-k8s-csi--node--driver--j2cxk-" Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.779 [INFO][4576] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" Namespace="calico-system" Pod="csi-node-driver-j2cxk" WorkloadEndpoint="localhost-k8s-csi--node--driver--j2cxk-eth0" Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.844 [INFO][4600] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" HandleID="k8s-pod-network.ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" Workload="localhost-k8s-csi--node--driver--j2cxk-eth0" Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.844 [INFO][4600] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" HandleID="k8s-pod-network.ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" Workload="localhost-k8s-csi--node--driver--j2cxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-j2cxk", "timestamp":"2025-11-04 23:50:28.844068995 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.844 [INFO][4600] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.844 [INFO][4600] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.844 [INFO][4600] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.853 [INFO][4600] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" host="localhost" Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.858 [INFO][4600] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.863 [INFO][4600] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.865 [INFO][4600] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.867 [INFO][4600] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.867 [INFO][4600] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" host="localhost" Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.868 [INFO][4600] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.872 [INFO][4600] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" host="localhost" Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.880 [INFO][4600] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" host="localhost" Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.880 [INFO][4600] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" host="localhost" Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.881 [INFO][4600] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:50:28.910565 containerd[1684]: 2025-11-04 23:50:28.881 [INFO][4600] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" HandleID="k8s-pod-network.ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" Workload="localhost-k8s-csi--node--driver--j2cxk-eth0" Nov 4 23:50:28.914398 containerd[1684]: 2025-11-04 23:50:28.884 [INFO][4576] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" Namespace="calico-system" Pod="csi-node-driver-j2cxk" WorkloadEndpoint="localhost-k8s-csi--node--driver--j2cxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--j2cxk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"87667173-4fdd-43b0-b698-59acc1ea7515", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 50, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-j2cxk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic2f99651841", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:28.914398 containerd[1684]: 2025-11-04 23:50:28.884 [INFO][4576] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" Namespace="calico-system" Pod="csi-node-driver-j2cxk" WorkloadEndpoint="localhost-k8s-csi--node--driver--j2cxk-eth0" Nov 4 23:50:28.914398 containerd[1684]: 2025-11-04 23:50:28.884 [INFO][4576] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2f99651841 ContainerID="ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" Namespace="calico-system" Pod="csi-node-driver-j2cxk" WorkloadEndpoint="localhost-k8s-csi--node--driver--j2cxk-eth0" Nov 4 23:50:28.914398 containerd[1684]: 2025-11-04 23:50:28.890 [INFO][4576] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" Namespace="calico-system" Pod="csi-node-driver-j2cxk" WorkloadEndpoint="localhost-k8s-csi--node--driver--j2cxk-eth0" Nov 4 23:50:28.914398 containerd[1684]: 2025-11-04 23:50:28.890 [INFO][4576] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" Namespace="calico-system" Pod="csi-node-driver-j2cxk" WorkloadEndpoint="localhost-k8s-csi--node--driver--j2cxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--j2cxk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"87667173-4fdd-43b0-b698-59acc1ea7515", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 50, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e", Pod:"csi-node-driver-j2cxk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic2f99651841", MAC:"56:6d:b0:fa:9e:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:28.914398 containerd[1684]: 2025-11-04 23:50:28.908 [INFO][4576] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" 
Namespace="calico-system" Pod="csi-node-driver-j2cxk" WorkloadEndpoint="localhost-k8s-csi--node--driver--j2cxk-eth0" Nov 4 23:50:28.938923 containerd[1684]: time="2025-11-04T23:50:28.938856007Z" level=info msg="connecting to shim ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e" address="unix:///run/containerd/s/2c51c7898f9f373443d00848af30d99c54363ada8813404c89f9debc425619d4" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:50:28.962702 systemd[1]: Started cri-containerd-ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e.scope - libcontainer container ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e. Nov 4 23:50:28.983652 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:50:28.999364 containerd[1684]: time="2025-11-04T23:50:28.998746204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j2cxk,Uid:87667173-4fdd-43b0-b698-59acc1ea7515,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba174aaee37ae5a11fd610f0a323093410b779f92fb2548d19e9bd8e5d5a150e\"" Nov 4 23:50:29.001928 containerd[1684]: time="2025-11-04T23:50:29.001902352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:50:29.020508 kubelet[3005]: I1104 23:50:29.020306 3005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fx4jk" podStartSLOduration=40.00639477 podStartE2EDuration="40.00639477s" podCreationTimestamp="2025-11-04 23:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:50:29.005960262 +0000 UTC m=+47.446936045" watchObservedRunningTime="2025-11-04 23:50:29.00639477 +0000 UTC m=+47.447370547" Nov 4 23:50:29.055657 systemd-networkd[1587]: calif9a6df7984e: Link UP Nov 4 23:50:29.057713 systemd-networkd[1587]: calif9a6df7984e: Gained carrier Nov 4 23:50:29.081643 
containerd[1684]: 2025-11-04 23:50:28.774 [INFO][4582] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:28.797 [INFO][4582] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--nb4vh-eth0 coredns-674b8bbfcf- kube-system cb2b84b4-36c2-466a-bbb6-32feb10e4093 839 0 2025-11-04 23:49:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-nb4vh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif9a6df7984e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" Namespace="kube-system" Pod="coredns-674b8bbfcf-nb4vh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nb4vh-" Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:28.797 [INFO][4582] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" Namespace="kube-system" Pod="coredns-674b8bbfcf-nb4vh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nb4vh-eth0" Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:28.848 [INFO][4605] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" HandleID="k8s-pod-network.1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" Workload="localhost-k8s-coredns--674b8bbfcf--nb4vh-eth0" Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:28.848 [INFO][4605] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" 
HandleID="k8s-pod-network.1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" Workload="localhost-k8s-coredns--674b8bbfcf--nb4vh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-nb4vh", "timestamp":"2025-11-04 23:50:28.848027825 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:28.849 [INFO][4605] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:28.881 [INFO][4605] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:28.881 [INFO][4605] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:28.953 [INFO][4605] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" host="localhost" Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:28.968 [INFO][4605] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:29.016 [INFO][4605] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:29.021 [INFO][4605] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:29.026 [INFO][4605] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:29.026 [INFO][4605] 
ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" host="localhost" Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:29.029 [INFO][4605] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216 Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:29.038 [INFO][4605] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" host="localhost" Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:29.045 [INFO][4605] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" host="localhost" Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:29.045 [INFO][4605] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" host="localhost" Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:29.045 [INFO][4605] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 23:50:29.081643 containerd[1684]: 2025-11-04 23:50:29.045 [INFO][4605] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" HandleID="k8s-pod-network.1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" Workload="localhost-k8s-coredns--674b8bbfcf--nb4vh-eth0" Nov 4 23:50:29.082670 containerd[1684]: 2025-11-04 23:50:29.049 [INFO][4582] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" Namespace="kube-system" Pod="coredns-674b8bbfcf-nb4vh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nb4vh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nb4vh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cb2b84b4-36c2-466a-bbb6-32feb10e4093", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 49, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-nb4vh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9a6df7984e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:29.082670 containerd[1684]: 2025-11-04 23:50:29.049 [INFO][4582] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" Namespace="kube-system" Pod="coredns-674b8bbfcf-nb4vh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nb4vh-eth0" Nov 4 23:50:29.082670 containerd[1684]: 2025-11-04 23:50:29.049 [INFO][4582] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9a6df7984e ContainerID="1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" Namespace="kube-system" Pod="coredns-674b8bbfcf-nb4vh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nb4vh-eth0" Nov 4 23:50:29.082670 containerd[1684]: 2025-11-04 23:50:29.061 [INFO][4582] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" Namespace="kube-system" Pod="coredns-674b8bbfcf-nb4vh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nb4vh-eth0" Nov 4 23:50:29.082670 containerd[1684]: 2025-11-04 23:50:29.064 [INFO][4582] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" Namespace="kube-system" Pod="coredns-674b8bbfcf-nb4vh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nb4vh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nb4vh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cb2b84b4-36c2-466a-bbb6-32feb10e4093", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 49, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216", Pod:"coredns-674b8bbfcf-nb4vh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9a6df7984e", MAC:"32:4b:a4:ed:93:7e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:29.082670 containerd[1684]: 2025-11-04 23:50:29.078 [INFO][4582] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" Namespace="kube-system" Pod="coredns-674b8bbfcf-nb4vh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nb4vh-eth0" Nov 4 23:50:29.122782 containerd[1684]: time="2025-11-04T23:50:29.122683068Z" level=info msg="connecting to shim 1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216" address="unix:///run/containerd/s/e4d8347b2ed889ccae8f623238b19895cd1407b33d5f290a252c881bcd9e15f7" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:50:29.160656 systemd[1]: Started cri-containerd-1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216.scope - libcontainer container 1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216. Nov 4 23:50:29.171993 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:50:29.183638 systemd-networkd[1587]: caliac21f2e0ddb: Gained IPv6LL Nov 4 23:50:29.220415 containerd[1684]: time="2025-11-04T23:50:29.220376953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nb4vh,Uid:cb2b84b4-36c2-466a-bbb6-32feb10e4093,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216\"" Nov 4 23:50:29.226622 containerd[1684]: time="2025-11-04T23:50:29.226239034Z" level=info msg="CreateContainer within sandbox \"1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:50:29.237750 containerd[1684]: time="2025-11-04T23:50:29.237715057Z" level=info msg="Container 6b59fbe04fdfe91dca0e85a06707b92d98be256a05d2ab7deb07c14920e846b5: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:50:29.244454 containerd[1684]: time="2025-11-04T23:50:29.243811180Z" level=info msg="CreateContainer within sandbox \"1d41664a64d1b4c206ea6d38017a853c81ffdf935f0662edb8dadac914678216\" for &ContainerMetadata{Name:coredns,Attempt:0,} 
returns container id \"6b59fbe04fdfe91dca0e85a06707b92d98be256a05d2ab7deb07c14920e846b5\"" Nov 4 23:50:29.245147 containerd[1684]: time="2025-11-04T23:50:29.245038216Z" level=info msg="StartContainer for \"6b59fbe04fdfe91dca0e85a06707b92d98be256a05d2ab7deb07c14920e846b5\"" Nov 4 23:50:29.246808 containerd[1684]: time="2025-11-04T23:50:29.246750556Z" level=info msg="connecting to shim 6b59fbe04fdfe91dca0e85a06707b92d98be256a05d2ab7deb07c14920e846b5" address="unix:///run/containerd/s/e4d8347b2ed889ccae8f623238b19895cd1407b33d5f290a252c881bcd9e15f7" protocol=ttrpc version=3 Nov 4 23:50:29.272084 systemd[1]: Started cri-containerd-6b59fbe04fdfe91dca0e85a06707b92d98be256a05d2ab7deb07c14920e846b5.scope - libcontainer container 6b59fbe04fdfe91dca0e85a06707b92d98be256a05d2ab7deb07c14920e846b5. Nov 4 23:50:29.324714 containerd[1684]: time="2025-11-04T23:50:29.324680027Z" level=info msg="StartContainer for \"6b59fbe04fdfe91dca0e85a06707b92d98be256a05d2ab7deb07c14920e846b5\" returns successfully" Nov 4 23:50:29.345268 containerd[1684]: time="2025-11-04T23:50:29.345161058Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:29.347614 containerd[1684]: time="2025-11-04T23:50:29.347480216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:50:29.347972 containerd[1684]: time="2025-11-04T23:50:29.347740990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:50:29.348033 kubelet[3005]: E1104 23:50:29.347994 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:50:29.349567 kubelet[3005]: E1104 23:50:29.348032 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:50:29.363859 kubelet[3005]: E1104 23:50:29.363804 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8ng9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinux
Options:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j2cxk_calico-system(87667173-4fdd-43b0-b698-59acc1ea7515): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:29.367527 containerd[1684]: time="2025-11-04T23:50:29.367058290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:50:29.567667 systemd-networkd[1587]: cali02ac55677e4: Gained IPv6LL Nov 4 23:50:29.729852 containerd[1684]: time="2025-11-04T23:50:29.729751249Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:29.733486 containerd[1684]: time="2025-11-04T23:50:29.730801824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:50:29.733486 containerd[1684]: time="2025-11-04T23:50:29.730860568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:50:29.733654 kubelet[3005]: E1104 23:50:29.730976 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:50:29.733654 kubelet[3005]: E1104 23:50:29.731019 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:50:29.733654 kubelet[3005]: E1104 23:50:29.731110 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8ng9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j2cxk_calico-system(87667173-4fdd-43b0-b698-59acc1ea7515): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:29.733654 kubelet[3005]: E1104 23:50:29.732283 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:50:29.907800 kubelet[3005]: E1104 23:50:29.907769 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:50:29.907964 kubelet[3005]: E1104 23:50:29.907948 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556c6f6458-cpm97" podUID="bb817c56-dcfd-42dd-9fb6-688549d80317" Nov 4 23:50:29.925511 kubelet[3005]: I1104 23:50:29.925423 3005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nb4vh" podStartSLOduration=40.925412791 podStartE2EDuration="40.925412791s" podCreationTimestamp="2025-11-04 23:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:50:29.924287395 +0000 UTC m=+48.365263177" watchObservedRunningTime="2025-11-04 23:50:29.925412791 +0000 UTC m=+48.366388573" Nov 4 23:50:30.273568 systemd-networkd[1587]: calic2f99651841: Gained IPv6LL Nov 4 23:50:30.316179 systemd-networkd[1587]: vxlan.calico: Link UP Nov 4 23:50:30.316184 systemd-networkd[1587]: vxlan.calico: Gained carrier Nov 4 23:50:30.400723 systemd-networkd[1587]: calif9a6df7984e: Gained IPv6LL Nov 4 23:50:30.735891 containerd[1684]: time="2025-11-04T23:50:30.735640258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-548bc79dd9-9vmzn,Uid:22a0a317-e976-4828-b61a-a22d937f284c,Namespace:calico-system,Attempt:0,}" Nov 4 23:50:30.861210 systemd-networkd[1587]: calid89200bde04: Link UP Nov 4 23:50:30.861994 systemd-networkd[1587]: calid89200bde04: Gained carrier 
Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.777 [INFO][4896] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--548bc79dd9--9vmzn-eth0 calico-kube-controllers-548bc79dd9- calico-system 22a0a317-e976-4828-b61a-a22d937f284c 833 0 2025-11-04 23:50:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:548bc79dd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-548bc79dd9-9vmzn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid89200bde04 [] [] }} ContainerID="1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" Namespace="calico-system" Pod="calico-kube-controllers-548bc79dd9-9vmzn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--548bc79dd9--9vmzn-" Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.777 [INFO][4896] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" Namespace="calico-system" Pod="calico-kube-controllers-548bc79dd9-9vmzn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--548bc79dd9--9vmzn-eth0" Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.813 [INFO][4908] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" HandleID="k8s-pod-network.1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" Workload="localhost-k8s-calico--kube--controllers--548bc79dd9--9vmzn-eth0" Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.814 [INFO][4908] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" HandleID="k8s-pod-network.1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" Workload="localhost-k8s-calico--kube--controllers--548bc79dd9--9vmzn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-548bc79dd9-9vmzn", "timestamp":"2025-11-04 23:50:30.813925819 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.814 [INFO][4908] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.814 [INFO][4908] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.814 [INFO][4908] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.818 [INFO][4908] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" host="localhost" Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.822 [INFO][4908] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.824 [INFO][4908] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.825 [INFO][4908] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.827 [INFO][4908] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.827 [INFO][4908] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" host="localhost" Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.828 [INFO][4908] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.834 [INFO][4908] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" host="localhost" Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.854 [INFO][4908] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" host="localhost" Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.854 [INFO][4908] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" host="localhost" Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.854 [INFO][4908] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 23:50:30.885963 containerd[1684]: 2025-11-04 23:50:30.854 [INFO][4908] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" HandleID="k8s-pod-network.1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" Workload="localhost-k8s-calico--kube--controllers--548bc79dd9--9vmzn-eth0" Nov 4 23:50:30.890320 containerd[1684]: 2025-11-04 23:50:30.858 [INFO][4896] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" Namespace="calico-system" Pod="calico-kube-controllers-548bc79dd9-9vmzn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--548bc79dd9--9vmzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--548bc79dd9--9vmzn-eth0", GenerateName:"calico-kube-controllers-548bc79dd9-", Namespace:"calico-system", SelfLink:"", UID:"22a0a317-e976-4828-b61a-a22d937f284c", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 50, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"548bc79dd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-548bc79dd9-9vmzn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid89200bde04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:30.890320 containerd[1684]: 2025-11-04 23:50:30.858 [INFO][4896] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" Namespace="calico-system" Pod="calico-kube-controllers-548bc79dd9-9vmzn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--548bc79dd9--9vmzn-eth0" Nov 4 23:50:30.890320 containerd[1684]: 2025-11-04 23:50:30.858 [INFO][4896] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid89200bde04 ContainerID="1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" Namespace="calico-system" Pod="calico-kube-controllers-548bc79dd9-9vmzn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--548bc79dd9--9vmzn-eth0" Nov 4 23:50:30.890320 containerd[1684]: 2025-11-04 23:50:30.862 [INFO][4896] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" Namespace="calico-system" Pod="calico-kube-controllers-548bc79dd9-9vmzn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--548bc79dd9--9vmzn-eth0" Nov 4 23:50:30.890320 containerd[1684]: 2025-11-04 23:50:30.863 [INFO][4896] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" Namespace="calico-system" Pod="calico-kube-controllers-548bc79dd9-9vmzn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--548bc79dd9--9vmzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--548bc79dd9--9vmzn-eth0", GenerateName:"calico-kube-controllers-548bc79dd9-", Namespace:"calico-system", SelfLink:"", UID:"22a0a317-e976-4828-b61a-a22d937f284c", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 50, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"548bc79dd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e", Pod:"calico-kube-controllers-548bc79dd9-9vmzn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid89200bde04", MAC:"9e:ea:09:1f:18:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:30.890320 containerd[1684]: 2025-11-04 23:50:30.882 [INFO][4896] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" Namespace="calico-system" Pod="calico-kube-controllers-548bc79dd9-9vmzn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--548bc79dd9--9vmzn-eth0" Nov 4 23:50:30.911596 kubelet[3005]: E1104 23:50:30.911535 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:50:31.003161 containerd[1684]: time="2025-11-04T23:50:31.002479741Z" level=info msg="connecting to shim 1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e" address="unix:///run/containerd/s/7089ec1a9310ee1a5cddb0e9f7f2258fb8309dec522cb3f619359d02cb32d812" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:50:31.056617 systemd[1]: Started cri-containerd-1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e.scope - libcontainer container 1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e. 
Nov 4 23:50:31.066417 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:50:31.100780 containerd[1684]: time="2025-11-04T23:50:31.100749617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-548bc79dd9-9vmzn,Uid:22a0a317-e976-4828-b61a-a22d937f284c,Namespace:calico-system,Attempt:0,} returns sandbox id \"1875ffc7496881482a1814509cfaff38a95fd83b250eacd058ca3d461c328d6e\"" Nov 4 23:50:31.104517 containerd[1684]: time="2025-11-04T23:50:31.104459443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:50:31.424255 containerd[1684]: time="2025-11-04T23:50:31.424100200Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:31.424493 containerd[1684]: time="2025-11-04T23:50:31.424477343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:50:31.424493 containerd[1684]: time="2025-11-04T23:50:31.424538033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:50:31.424737 kubelet[3005]: E1104 23:50:31.424705 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:50:31.424791 kubelet[3005]: E1104 23:50:31.424745 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:50:31.424906 kubelet[3005]: E1104 23:50:31.424856 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v69cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-548bc79dd9-9vmzn_calico-system(22a0a317-e976-4828-b61a-a22d937f284c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:31.426648 kubelet[3005]: E1104 23:50:31.426613 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-548bc79dd9-9vmzn" podUID="22a0a317-e976-4828-b61a-a22d937f284c" Nov 4 23:50:31.487719 systemd-networkd[1587]: vxlan.calico: Gained IPv6LL Nov 4 23:50:31.736055 containerd[1684]: time="2025-11-04T23:50:31.735910167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x2sxv,Uid:9da2e8c6-5e46-4a22-9ad0-99b948b30cea,Namespace:calico-system,Attempt:0,}" Nov 4 23:50:31.736055 containerd[1684]: time="2025-11-04T23:50:31.736023704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9799dc6f-nkdjs,Uid:003bf0e1-d270-4d41-a398-c04be88d91c0,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:50:31.740486 containerd[1684]: time="2025-11-04T23:50:31.740458826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556c6f6458-pstkj,Uid:011689d6-d825-4490-866e-209734536e09,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:50:31.916123 kubelet[3005]: E1104 23:50:31.916099 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-548bc79dd9-9vmzn" podUID="22a0a317-e976-4828-b61a-a22d937f284c" Nov 4 23:50:31.944316 systemd-networkd[1587]: cali3c76d6a52d1: Link UP Nov 4 23:50:31.944758 systemd-networkd[1587]: cali3c76d6a52d1: Gained carrier Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.811 [INFO][4969] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--x2sxv-eth0 
goldmane-666569f655- calico-system 9da2e8c6-5e46-4a22-9ad0-99b948b30cea 840 0 2025-11-04 23:49:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-x2sxv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3c76d6a52d1 [] [] }} ContainerID="6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" Namespace="calico-system" Pod="goldmane-666569f655-x2sxv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x2sxv-" Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.811 [INFO][4969] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" Namespace="calico-system" Pod="goldmane-666569f655-x2sxv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x2sxv-eth0" Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.883 [INFO][5006] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" HandleID="k8s-pod-network.6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" Workload="localhost-k8s-goldmane--666569f655--x2sxv-eth0" Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.884 [INFO][5006] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" HandleID="k8s-pod-network.6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" Workload="localhost-k8s-goldmane--666569f655--x2sxv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cfbd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-x2sxv", "timestamp":"2025-11-04 23:50:31.883887352 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.884 [INFO][5006] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.885 [INFO][5006] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.885 [INFO][5006] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.893 [INFO][5006] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" host="localhost" Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.899 [INFO][5006] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.903 [INFO][5006] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.905 [INFO][5006] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.906 [INFO][5006] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.906 [INFO][5006] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" host="localhost" Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.907 [INFO][5006] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c Nov 4 23:50:31.960945 
containerd[1684]: 2025-11-04 23:50:31.912 [INFO][5006] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" host="localhost" Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.928 [INFO][5006] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" host="localhost" Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.929 [INFO][5006] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" host="localhost" Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.929 [INFO][5006] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:50:31.960945 containerd[1684]: 2025-11-04 23:50:31.929 [INFO][5006] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" HandleID="k8s-pod-network.6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" Workload="localhost-k8s-goldmane--666569f655--x2sxv-eth0" Nov 4 23:50:31.963403 containerd[1684]: 2025-11-04 23:50:31.938 [INFO][4969] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" Namespace="calico-system" Pod="goldmane-666569f655-x2sxv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x2sxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--x2sxv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9da2e8c6-5e46-4a22-9ad0-99b948b30cea", ResourceVersion:"840", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 4, 23, 49, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-x2sxv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3c76d6a52d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:31.963403 containerd[1684]: 2025-11-04 23:50:31.938 [INFO][4969] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" Namespace="calico-system" Pod="goldmane-666569f655-x2sxv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x2sxv-eth0" Nov 4 23:50:31.963403 containerd[1684]: 2025-11-04 23:50:31.938 [INFO][4969] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c76d6a52d1 ContainerID="6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" Namespace="calico-system" Pod="goldmane-666569f655-x2sxv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x2sxv-eth0" Nov 4 23:50:31.963403 containerd[1684]: 2025-11-04 23:50:31.945 [INFO][4969] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" Namespace="calico-system" 
Pod="goldmane-666569f655-x2sxv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x2sxv-eth0" Nov 4 23:50:31.963403 containerd[1684]: 2025-11-04 23:50:31.946 [INFO][4969] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" Namespace="calico-system" Pod="goldmane-666569f655-x2sxv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x2sxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--x2sxv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9da2e8c6-5e46-4a22-9ad0-99b948b30cea", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 49, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c", Pod:"goldmane-666569f655-x2sxv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3c76d6a52d1", MAC:"ba:cd:5c:89:f2:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:31.963403 containerd[1684]: 2025-11-04 
23:50:31.958 [INFO][4969] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" Namespace="calico-system" Pod="goldmane-666569f655-x2sxv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x2sxv-eth0" Nov 4 23:50:31.987550 containerd[1684]: time="2025-11-04T23:50:31.986831082Z" level=info msg="connecting to shim 6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c" address="unix:///run/containerd/s/33877c48194f0f9a5a97561cc3b153c0dd32beb8c7aff0261780f3c40ad6076b" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:50:32.017310 systemd[1]: Started cri-containerd-6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c.scope - libcontainer container 6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c. Nov 4 23:50:32.030385 systemd-networkd[1587]: calif6e6507d361: Link UP Nov 4 23:50:32.031694 systemd-networkd[1587]: calif6e6507d361: Gained carrier Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:31.840 [INFO][4970] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d9799dc6f--nkdjs-eth0 calico-apiserver-6d9799dc6f- calico-apiserver 003bf0e1-d270-4d41-a398-c04be88d91c0 841 0 2025-11-04 23:49:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d9799dc6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d9799dc6f-nkdjs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif6e6507d361 [] [] }} ContainerID="774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" Namespace="calico-apiserver" Pod="calico-apiserver-6d9799dc6f-nkdjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d9799dc6f--nkdjs-" Nov 4 23:50:32.043637 
containerd[1684]: 2025-11-04 23:50:31.844 [INFO][4970] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" Namespace="calico-apiserver" Pod="calico-apiserver-6d9799dc6f-nkdjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d9799dc6f--nkdjs-eth0" Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:31.896 [INFO][5016] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" HandleID="k8s-pod-network.774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" Workload="localhost-k8s-calico--apiserver--6d9799dc6f--nkdjs-eth0" Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:31.896 [INFO][5016] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" HandleID="k8s-pod-network.774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" Workload="localhost-k8s-calico--apiserver--6d9799dc6f--nkdjs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd8c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d9799dc6f-nkdjs", "timestamp":"2025-11-04 23:50:31.896222905 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:31.896 [INFO][5016] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:31.930 [INFO][5016] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:31.930 [INFO][5016] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:31.995 [INFO][5016] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" host="localhost" Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:32.002 [INFO][5016] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:32.008 [INFO][5016] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:32.010 [INFO][5016] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:32.014 [INFO][5016] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:32.014 [INFO][5016] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" host="localhost" Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:32.015 [INFO][5016] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:32.019 [INFO][5016] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" host="localhost" Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:32.023 [INFO][5016] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" host="localhost" Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:32.023 [INFO][5016] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" host="localhost" Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:32.023 [INFO][5016] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:50:32.043637 containerd[1684]: 2025-11-04 23:50:32.023 [INFO][5016] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" HandleID="k8s-pod-network.774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" Workload="localhost-k8s-calico--apiserver--6d9799dc6f--nkdjs-eth0" Nov 4 23:50:32.044386 containerd[1684]: 2025-11-04 23:50:32.027 [INFO][4970] cni-plugin/k8s.go 418: Populated endpoint ContainerID="774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" Namespace="calico-apiserver" Pod="calico-apiserver-6d9799dc6f-nkdjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d9799dc6f--nkdjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d9799dc6f--nkdjs-eth0", GenerateName:"calico-apiserver-6d9799dc6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"003bf0e1-d270-4d41-a398-c04be88d91c0", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 49, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d9799dc6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d9799dc6f-nkdjs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6e6507d361", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:32.044386 containerd[1684]: 2025-11-04 23:50:32.027 [INFO][4970] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" Namespace="calico-apiserver" Pod="calico-apiserver-6d9799dc6f-nkdjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d9799dc6f--nkdjs-eth0" Nov 4 23:50:32.044386 containerd[1684]: 2025-11-04 23:50:32.027 [INFO][4970] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6e6507d361 ContainerID="774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" Namespace="calico-apiserver" Pod="calico-apiserver-6d9799dc6f-nkdjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d9799dc6f--nkdjs-eth0" Nov 4 23:50:32.044386 containerd[1684]: 2025-11-04 23:50:32.031 [INFO][4970] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" Namespace="calico-apiserver" Pod="calico-apiserver-6d9799dc6f-nkdjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d9799dc6f--nkdjs-eth0" Nov 4 23:50:32.044386 containerd[1684]: 2025-11-04 23:50:32.032 [INFO][4970] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" Namespace="calico-apiserver" Pod="calico-apiserver-6d9799dc6f-nkdjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d9799dc6f--nkdjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d9799dc6f--nkdjs-eth0", GenerateName:"calico-apiserver-6d9799dc6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"003bf0e1-d270-4d41-a398-c04be88d91c0", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 49, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d9799dc6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde", Pod:"calico-apiserver-6d9799dc6f-nkdjs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6e6507d361", MAC:"26:17:1f:70:ed:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:32.044386 containerd[1684]: 2025-11-04 23:50:32.040 [INFO][4970] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" Namespace="calico-apiserver" Pod="calico-apiserver-6d9799dc6f-nkdjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d9799dc6f--nkdjs-eth0" Nov 4 23:50:32.067808 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:50:32.090017 containerd[1684]: time="2025-11-04T23:50:32.089978303Z" level=info msg="connecting to shim 774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde" address="unix:///run/containerd/s/737187842f3c674f574a3b7dd740a95f848a6929fdcc9bc513512d74287b4fe4" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:50:32.116647 systemd[1]: Started cri-containerd-774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde.scope - libcontainer container 774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde. Nov 4 23:50:32.139253 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:50:32.162074 systemd-networkd[1587]: cali91c7a874e23: Link UP Nov 4 23:50:32.163178 systemd-networkd[1587]: cali91c7a874e23: Gained carrier Nov 4 23:50:32.165392 containerd[1684]: time="2025-11-04T23:50:32.165353113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x2sxv,Uid:9da2e8c6-5e46-4a22-9ad0-99b948b30cea,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ce0579469912127ae8ab9e621b09cf9b7ee5331e64d02e9c0754de9a610d75c\"" Nov 4 23:50:32.168317 containerd[1684]: time="2025-11-04T23:50:32.167895027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:31.823 [INFO][4994] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--556c6f6458--pstkj-eth0 calico-apiserver-556c6f6458- calico-apiserver 011689d6-d825-4490-866e-209734536e09 
828 0 2025-11-04 23:49:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:556c6f6458 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-556c6f6458-pstkj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali91c7a874e23 [] [] }} ContainerID="13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" Namespace="calico-apiserver" Pod="calico-apiserver-556c6f6458-pstkj" WorkloadEndpoint="localhost-k8s-calico--apiserver--556c6f6458--pstkj-" Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:31.824 [INFO][4994] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" Namespace="calico-apiserver" Pod="calico-apiserver-556c6f6458-pstkj" WorkloadEndpoint="localhost-k8s-calico--apiserver--556c6f6458--pstkj-eth0" Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:31.904 [INFO][5011] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" HandleID="k8s-pod-network.13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" Workload="localhost-k8s-calico--apiserver--556c6f6458--pstkj-eth0" Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:31.904 [INFO][5011] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" HandleID="k8s-pod-network.13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" Workload="localhost-k8s-calico--apiserver--556c6f6458--pstkj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033cc50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-556c6f6458-pstkj", 
"timestamp":"2025-11-04 23:50:31.904037607 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:31.904 [INFO][5011] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:32.023 [INFO][5011] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:32.023 [INFO][5011] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:32.098 [INFO][5011] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" host="localhost" Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:32.104 [INFO][5011] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:32.114 [INFO][5011] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:32.118 [INFO][5011] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:32.124 [INFO][5011] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:32.124 [INFO][5011] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" host="localhost" Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:32.126 [INFO][5011] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:32.136 [INFO][5011] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" host="localhost" Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:32.145 [INFO][5011] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" host="localhost" Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:32.145 [INFO][5011] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" host="localhost" Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:32.146 [INFO][5011] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 23:50:32.186825 containerd[1684]: 2025-11-04 23:50:32.146 [INFO][5011] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" HandleID="k8s-pod-network.13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" Workload="localhost-k8s-calico--apiserver--556c6f6458--pstkj-eth0" Nov 4 23:50:32.201062 containerd[1684]: 2025-11-04 23:50:32.156 [INFO][4994] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" Namespace="calico-apiserver" Pod="calico-apiserver-556c6f6458-pstkj" WorkloadEndpoint="localhost-k8s-calico--apiserver--556c6f6458--pstkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--556c6f6458--pstkj-eth0", GenerateName:"calico-apiserver-556c6f6458-", Namespace:"calico-apiserver", SelfLink:"", UID:"011689d6-d825-4490-866e-209734536e09", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 49, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556c6f6458", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-556c6f6458-pstkj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali91c7a874e23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:32.201062 containerd[1684]: 2025-11-04 23:50:32.156 [INFO][4994] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" Namespace="calico-apiserver" Pod="calico-apiserver-556c6f6458-pstkj" WorkloadEndpoint="localhost-k8s-calico--apiserver--556c6f6458--pstkj-eth0" Nov 4 23:50:32.201062 containerd[1684]: 2025-11-04 23:50:32.156 [INFO][4994] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91c7a874e23 ContainerID="13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" Namespace="calico-apiserver" Pod="calico-apiserver-556c6f6458-pstkj" WorkloadEndpoint="localhost-k8s-calico--apiserver--556c6f6458--pstkj-eth0" Nov 4 23:50:32.201062 containerd[1684]: 2025-11-04 23:50:32.163 [INFO][4994] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" Namespace="calico-apiserver" Pod="calico-apiserver-556c6f6458-pstkj" WorkloadEndpoint="localhost-k8s-calico--apiserver--556c6f6458--pstkj-eth0" Nov 4 23:50:32.201062 containerd[1684]: 2025-11-04 23:50:32.164 [INFO][4994] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" Namespace="calico-apiserver" Pod="calico-apiserver-556c6f6458-pstkj" WorkloadEndpoint="localhost-k8s-calico--apiserver--556c6f6458--pstkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--556c6f6458--pstkj-eth0", 
GenerateName:"calico-apiserver-556c6f6458-", Namespace:"calico-apiserver", SelfLink:"", UID:"011689d6-d825-4490-866e-209734536e09", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 49, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556c6f6458", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f", Pod:"calico-apiserver-556c6f6458-pstkj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali91c7a874e23", MAC:"26:53:86:1d:66:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:50:32.201062 containerd[1684]: 2025-11-04 23:50:32.182 [INFO][4994] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" Namespace="calico-apiserver" Pod="calico-apiserver-556c6f6458-pstkj" WorkloadEndpoint="localhost-k8s-calico--apiserver--556c6f6458--pstkj-eth0" Nov 4 23:50:32.236530 containerd[1684]: time="2025-11-04T23:50:32.236493245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9799dc6f-nkdjs,Uid:003bf0e1-d270-4d41-a398-c04be88d91c0,Namespace:calico-apiserver,Attempt:0,} returns sandbox 
id \"774c384b57ec09e7bdd5087e270841134c84a7b26b42e45ffee834536d843dde\"" Nov 4 23:50:32.315942 containerd[1684]: time="2025-11-04T23:50:32.315854592Z" level=info msg="connecting to shim 13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f" address="unix:///run/containerd/s/41801efdee6e83fa18f8741872660b6b0233cbf3bd3b325aa9d4593ea8a4c400" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:50:32.333614 systemd[1]: Started cri-containerd-13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f.scope - libcontainer container 13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f. Nov 4 23:50:32.344257 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:50:32.385882 containerd[1684]: time="2025-11-04T23:50:32.385820234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556c6f6458-pstkj,Uid:011689d6-d825-4490-866e-209734536e09,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"13455342e3f0939b5991753b3b636c96eb7fd7cb232c99d05a14b02b9fa7bf4f\"" Nov 4 23:50:32.575646 systemd-networkd[1587]: calid89200bde04: Gained IPv6LL Nov 4 23:50:32.646963 containerd[1684]: time="2025-11-04T23:50:32.646928805Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:32.651487 containerd[1684]: time="2025-11-04T23:50:32.651442321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:50:32.651876 containerd[1684]: time="2025-11-04T23:50:32.651606563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:50:32.651918 kubelet[3005]: E1104 23:50:32.651684 3005 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:50:32.651918 kubelet[3005]: E1104 23:50:32.651719 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:50:32.652729 containerd[1684]: time="2025-11-04T23:50:32.652224824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:50:32.652984 kubelet[3005]: E1104 23:50:32.652951 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,Re
adOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzm8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x2sxv_calico-system(9da2e8c6-5e46-4a22-9ad0-99b948b30cea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:32.658868 
kubelet[3005]: E1104 23:50:32.654120 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sxv" podUID="9da2e8c6-5e46-4a22-9ad0-99b948b30cea" Nov 4 23:50:32.916622 kubelet[3005]: E1104 23:50:32.916587 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-548bc79dd9-9vmzn" podUID="22a0a317-e976-4828-b61a-a22d937f284c" Nov 4 23:50:32.917169 kubelet[3005]: E1104 23:50:32.917105 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sxv" podUID="9da2e8c6-5e46-4a22-9ad0-99b948b30cea" Nov 4 23:50:32.959652 systemd-networkd[1587]: cali3c76d6a52d1: Gained IPv6LL Nov 4 23:50:33.009126 containerd[1684]: time="2025-11-04T23:50:33.008991035Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Nov 4 23:50:33.017160 containerd[1684]: time="2025-11-04T23:50:33.017074207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:50:33.017160 containerd[1684]: time="2025-11-04T23:50:33.017138048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:50:33.017336 kubelet[3005]: E1104 23:50:33.017212 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:50:33.017336 kubelet[3005]: E1104 23:50:33.017240 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:50:33.017417 kubelet[3005]: E1104 23:50:33.017358 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpwdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d9799dc6f-nkdjs_calico-apiserver(003bf0e1-d270-4d41-a398-c04be88d91c0): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:33.017690 containerd[1684]: time="2025-11-04T23:50:33.017671000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:50:33.018614 kubelet[3005]: E1104 23:50:33.018536 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9799dc6f-nkdjs" podUID="003bf0e1-d270-4d41-a398-c04be88d91c0" Nov 4 23:50:33.355694 containerd[1684]: time="2025-11-04T23:50:33.355621241Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:33.361788 containerd[1684]: time="2025-11-04T23:50:33.361747875Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:50:33.361882 containerd[1684]: time="2025-11-04T23:50:33.361833397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:50:33.362129 kubelet[3005]: E1104 23:50:33.361932 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:50:33.362129 kubelet[3005]: E1104 23:50:33.361965 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:50:33.362129 kubelet[3005]: E1104 23:50:33.362056 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kjc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556c6f6458-pstkj_calico-apiserver(011689d6-d825-4490-866e-209734536e09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:33.363336 kubelet[3005]: E1104 23:50:33.363306 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556c6f6458-pstkj" podUID="011689d6-d825-4490-866e-209734536e09" Nov 4 23:50:33.727624 systemd-networkd[1587]: calif6e6507d361: Gained IPv6LL Nov 4 23:50:33.918817 kubelet[3005]: E1104 23:50:33.918782 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9799dc6f-nkdjs" podUID="003bf0e1-d270-4d41-a398-c04be88d91c0" Nov 4 23:50:33.919205 kubelet[3005]: E1104 23:50:33.918980 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556c6f6458-pstkj" podUID="011689d6-d825-4490-866e-209734536e09" Nov 4 23:50:33.919205 kubelet[3005]: E1104 23:50:33.919026 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sxv" podUID="9da2e8c6-5e46-4a22-9ad0-99b948b30cea" Nov 4 23:50:33.983722 systemd-networkd[1587]: cali91c7a874e23: Gained IPv6LL Nov 4 23:50:40.736025 containerd[1684]: time="2025-11-04T23:50:40.735947623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 
23:50:41.076134 containerd[1684]: time="2025-11-04T23:50:41.076043764Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:41.076449 containerd[1684]: time="2025-11-04T23:50:41.076427343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:50:41.076529 containerd[1684]: time="2025-11-04T23:50:41.076479502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:50:41.076868 kubelet[3005]: E1104 23:50:41.076695 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:50:41.076868 kubelet[3005]: E1104 23:50:41.076745 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:50:41.076868 kubelet[3005]: E1104 23:50:41.076842 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f306d436bc0240a0a022e65131a74f29,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8nrz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-554c89f774-pdktj_calico-system(40c5727f-c324-4fe2-b1aa-9b89dbc8158a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:41.079563 containerd[1684]: time="2025-11-04T23:50:41.079163660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 
23:50:41.399181 containerd[1684]: time="2025-11-04T23:50:41.398973794Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:41.399452 containerd[1684]: time="2025-11-04T23:50:41.399432600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:50:41.399630 containerd[1684]: time="2025-11-04T23:50:41.399515419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:50:41.399925 kubelet[3005]: E1104 23:50:41.399771 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:50:41.399925 kubelet[3005]: E1104 23:50:41.399805 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:50:41.399925 kubelet[3005]: E1104 23:50:41.399891 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nrz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-554c89f774-pdktj_calico-system(40c5727f-c324-4fe2-b1aa-9b89dbc8158a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:41.401709 kubelet[3005]: E1104 23:50:41.401221 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-554c89f774-pdktj" podUID="40c5727f-c324-4fe2-b1aa-9b89dbc8158a" Nov 4 23:50:41.820973 containerd[1684]: time="2025-11-04T23:50:41.820473029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:50:42.154123 containerd[1684]: time="2025-11-04T23:50:42.154017233Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:42.157083 containerd[1684]: time="2025-11-04T23:50:42.157013706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:50:42.157083 containerd[1684]: time="2025-11-04T23:50:42.157066190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:50:42.157198 
kubelet[3005]: E1104 23:50:42.157168 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:50:42.157403 kubelet[3005]: E1104 23:50:42.157201 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:50:42.157403 kubelet[3005]: E1104 23:50:42.157301 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4j4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556c6f6458-cpm97_calico-apiserver(bb817c56-dcfd-42dd-9fb6-688549d80317): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:42.159085 kubelet[3005]: E1104 23:50:42.159064 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556c6f6458-cpm97" podUID="bb817c56-dcfd-42dd-9fb6-688549d80317" Nov 4 23:50:44.735885 containerd[1684]: time="2025-11-04T23:50:44.735741290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:50:45.120247 containerd[1684]: time="2025-11-04T23:50:45.120003164Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:45.120605 containerd[1684]: time="2025-11-04T23:50:45.120557551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:50:45.120649 containerd[1684]: time="2025-11-04T23:50:45.120621812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:50:45.121145 kubelet[3005]: E1104 23:50:45.120784 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:50:45.121145 kubelet[3005]: E1104 23:50:45.120817 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:50:45.121365 containerd[1684]: time="2025-11-04T23:50:45.120978410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:50:45.121783 kubelet[3005]: E1104 23:50:45.121474 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,
SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzm8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x2sxv_calico-system(9da2e8c6-5e46-4a22-9ad0-99b948b30cea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:45.123384 kubelet[3005]: E1104 23:50:45.123356 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sxv" podUID="9da2e8c6-5e46-4a22-9ad0-99b948b30cea" Nov 4 23:50:45.487864 containerd[1684]: time="2025-11-04T23:50:45.487823958Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:45.488143 containerd[1684]: time="2025-11-04T23:50:45.488120377Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:50:45.488212 containerd[1684]: time="2025-11-04T23:50:45.488171340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:50:45.488299 kubelet[3005]: E1104 23:50:45.488271 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:50:45.488351 kubelet[3005]: E1104 23:50:45.488305 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:50:45.488630 containerd[1684]: time="2025-11-04T23:50:45.488606979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 
23:50:45.488857 kubelet[3005]: E1104 23:50:45.488810 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpwdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d9799dc6f-nkdjs_calico-apiserver(003bf0e1-d270-4d41-a398-c04be88d91c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:45.490526 kubelet[3005]: E1104 23:50:45.490330 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9799dc6f-nkdjs" podUID="003bf0e1-d270-4d41-a398-c04be88d91c0" Nov 4 23:50:45.794777 containerd[1684]: time="2025-11-04T23:50:45.794699232Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:45.795198 containerd[1684]: 
time="2025-11-04T23:50:45.795177837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:50:45.795243 containerd[1684]: time="2025-11-04T23:50:45.795229751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:50:45.795360 kubelet[3005]: E1104 23:50:45.795339 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:50:45.795396 kubelet[3005]: E1104 23:50:45.795371 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:50:45.795562 kubelet[3005]: E1104 23:50:45.795519 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v69cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-548bc79dd9-9vmzn_calico-system(22a0a317-e976-4828-b61a-a22d937f284c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:45.795930 containerd[1684]: time="2025-11-04T23:50:45.795906146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:50:45.797133 kubelet[3005]: E1104 23:50:45.797022 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-548bc79dd9-9vmzn" podUID="22a0a317-e976-4828-b61a-a22d937f284c" Nov 4 23:50:46.149857 containerd[1684]: 
time="2025-11-04T23:50:46.149763559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:46.151656 containerd[1684]: time="2025-11-04T23:50:46.151595051Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:50:46.151656 containerd[1684]: time="2025-11-04T23:50:46.151644588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:50:46.151835 kubelet[3005]: E1104 23:50:46.151810 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:50:46.152172 kubelet[3005]: E1104 23:50:46.152044 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:50:46.152172 kubelet[3005]: E1104 23:50:46.152142 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8ng9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j2cxk_calico-system(87667173-4fdd-43b0-b698-59acc1ea7515): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:46.154054 containerd[1684]: time="2025-11-04T23:50:46.154030495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:50:46.524462 containerd[1684]: time="2025-11-04T23:50:46.524415473Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:46.524841 containerd[1684]: time="2025-11-04T23:50:46.524818486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:50:46.524909 containerd[1684]: time="2025-11-04T23:50:46.524875100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:50:46.525040 kubelet[3005]: E1104 23:50:46.525005 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:50:46.525112 kubelet[3005]: E1104 23:50:46.525041 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:50:46.525413 kubelet[3005]: E1104 
23:50:46.525146 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8ng9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-j2cxk_calico-system(87667173-4fdd-43b0-b698-59acc1ea7515): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:46.527093 kubelet[3005]: E1104 23:50:46.527075 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:50:47.735697 containerd[1684]: time="2025-11-04T23:50:47.735627341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:50:48.059534 containerd[1684]: time="2025-11-04T23:50:48.059419707Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:50:48.062924 containerd[1684]: time="2025-11-04T23:50:48.059834037Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:50:48.062924 
containerd[1684]: time="2025-11-04T23:50:48.059886029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:50:48.062990 kubelet[3005]: E1104 23:50:48.060054 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:50:48.062990 kubelet[3005]: E1104 23:50:48.060125 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:50:48.062990 kubelet[3005]: E1104 23:50:48.060438 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kjc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556c6f6458-pstkj_calico-apiserver(011689d6-d825-4490-866e-209734536e09): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:50:48.062990 kubelet[3005]: E1104 23:50:48.062040 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556c6f6458-pstkj" podUID="011689d6-d825-4490-866e-209734536e09" Nov 4 23:50:52.737179 kubelet[3005]: E1104 23:50:52.737025 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-554c89f774-pdktj" podUID="40c5727f-c324-4fe2-b1aa-9b89dbc8158a" Nov 4 23:50:52.927194 containerd[1684]: time="2025-11-04T23:50:52.927155015Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"1587cf69f3d687559b45fb0478326d84937e839c640d1ba52d2a281401b8b76b\" id:\"5846a3268684349690a7e0de600b81a1ac660dc97ef92aef00d50dbba086a850\" pid:5238 exited_at:{seconds:1762300252 nanos:926880308}" Nov 4 23:50:57.742508 kubelet[3005]: E1104 23:50:57.741489 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9799dc6f-nkdjs" podUID="003bf0e1-d270-4d41-a398-c04be88d91c0" Nov 4 23:50:57.745311 kubelet[3005]: E1104 23:50:57.745291 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556c6f6458-cpm97" podUID="bb817c56-dcfd-42dd-9fb6-688549d80317" Nov 4 23:50:57.745573 kubelet[3005]: E1104 23:50:57.745539 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed 
to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:50:58.734945 kubelet[3005]: E1104 23:50:58.734854 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sxv" podUID="9da2e8c6-5e46-4a22-9ad0-99b948b30cea" Nov 4 23:51:01.737256 kubelet[3005]: E1104 23:51:01.737172 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-548bc79dd9-9vmzn" podUID="22a0a317-e976-4828-b61a-a22d937f284c" Nov 4 23:51:02.736231 kubelet[3005]: E1104 23:51:02.736192 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556c6f6458-pstkj" podUID="011689d6-d825-4490-866e-209734536e09" Nov 4 23:51:04.912165 systemd[1]: Started sshd@7-139.178.70.101:22-139.178.68.195:41864.service - OpenSSH per-connection server daemon (139.178.68.195:41864). Nov 4 23:51:05.148957 sshd[5265]: Accepted publickey for core from 139.178.68.195 port 41864 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc Nov 4 23:51:05.169920 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:51:05.181060 systemd-logind[1666]: New session 10 of user core. Nov 4 23:51:05.187748 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 23:51:07.003312 sshd[5268]: Connection closed by 139.178.68.195 port 41864 Nov 4 23:51:07.003909 sshd-session[5265]: pam_unix(sshd:session): session closed for user core Nov 4 23:51:07.016483 systemd[1]: sshd@7-139.178.70.101:22-139.178.68.195:41864.service: Deactivated successfully. Nov 4 23:51:07.018372 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 23:51:07.019330 systemd-logind[1666]: Session 10 logged out. Waiting for processes to exit. Nov 4 23:51:07.021187 systemd-logind[1666]: Removed session 10. 
Nov 4 23:51:07.737694 containerd[1684]: time="2025-11-04T23:51:07.737658742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:51:08.090115 containerd[1684]: time="2025-11-04T23:51:08.089842738Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:51:08.092240 containerd[1684]: time="2025-11-04T23:51:08.092180863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:51:08.092240 containerd[1684]: time="2025-11-04T23:51:08.092215781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:51:08.092396 kubelet[3005]: E1104 23:51:08.092346 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:51:08.093342 kubelet[3005]: E1104 23:51:08.092407 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:51:08.093342 kubelet[3005]: E1104 23:51:08.092575 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f306d436bc0240a0a022e65131a74f29,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8nrz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-554c89f774-pdktj_calico-system(40c5727f-c324-4fe2-b1aa-9b89dbc8158a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:51:08.095325 containerd[1684]: time="2025-11-04T23:51:08.095091768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 
23:51:08.453030 containerd[1684]: time="2025-11-04T23:51:08.452991261Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:51:08.453629 containerd[1684]: time="2025-11-04T23:51:08.453586851Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:51:08.453773 containerd[1684]: time="2025-11-04T23:51:08.453749215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:51:08.454364 kubelet[3005]: E1104 23:51:08.454324 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:51:08.454438 kubelet[3005]: E1104 23:51:08.454378 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:51:08.454617 kubelet[3005]: E1104 23:51:08.454493 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nrz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-554c89f774-pdktj_calico-system(40c5727f-c324-4fe2-b1aa-9b89dbc8158a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:51:08.456640 kubelet[3005]: E1104 23:51:08.456554 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-554c89f774-pdktj" podUID="40c5727f-c324-4fe2-b1aa-9b89dbc8158a" Nov 4 23:51:09.736714 containerd[1684]: time="2025-11-04T23:51:09.736653563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:51:10.100989 containerd[1684]: time="2025-11-04T23:51:10.100783922Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:51:10.107799 containerd[1684]: time="2025-11-04T23:51:10.107775914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:51:10.107898 containerd[1684]: time="2025-11-04T23:51:10.107826316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:51:10.107930 
kubelet[3005]: E1104 23:51:10.107905 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:51:10.108105 kubelet[3005]: E1104 23:51:10.107943 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:51:10.108160 kubelet[3005]: E1104 23:51:10.108101 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpwdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d9799dc6f-nkdjs_calico-apiserver(003bf0e1-d270-4d41-a398-c04be88d91c0): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:51:10.108619 containerd[1684]: time="2025-11-04T23:51:10.108602563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:51:10.109749 kubelet[3005]: E1104 23:51:10.109663 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9799dc6f-nkdjs" podUID="003bf0e1-d270-4d41-a398-c04be88d91c0" Nov 4 23:51:10.453044 containerd[1684]: time="2025-11-04T23:51:10.453014645Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:51:10.460269 containerd[1684]: time="2025-11-04T23:51:10.460234676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:51:10.460356 containerd[1684]: time="2025-11-04T23:51:10.460296953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:51:10.460474 kubelet[3005]: E1104 23:51:10.460452 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:51:10.460637 kubelet[3005]: E1104 23:51:10.460577 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:51:10.461332 containerd[1684]: time="2025-11-04T23:51:10.460881598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:51:10.461369 kubelet[3005]: E1104 23:51:10.461084 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,
SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzm8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x2sxv_calico-system(9da2e8c6-5e46-4a22-9ad0-99b948b30cea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:51:10.462541 kubelet[3005]: E1104 23:51:10.462528 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sxv" podUID="9da2e8c6-5e46-4a22-9ad0-99b948b30cea" Nov 4 23:51:10.783703 containerd[1684]: time="2025-11-04T23:51:10.783325715Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:51:10.783703 containerd[1684]: time="2025-11-04T23:51:10.783624278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:51:10.783703 containerd[1684]: time="2025-11-04T23:51:10.783698272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:51:10.783991 kubelet[3005]: E1104 23:51:10.783785 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:51:10.783991 kubelet[3005]: E1104 23:51:10.783823 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:51:10.783991 kubelet[3005]: E1104 23:51:10.783920 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4j4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556c6f6458-cpm97_calico-apiserver(bb817c56-dcfd-42dd-9fb6-688549d80317): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:51:10.785221 kubelet[3005]: E1104 23:51:10.785195 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556c6f6458-cpm97" podUID="bb817c56-dcfd-42dd-9fb6-688549d80317" Nov 4 23:51:11.744692 containerd[1684]: time="2025-11-04T23:51:11.744201927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:51:12.013816 systemd[1]: Started 
sshd@8-139.178.70.101:22-139.178.68.195:41866.service - OpenSSH per-connection server daemon (139.178.68.195:41866). Nov 4 23:51:12.076287 sshd[5302]: Accepted publickey for core from 139.178.68.195 port 41866 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc Nov 4 23:51:12.077877 sshd-session[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:51:12.082530 systemd-logind[1666]: New session 11 of user core. Nov 4 23:51:12.087608 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 23:51:12.089648 containerd[1684]: time="2025-11-04T23:51:12.089237673Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:51:12.089809 containerd[1684]: time="2025-11-04T23:51:12.089656195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:51:12.089935 containerd[1684]: time="2025-11-04T23:51:12.089870612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:51:12.090391 kubelet[3005]: E1104 23:51:12.090024 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:51:12.090391 kubelet[3005]: E1104 23:51:12.090071 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:51:12.091037 kubelet[3005]: E1104 23:51:12.090470 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8ng9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-j2cxk_calico-system(87667173-4fdd-43b0-b698-59acc1ea7515): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:51:12.094337 containerd[1684]: time="2025-11-04T23:51:12.094292935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:51:12.221211 sshd[5305]: Connection closed by 139.178.68.195 port 41866 Nov 4 23:51:12.222814 sshd-session[5302]: pam_unix(sshd:session): session closed for user core Nov 4 23:51:12.227810 systemd[1]: sshd@8-139.178.70.101:22-139.178.68.195:41866.service: Deactivated successfully. Nov 4 23:51:12.231786 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 23:51:12.233547 systemd-logind[1666]: Session 11 logged out. Waiting for processes to exit. Nov 4 23:51:12.234794 systemd-logind[1666]: Removed session 11. 
Nov 4 23:51:12.423969 containerd[1684]: time="2025-11-04T23:51:12.423924815Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:51:12.424537 containerd[1684]: time="2025-11-04T23:51:12.424467163Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:51:12.424600 containerd[1684]: time="2025-11-04T23:51:12.424526032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:51:12.424768 kubelet[3005]: E1104 23:51:12.424714 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:51:12.424768 kubelet[3005]: E1104 23:51:12.424755 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:51:12.425225 kubelet[3005]: E1104 23:51:12.424963 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8ng9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j2cxk_calico-system(87667173-4fdd-43b0-b698-59acc1ea7515): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:51:12.426756 kubelet[3005]: E1104 23:51:12.426633 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:51:12.736428 containerd[1684]: time="2025-11-04T23:51:12.736063088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:51:13.110608 containerd[1684]: time="2025-11-04T23:51:13.109952345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:51:13.111654 containerd[1684]: time="2025-11-04T23:51:13.111451734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:51:13.111654 containerd[1684]: time="2025-11-04T23:51:13.111470390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:51:13.111958 kubelet[3005]: 
E1104 23:51:13.111916 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:51:13.112341 kubelet[3005]: E1104 23:51:13.111967 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:51:13.112341 kubelet[3005]: E1104 23:51:13.112081 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPro
pagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v69cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-548bc79dd9-9vmzn_calico-system(22a0a317-e976-4828-b61a-a22d937f284c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:51:13.113523 kubelet[3005]: E1104 23:51:13.113377 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-548bc79dd9-9vmzn" podUID="22a0a317-e976-4828-b61a-a22d937f284c" Nov 4 23:51:17.234643 systemd[1]: Started sshd@9-139.178.70.101:22-139.178.68.195:54828.service - OpenSSH per-connection server daemon (139.178.68.195:54828). Nov 4 23:51:17.289858 sshd[5320]: Accepted publickey for core from 139.178.68.195 port 54828 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc Nov 4 23:51:17.291566 sshd-session[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:51:17.298564 systemd-logind[1666]: New session 12 of user core. Nov 4 23:51:17.303765 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 23:51:17.404428 sshd[5324]: Connection closed by 139.178.68.195 port 54828 Nov 4 23:51:17.405971 sshd-session[5320]: pam_unix(sshd:session): session closed for user core Nov 4 23:51:17.415554 systemd[1]: sshd@9-139.178.70.101:22-139.178.68.195:54828.service: Deactivated successfully. Nov 4 23:51:17.417722 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 23:51:17.418911 systemd-logind[1666]: Session 12 logged out. Waiting for processes to exit. Nov 4 23:51:17.423089 systemd[1]: Started sshd@10-139.178.70.101:22-139.178.68.195:54830.service - OpenSSH per-connection server daemon (139.178.68.195:54830). Nov 4 23:51:17.424044 systemd-logind[1666]: Removed session 12. 
Nov 4 23:51:17.465289 sshd[5337]: Accepted publickey for core from 139.178.68.195 port 54830 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc Nov 4 23:51:17.466266 sshd-session[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:51:17.470646 systemd-logind[1666]: New session 13 of user core. Nov 4 23:51:17.476875 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 23:51:17.709201 sshd[5340]: Connection closed by 139.178.68.195 port 54830 Nov 4 23:51:17.709125 sshd-session[5337]: pam_unix(sshd:session): session closed for user core Nov 4 23:51:17.719901 systemd[1]: sshd@10-139.178.70.101:22-139.178.68.195:54830.service: Deactivated successfully. Nov 4 23:51:17.721510 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 23:51:17.722283 systemd-logind[1666]: Session 13 logged out. Waiting for processes to exit. Nov 4 23:51:17.724833 systemd[1]: Started sshd@11-139.178.70.101:22-139.178.68.195:54842.service - OpenSSH per-connection server daemon (139.178.68.195:54842). Nov 4 23:51:17.726739 systemd-logind[1666]: Removed session 13. Nov 4 23:51:17.739170 containerd[1684]: time="2025-11-04T23:51:17.739143227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:51:17.771240 sshd[5350]: Accepted publickey for core from 139.178.68.195 port 54842 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc Nov 4 23:51:17.773623 sshd-session[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:51:17.779099 systemd-logind[1666]: New session 14 of user core. Nov 4 23:51:17.785650 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 23:51:17.955318 sshd[5353]: Connection closed by 139.178.68.195 port 54842 Nov 4 23:51:17.955668 sshd-session[5350]: pam_unix(sshd:session): session closed for user core Nov 4 23:51:17.959883 systemd[1]: sshd@11-139.178.70.101:22-139.178.68.195:54842.service: Deactivated successfully. 
Nov 4 23:51:17.963037 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 23:51:17.965194 systemd-logind[1666]: Session 14 logged out. Waiting for processes to exit. Nov 4 23:51:17.966362 systemd-logind[1666]: Removed session 14. Nov 4 23:51:18.087891 containerd[1684]: time="2025-11-04T23:51:18.087837193Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:51:18.093365 containerd[1684]: time="2025-11-04T23:51:18.093342347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:51:18.093414 containerd[1684]: time="2025-11-04T23:51:18.093392469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:51:18.093550 kubelet[3005]: E1104 23:51:18.093484 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:51:18.093855 kubelet[3005]: E1104 23:51:18.093559 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:51:18.093855 kubelet[3005]: E1104 23:51:18.093647 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kjc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556c6f6458-pstkj_calico-apiserver(011689d6-d825-4490-866e-209734536e09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:51:18.095011 kubelet[3005]: E1104 23:51:18.094982 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556c6f6458-pstkj" podUID="011689d6-d825-4490-866e-209734536e09" Nov 4 23:51:21.739838 kubelet[3005]: E1104 23:51:21.739786 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-554c89f774-pdktj" podUID="40c5727f-c324-4fe2-b1aa-9b89dbc8158a" Nov 4 23:51:22.969486 systemd[1]: Started sshd@12-139.178.70.101:22-139.178.68.195:54844.service - OpenSSH per-connection server daemon (139.178.68.195:54844). Nov 4 23:51:23.153409 containerd[1684]: time="2025-11-04T23:51:23.153378941Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1587cf69f3d687559b45fb0478326d84937e839c640d1ba52d2a281401b8b76b\" id:\"3347014998159766effda787ed9e5d6c9ec09a7bd085018f1679e424ef433db5\" pid:5382 exit_status:1 exited_at:{seconds:1762300283 nanos:152896385}" Nov 4 23:51:23.363344 sshd[5388]: Accepted publickey for core from 139.178.68.195 port 54844 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc Nov 4 23:51:23.366277 sshd-session[5388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:51:23.373362 systemd-logind[1666]: New session 15 of user core. Nov 4 23:51:23.380795 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 4 23:51:23.731480 sshd[5396]: Connection closed by 139.178.68.195 port 54844 Nov 4 23:51:23.731936 sshd-session[5388]: pam_unix(sshd:session): session closed for user core Nov 4 23:51:23.735590 systemd-logind[1666]: Session 15 logged out. Waiting for processes to exit. Nov 4 23:51:23.735839 systemd[1]: sshd@12-139.178.70.101:22-139.178.68.195:54844.service: Deactivated successfully. Nov 4 23:51:23.737421 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 23:51:23.738591 systemd-logind[1666]: Removed session 15. Nov 4 23:51:24.736426 kubelet[3005]: E1104 23:51:24.736379 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556c6f6458-cpm97" podUID="bb817c56-dcfd-42dd-9fb6-688549d80317" Nov 4 23:51:24.737372 kubelet[3005]: E1104 23:51:24.736724 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-548bc79dd9-9vmzn" podUID="22a0a317-e976-4828-b61a-a22d937f284c" Nov 4 23:51:24.737822 kubelet[3005]: E1104 23:51:24.737667 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9799dc6f-nkdjs" podUID="003bf0e1-d270-4d41-a398-c04be88d91c0" Nov 4 23:51:25.736593 kubelet[3005]: E1104 23:51:25.736517 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sxv" podUID="9da2e8c6-5e46-4a22-9ad0-99b948b30cea" Nov 4 23:51:25.738053 kubelet[3005]: E1104 23:51:25.738001 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:51:28.742653 systemd[1]: Started sshd@13-139.178.70.101:22-139.178.68.195:55508.service - OpenSSH per-connection server daemon (139.178.68.195:55508). Nov 4 23:51:28.797666 sshd[5409]: Accepted publickey for core from 139.178.68.195 port 55508 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc Nov 4 23:51:28.799334 sshd-session[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:51:28.802775 systemd-logind[1666]: New session 16 of user core. Nov 4 23:51:28.811820 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 23:51:28.940798 sshd[5412]: Connection closed by 139.178.68.195 port 55508 Nov 4 23:51:28.941646 sshd-session[5409]: pam_unix(sshd:session): session closed for user core Nov 4 23:51:28.944835 systemd[1]: sshd@13-139.178.70.101:22-139.178.68.195:55508.service: Deactivated successfully. Nov 4 23:51:28.946194 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 23:51:28.948203 systemd-logind[1666]: Session 16 logged out. Waiting for processes to exit. Nov 4 23:51:28.949236 systemd-logind[1666]: Removed session 16. 
Nov 4 23:51:31.736555 kubelet[3005]: E1104 23:51:31.736181 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556c6f6458-pstkj" podUID="011689d6-d825-4490-866e-209734536e09" Nov 4 23:51:33.950474 systemd[1]: Started sshd@14-139.178.70.101:22-139.178.68.195:47720.service - OpenSSH per-connection server daemon (139.178.68.195:47720). Nov 4 23:51:33.992654 sshd[5423]: Accepted publickey for core from 139.178.68.195 port 47720 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc Nov 4 23:51:33.993458 sshd-session[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:51:33.996231 systemd-logind[1666]: New session 17 of user core. Nov 4 23:51:34.001598 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 4 23:51:34.120892 sshd[5426]: Connection closed by 139.178.68.195 port 47720 Nov 4 23:51:34.142653 sshd-session[5423]: pam_unix(sshd:session): session closed for user core Nov 4 23:51:34.158710 systemd-logind[1666]: Session 17 logged out. Waiting for processes to exit. Nov 4 23:51:34.159697 systemd[1]: sshd@14-139.178.70.101:22-139.178.68.195:47720.service: Deactivated successfully. Nov 4 23:51:34.160937 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 23:51:34.165123 systemd-logind[1666]: Removed session 17. 
Nov 4 23:51:34.736090 kubelet[3005]: E1104 23:51:34.736052 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-554c89f774-pdktj" podUID="40c5727f-c324-4fe2-b1aa-9b89dbc8158a" Nov 4 23:51:35.738546 kubelet[3005]: E1104 23:51:35.738521 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556c6f6458-cpm97" podUID="bb817c56-dcfd-42dd-9fb6-688549d80317" Nov 4 23:51:36.735995 kubelet[3005]: E1104 23:51:36.735715 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9799dc6f-nkdjs" podUID="003bf0e1-d270-4d41-a398-c04be88d91c0" Nov 4 23:51:37.735524 kubelet[3005]: E1104 23:51:37.735146 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-548bc79dd9-9vmzn" podUID="22a0a317-e976-4828-b61a-a22d937f284c" Nov 4 23:51:39.132432 systemd[1]: Started sshd@15-139.178.70.101:22-139.178.68.195:47728.service - OpenSSH per-connection server daemon (139.178.68.195:47728). Nov 4 23:51:39.283757 sshd[5442]: Accepted publickey for core from 139.178.68.195 port 47728 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc Nov 4 23:51:39.285476 sshd-session[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:51:39.293620 systemd-logind[1666]: New session 18 of user core. Nov 4 23:51:39.297760 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 23:51:39.447339 sshd[5445]: Connection closed by 139.178.68.195 port 47728 Nov 4 23:51:39.458537 systemd[1]: Started sshd@16-139.178.70.101:22-139.178.68.195:47744.service - OpenSSH per-connection server daemon (139.178.68.195:47744). 
Nov 4 23:51:39.462033 sshd-session[5442]: pam_unix(sshd:session): session closed for user core Nov 4 23:51:39.497238 systemd[1]: sshd@15-139.178.70.101:22-139.178.68.195:47728.service: Deactivated successfully. Nov 4 23:51:39.498424 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 23:51:39.500231 systemd-logind[1666]: Session 18 logged out. Waiting for processes to exit. Nov 4 23:51:39.500756 systemd-logind[1666]: Removed session 18. Nov 4 23:51:39.587146 sshd[5453]: Accepted publickey for core from 139.178.68.195 port 47744 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc Nov 4 23:51:39.587824 sshd-session[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:51:39.590787 systemd-logind[1666]: New session 19 of user core. Nov 4 23:51:39.595587 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 23:51:40.138858 sshd[5459]: Connection closed by 139.178.68.195 port 47744 Nov 4 23:51:40.139261 sshd-session[5453]: pam_unix(sshd:session): session closed for user core Nov 4 23:51:40.149156 systemd[1]: Started sshd@17-139.178.70.101:22-139.178.68.195:47760.service - OpenSSH per-connection server daemon (139.178.68.195:47760). Nov 4 23:51:40.149763 systemd[1]: sshd@16-139.178.70.101:22-139.178.68.195:47744.service: Deactivated successfully. Nov 4 23:51:40.151669 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 23:51:40.153796 systemd-logind[1666]: Session 19 logged out. Waiting for processes to exit. Nov 4 23:51:40.155017 systemd-logind[1666]: Removed session 19. Nov 4 23:51:40.230932 sshd[5466]: Accepted publickey for core from 139.178.68.195 port 47760 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc Nov 4 23:51:40.232564 sshd-session[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:51:40.237461 systemd-logind[1666]: New session 20 of user core. 
Nov 4 23:51:40.243321 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 4 23:51:40.736956 kubelet[3005]: E1104 23:51:40.736921 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sxv" podUID="9da2e8c6-5e46-4a22-9ad0-99b948b30cea" Nov 4 23:51:40.738096 kubelet[3005]: E1104 23:51:40.738061 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:51:41.043079 sshd[5472]: Connection closed by 139.178.68.195 port 47760 Nov 4 23:51:41.046629 sshd-session[5466]: pam_unix(sshd:session): session closed for user core Nov 4 23:51:41.054686 systemd[1]: Started 
sshd@18-139.178.70.101:22-139.178.68.195:47766.service - OpenSSH per-connection server daemon (139.178.68.195:47766). Nov 4 23:51:41.055002 systemd[1]: sshd@17-139.178.70.101:22-139.178.68.195:47760.service: Deactivated successfully. Nov 4 23:51:41.057465 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 23:51:41.059073 systemd-logind[1666]: Session 20 logged out. Waiting for processes to exit. Nov 4 23:51:41.063457 systemd-logind[1666]: Removed session 20. Nov 4 23:51:41.137367 sshd[5483]: Accepted publickey for core from 139.178.68.195 port 47766 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc Nov 4 23:51:41.138660 sshd-session[5483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:51:41.143685 systemd-logind[1666]: New session 21 of user core. Nov 4 23:51:41.148013 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 4 23:51:41.495347 sshd[5492]: Connection closed by 139.178.68.195 port 47766 Nov 4 23:51:41.495285 sshd-session[5483]: pam_unix(sshd:session): session closed for user core Nov 4 23:51:41.505030 systemd[1]: sshd@18-139.178.70.101:22-139.178.68.195:47766.service: Deactivated successfully. Nov 4 23:51:41.509197 systemd[1]: session-21.scope: Deactivated successfully. Nov 4 23:51:41.510643 systemd-logind[1666]: Session 21 logged out. Waiting for processes to exit. Nov 4 23:51:41.514595 systemd[1]: Started sshd@19-139.178.70.101:22-139.178.68.195:47778.service - OpenSSH per-connection server daemon (139.178.68.195:47778). Nov 4 23:51:41.515845 systemd-logind[1666]: Removed session 21. Nov 4 23:51:41.568509 sshd[5502]: Accepted publickey for core from 139.178.68.195 port 47778 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc Nov 4 23:51:41.569944 sshd-session[5502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:51:41.574713 systemd-logind[1666]: New session 22 of user core. 
Nov 4 23:51:41.579719 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 4 23:51:41.694523 sshd[5505]: Connection closed by 139.178.68.195 port 47778 Nov 4 23:51:41.694908 sshd-session[5502]: pam_unix(sshd:session): session closed for user core Nov 4 23:51:41.697468 systemd[1]: sshd@19-139.178.70.101:22-139.178.68.195:47778.service: Deactivated successfully. Nov 4 23:51:41.699012 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 23:51:41.700405 systemd-logind[1666]: Session 22 logged out. Waiting for processes to exit. Nov 4 23:51:41.701162 systemd-logind[1666]: Removed session 22. Nov 4 23:51:43.736061 kubelet[3005]: E1104 23:51:43.736028 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556c6f6458-pstkj" podUID="011689d6-d825-4490-866e-209734536e09" Nov 4 23:51:46.706298 systemd[1]: Started sshd@20-139.178.70.101:22-139.178.68.195:42534.service - OpenSSH per-connection server daemon (139.178.68.195:42534). Nov 4 23:51:46.750976 sshd[5521]: Accepted publickey for core from 139.178.68.195 port 42534 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc Nov 4 23:51:46.751933 sshd-session[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:51:46.757255 systemd-logind[1666]: New session 23 of user core. Nov 4 23:51:46.760514 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 4 23:51:46.873753 sshd[5524]: Connection closed by 139.178.68.195 port 42534 Nov 4 23:51:46.873943 sshd-session[5521]: pam_unix(sshd:session): session closed for user core Nov 4 23:51:46.878909 systemd[1]: sshd@20-139.178.70.101:22-139.178.68.195:42534.service: Deactivated successfully. Nov 4 23:51:46.880690 systemd[1]: session-23.scope: Deactivated successfully. Nov 4 23:51:46.881911 systemd-logind[1666]: Session 23 logged out. Waiting for processes to exit. Nov 4 23:51:46.882975 systemd-logind[1666]: Removed session 23. Nov 4 23:51:47.736726 kubelet[3005]: E1104 23:51:47.736419 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d9799dc6f-nkdjs" podUID="003bf0e1-d270-4d41-a398-c04be88d91c0" Nov 4 23:51:47.736726 kubelet[3005]: E1104 23:51:47.736689 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556c6f6458-cpm97" podUID="bb817c56-dcfd-42dd-9fb6-688549d80317" Nov 4 23:51:49.736091 containerd[1684]: time="2025-11-04T23:51:49.736053412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:51:50.087249 containerd[1684]: 
time="2025-11-04T23:51:50.087167073Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:51:50.087901 containerd[1684]: time="2025-11-04T23:51:50.087483326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:51:50.087901 containerd[1684]: time="2025-11-04T23:51:50.087551404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:51:50.087951 kubelet[3005]: E1104 23:51:50.087648 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:51:50.087951 kubelet[3005]: E1104 23:51:50.087700 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:51:50.087951 kubelet[3005]: E1104 23:51:50.087781 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f306d436bc0240a0a022e65131a74f29,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8nrz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-554c89f774-pdktj_calico-system(40c5727f-c324-4fe2-b1aa-9b89dbc8158a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:51:50.090530 containerd[1684]: time="2025-11-04T23:51:50.090378977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 
23:51:50.433756 containerd[1684]: time="2025-11-04T23:51:50.433723129Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:51:50.437785 containerd[1684]: time="2025-11-04T23:51:50.437758919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:51:50.437886 containerd[1684]: time="2025-11-04T23:51:50.437810464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:51:50.438133 kubelet[3005]: E1104 23:51:50.437976 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:51:50.438133 kubelet[3005]: E1104 23:51:50.438024 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:51:50.438133 kubelet[3005]: E1104 23:51:50.438107 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nrz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-554c89f774-pdktj_calico-system(40c5727f-c324-4fe2-b1aa-9b89dbc8158a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:51:50.439798 kubelet[3005]: E1104 23:51:50.439769 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-554c89f774-pdktj" podUID="40c5727f-c324-4fe2-b1aa-9b89dbc8158a" Nov 4 23:51:51.737569 kubelet[3005]: E1104 23:51:51.736753 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-548bc79dd9-9vmzn" podUID="22a0a317-e976-4828-b61a-a22d937f284c" Nov 4 23:51:51.883111 systemd[1]: Started sshd@21-139.178.70.101:22-139.178.68.195:42546.service - OpenSSH per-connection server daemon (139.178.68.195:42546). 
Nov 4 23:51:51.923039 sshd[5545]: Accepted publickey for core from 139.178.68.195 port 42546 ssh2: RSA SHA256:SN3AO0qmEwLXzHNXZ4jj0XXdhb1SagWw2vkr2zk9BUc Nov 4 23:51:51.924019 sshd-session[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:51:51.928463 systemd-logind[1666]: New session 24 of user core. Nov 4 23:51:51.934586 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 4 23:51:52.034929 sshd[5548]: Connection closed by 139.178.68.195 port 42546 Nov 4 23:51:52.035477 sshd-session[5545]: pam_unix(sshd:session): session closed for user core Nov 4 23:51:52.039743 systemd[1]: sshd@21-139.178.70.101:22-139.178.68.195:42546.service: Deactivated successfully. Nov 4 23:51:52.041120 systemd[1]: session-24.scope: Deactivated successfully. Nov 4 23:51:52.042012 systemd-logind[1666]: Session 24 logged out. Waiting for processes to exit. Nov 4 23:51:52.042689 systemd-logind[1666]: Removed session 24. Nov 4 23:51:52.736063 containerd[1684]: time="2025-11-04T23:51:52.735997591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:51:52.940792 containerd[1684]: time="2025-11-04T23:51:52.940736408Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1587cf69f3d687559b45fb0478326d84937e839c640d1ba52d2a281401b8b76b\" id:\"826f648e239429c336be49f44c24185368573deb6065783d552de5654b39a097\" pid:5573 exited_at:{seconds:1762300312 nanos:940421726}" Nov 4 23:51:53.069581 containerd[1684]: time="2025-11-04T23:51:53.069352657Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:51:53.069851 containerd[1684]: time="2025-11-04T23:51:53.069820705Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 
23:51:53.069979 containerd[1684]: time="2025-11-04T23:51:53.069885389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:51:53.070048 kubelet[3005]: E1104 23:51:53.069974 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:51:53.070048 kubelet[3005]: E1104 23:51:53.070006 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:51:53.070524 kubelet[3005]: E1104 23:51:53.070105 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8ng9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j2cxk_calico-system(87667173-4fdd-43b0-b698-59acc1ea7515): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:51:53.072301 containerd[1684]: time="2025-11-04T23:51:53.072223429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:51:53.424158 containerd[1684]: time="2025-11-04T23:51:53.424125145Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:51:53.428578 containerd[1684]: time="2025-11-04T23:51:53.428555488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:51:53.429698 kubelet[3005]: E1104 23:51:53.428792 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:51:53.429698 kubelet[3005]: E1104 23:51:53.428827 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:51:53.429698 kubelet[3005]: E1104 23:51:53.428902 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8ng9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j2cxk_calico-system(87667173-4fdd-43b0-b698-59acc1ea7515): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:51:53.430100 kubelet[3005]: E1104 23:51:53.430082 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j2cxk" podUID="87667173-4fdd-43b0-b698-59acc1ea7515" Nov 4 23:51:53.437021 containerd[1684]: time="2025-11-04T23:51:53.428607025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:51:53.737950 containerd[1684]: time="2025-11-04T23:51:53.737493640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:51:54.077531 containerd[1684]: time="2025-11-04T23:51:54.077376299Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:51:54.077925 containerd[1684]: time="2025-11-04T23:51:54.077853874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:51:54.077925 containerd[1684]: 
time="2025-11-04T23:51:54.077908823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:51:54.078074 kubelet[3005]: E1104 23:51:54.078025 3005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:51:54.078599 kubelet[3005]: E1104 23:51:54.078076 3005 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:51:54.078599 kubelet[3005]: E1104 23:51:54.078175 3005 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzm8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x2sxv_calico-system(9da2e8c6-5e46-4a22-9ad0-99b948b30cea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:51:54.079851 kubelet[3005]: E1104 23:51:54.079826 3005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x2sxv" podUID="9da2e8c6-5e46-4a22-9ad0-99b948b30cea"