Oct 31 14:08:15.862640 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 12:16:40 -00 2025 Oct 31 14:08:15.862657 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7e4f6395c1f11b5d1e07a15155afadb91de20f1aac1cd9cff8fc1baca215a11a Oct 31 14:08:15.862664 kernel: Disabled fast string operations Oct 31 14:08:15.862669 kernel: BIOS-provided physical RAM map: Oct 31 14:08:15.862673 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Oct 31 14:08:15.862677 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Oct 31 14:08:15.862683 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Oct 31 14:08:15.862688 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Oct 31 14:08:15.862693 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Oct 31 14:08:15.862698 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Oct 31 14:08:15.862702 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Oct 31 14:08:15.862707 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Oct 31 14:08:15.862711 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Oct 31 14:08:15.862716 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Oct 31 14:08:15.862723 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Oct 31 14:08:15.862728 kernel: NX (Execute Disable) protection: active Oct 31 14:08:15.862733 kernel: APIC: Static calls initialized Oct 31 14:08:15.862738 kernel: SMBIOS 2.7 present. Oct 31 14:08:15.862743 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Oct 31 14:08:15.862749 kernel: DMI: Memory slots populated: 1/128 Oct 31 14:08:15.862754 kernel: vmware: hypercall mode: 0x00 Oct 31 14:08:15.862760 kernel: Hypervisor detected: VMware Oct 31 14:08:15.862765 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Oct 31 14:08:15.862770 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Oct 31 14:08:15.862775 kernel: vmware: using clock offset of 3321395521 ns Oct 31 14:08:15.862780 kernel: tsc: Detected 3408.000 MHz processor Oct 31 14:08:15.862786 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 31 14:08:15.862792 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 31 14:08:15.862797 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Oct 31 14:08:15.862802 kernel: total RAM covered: 3072M Oct 31 14:08:15.862809 kernel: Found optimal setting for mtrr clean up Oct 31 14:08:15.862815 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Oct 31 14:08:15.862820 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Oct 31 14:08:15.862825 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 31 14:08:15.862830 kernel: Using GB pages for direct mapping Oct 31 14:08:15.862836 kernel: ACPI: Early table checksum verification disabled Oct 31 14:08:15.862841 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Oct 31 14:08:15.862847 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Oct 31 14:08:15.862853 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Oct 31 14:08:15.862858 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Oct 31 14:08:15.862865 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Oct 31 14:08:15.862871 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Oct 31 14:08:15.862877 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Oct 31 14:08:15.862883 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Oct 31 14:08:15.862889 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Oct 31 14:08:15.862895 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Oct 31 14:08:15.862901 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Oct 31 14:08:15.862906 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Oct 31 14:08:15.862912 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Oct 31 14:08:15.862919 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Oct 31 14:08:15.862925 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Oct 31 14:08:15.862930 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Oct 31 14:08:15.862936 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Oct 31 14:08:15.862941 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Oct 31 14:08:15.862947 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Oct 31 14:08:15.862952 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Oct 31 14:08:15.862957 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Oct 31 14:08:15.862964 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Oct 31 14:08:15.862970 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 31 14:08:15.862975 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Oct 31 14:08:15.862981 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Oct 31 14:08:15.862986 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] Oct 31 14:08:15.862992 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Oct 31 14:08:15.862998 kernel: Zone ranges: Oct 31 14:08:15.863004 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 31 14:08:15.863010 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Oct 31 14:08:15.863015 kernel: Normal empty Oct 31 14:08:15.863034 kernel: Device empty Oct 31 14:08:15.863040 kernel: Movable zone start for each node Oct 31 14:08:15.863046 kernel: Early memory node ranges Oct 31 14:08:15.863051 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Oct 31 14:08:15.863056 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Oct 31 14:08:15.863063 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Oct 31 14:08:15.863069 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Oct 31 14:08:15.863075 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 31 14:08:15.863080 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Oct 31 14:08:15.863086 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Oct 31 14:08:15.863091 kernel: ACPI: PM-Timer IO Port: 0x1008 Oct 31 14:08:15.863105 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Oct 31 14:08:15.863111 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Oct 31 14:08:15.863118 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Oct 31 14:08:15.863124 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Oct 31 14:08:15.863130 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Oct 31 14:08:15.863135 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Oct 31 14:08:15.863141 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge 
lint[0x1]) Oct 31 14:08:15.863146 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Oct 31 14:08:15.863151 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Oct 31 14:08:15.863157 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Oct 31 14:08:15.863163 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Oct 31 14:08:15.863169 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Oct 31 14:08:15.863174 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Oct 31 14:08:15.863180 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Oct 31 14:08:15.863185 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Oct 31 14:08:15.863190 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Oct 31 14:08:15.863196 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Oct 31 14:08:15.863201 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Oct 31 14:08:15.863208 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Oct 31 14:08:15.863213 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Oct 31 14:08:15.863219 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Oct 31 14:08:15.863224 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Oct 31 14:08:15.863230 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Oct 31 14:08:15.863235 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Oct 31 14:08:15.863241 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Oct 31 14:08:15.863246 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Oct 31 14:08:15.863252 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Oct 31 14:08:15.863258 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Oct 31 14:08:15.863263 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Oct 31 14:08:15.863269 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Oct 31 14:08:15.863274 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Oct 31 14:08:15.863280 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Oct 31 14:08:15.863285 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Oct 31 14:08:15.863291 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Oct 31 14:08:15.863297 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Oct 31 14:08:15.863303 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Oct 31 14:08:15.863308 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Oct 31 14:08:15.863313 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Oct 31 14:08:15.863319 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Oct 31 14:08:15.863325 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Oct 31 14:08:15.863335 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Oct 31 14:08:15.863340 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Oct 31 14:08:15.863346 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Oct 31 14:08:15.863353 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Oct 31 14:08:15.863358 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Oct 31 14:08:15.863364 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Oct 31 14:08:15.863370 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Oct 31 14:08:15.863376 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Oct 31 14:08:15.863381 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Oct 31 14:08:15.863388 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x31] high edge lint[0x1]) Oct 31 14:08:15.863394 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Oct 31 14:08:15.863399 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Oct 31 14:08:15.863405 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Oct 31 14:08:15.863411 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Oct 31 14:08:15.863417 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Oct 31 14:08:15.863422 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Oct 31 14:08:15.863428 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Oct 31 14:08:15.863435 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Oct 31 14:08:15.863441 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Oct 31 14:08:15.863446 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Oct 31 14:08:15.863452 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Oct 31 14:08:15.863458 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Oct 31 14:08:15.863463 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Oct 31 14:08:15.863469 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Oct 31 14:08:15.863475 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Oct 31 14:08:15.863482 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Oct 31 14:08:15.863487 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Oct 31 14:08:15.863493 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Oct 31 14:08:15.863499 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Oct 31 14:08:15.863504 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Oct 31 14:08:15.863511 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Oct 31 14:08:15.863516 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Oct 31 14:08:15.863522 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Oct 31 14:08:15.863528 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Oct 31 14:08:15.863534 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Oct 31 14:08:15.863540 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Oct 31 14:08:15.863546 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Oct 31 14:08:15.863552 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Oct 31 14:08:15.863557 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Oct 31 14:08:15.863563 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Oct 31 14:08:15.863569 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Oct 31 14:08:15.863575 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Oct 31 14:08:15.863581 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Oct 31 14:08:15.863587 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Oct 31 14:08:15.863593 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Oct 31 14:08:15.863598 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Oct 31 14:08:15.863604 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Oct 31 14:08:15.863610 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Oct 31 14:08:15.863616 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Oct 31 14:08:15.863622 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Oct 31 14:08:15.863628 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Oct 31 14:08:15.863634 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Oct 31 14:08:15.863640 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Oct 31 14:08:15.863645 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Oct 31 14:08:15.863651 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Oct 31 14:08:15.863657 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Oct 31 14:08:15.863662 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Oct 31 14:08:15.863668 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Oct 31 14:08:15.863675 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Oct 31 14:08:15.863680 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Oct 31 14:08:15.863686 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Oct 31 14:08:15.863692 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Oct 31 14:08:15.863698 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Oct 31 14:08:15.863703 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Oct 31 14:08:15.863709 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Oct 31 14:08:15.863715 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Oct 31 14:08:15.863720 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Oct 31 14:08:15.863727 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Oct 31 14:08:15.863733 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Oct 31 14:08:15.863739 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Oct 31 14:08:15.863745 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Oct 31 14:08:15.863750 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Oct 31 14:08:15.863756 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Oct 31 14:08:15.863762 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Oct 31 14:08:15.863768 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Oct 31 14:08:15.863774 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Oct 31 14:08:15.863780 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Oct 31 14:08:15.863786 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Oct 31 14:08:15.863792 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Oct 31 14:08:15.863798 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Oct 31 14:08:15.863803 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Oct 31 14:08:15.863809 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Oct 31 14:08:15.863814 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Oct 31 14:08:15.863821 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Oct 31 14:08:15.863827 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Oct 31 14:08:15.863833 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Oct 31 14:08:15.863839 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Oct 31 14:08:15.863844 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Oct 31 14:08:15.863850 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Oct 31 14:08:15.863856 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Oct 31 14:08:15.863863 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 31 14:08:15.863870 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Oct 31 14:08:15.863875 kernel: TSC deadline timer available Oct 31 14:08:15.863881 kernel: CPU topo: Max. logical packages: 128 Oct 31 14:08:15.863887 kernel: CPU topo: Max. logical dies: 128 Oct 31 14:08:15.863893 kernel: CPU topo: Max. 
dies per package: 1 Oct 31 14:08:15.863899 kernel: CPU topo: Max. threads per core: 1 Oct 31 14:08:15.863905 kernel: CPU topo: Num. cores per package: 1 Oct 31 14:08:15.863911 kernel: CPU topo: Num. threads per package: 1 Oct 31 14:08:15.863917 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Oct 31 14:08:15.863923 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Oct 31 14:08:15.863929 kernel: Booting paravirtualized kernel on VMware hypervisor Oct 31 14:08:15.863935 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 31 14:08:15.863941 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Oct 31 14:08:15.863947 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Oct 31 14:08:15.863953 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Oct 31 14:08:15.863960 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Oct 31 14:08:15.863966 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Oct 31 14:08:15.863972 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Oct 31 14:08:15.863978 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Oct 31 14:08:15.863984 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Oct 31 14:08:15.863989 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Oct 31 14:08:15.863995 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Oct 31 14:08:15.864002 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Oct 31 14:08:15.864008 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Oct 31 14:08:15.864014 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Oct 31 14:08:15.864020 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Oct 31 14:08:15.864026 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Oct 31 14:08:15.864032 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Oct 31 14:08:15.864038 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Oct 31 14:08:15.864045 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Oct 31 14:08:15.864051 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Oct 31 14:08:15.864057 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7e4f6395c1f11b5d1e07a15155afadb91de20f1aac1cd9cff8fc1baca215a11a Oct 31 14:08:15.864064 kernel: random: crng init done Oct 31 14:08:15.864070 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Oct 31 14:08:15.864076 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Oct 31 14:08:15.864081 kernel: printk: log_buf_len min size: 262144 bytes Oct 31 14:08:15.864088 kernel: printk: log_buf_len: 1048576 bytes Oct 31 14:08:15.864103 kernel: printk: early log buf free: 245688(93%) Oct 31 14:08:15.864110 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 31 14:08:15.864116 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 31 14:08:15.864122 kernel: Fallback order for Node 0: 0 Oct 31 14:08:15.864128 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Oct 31 14:08:15.864133 kernel: Policy zone: DMA32 Oct 31 14:08:15.864141 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 31 14:08:15.864147 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Oct 31 14:08:15.864153 kernel: ftrace: allocating 40092 entries in 157 pages Oct 31 14:08:15.864159 kernel: ftrace: allocated 157 pages with 5 groups Oct 31 14:08:15.864165 kernel: Dynamic Preempt: voluntary Oct 31 14:08:15.864171 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 31 14:08:15.864177 kernel: rcu: RCU event tracing is enabled. Oct 31 14:08:15.864183 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Oct 31 14:08:15.864190 kernel: Trampoline variant of Tasks RCU enabled. Oct 31 14:08:15.864196 kernel: Rude variant of Tasks RCU enabled. Oct 31 14:08:15.864202 kernel: Tracing variant of Tasks RCU enabled. Oct 31 14:08:15.864207 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 31 14:08:15.864213 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Oct 31 14:08:15.864219 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 31 14:08:15.864225 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 31 14:08:15.864232 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 31 14:08:15.864238 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Oct 31 14:08:15.864244 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Oct 31 14:08:15.864250 kernel: Console: colour VGA+ 80x25 Oct 31 14:08:15.864256 kernel: printk: legacy console [tty0] enabled Oct 31 14:08:15.864262 kernel: printk: legacy console [ttyS0] enabled Oct 31 14:08:15.864268 kernel: ACPI: Core revision 20240827 Oct 31 14:08:15.864274 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Oct 31 14:08:15.864281 kernel: APIC: Switch to symmetric I/O mode setup Oct 31 14:08:15.864287 kernel: x2apic enabled Oct 31 14:08:15.864293 kernel: APIC: Switched APIC routing to: physical x2apic Oct 31 14:08:15.864299 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 31 14:08:15.864305 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 31 14:08:15.864311 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Oct 31 14:08:15.864317 kernel: Disabled fast string operations Oct 31 14:08:15.864324 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 31 14:08:15.864330 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Oct 31 14:08:15.864336 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 31 14:08:15.864342 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Oct 31 14:08:15.864348 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Oct 31 14:08:15.864354 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Oct 31 14:08:15.864360 kernel: RETBleed: Mitigation: Enhanced IBRS Oct 31 14:08:15.864367 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 31 14:08:15.864373 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 31 14:08:15.864380 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 31 14:08:15.864386 kernel: SRBDS: Unknown: Dependent on hypervisor status Oct 31 14:08:15.864392 kernel: GDS: Unknown: Dependent on hypervisor status Oct 31 14:08:15.864397 kernel: active return thunk: its_return_thunk Oct 31 14:08:15.864403 kernel: ITS: Mitigation: Aligned branch/return thunks Oct 31 14:08:15.864410 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 31 14:08:15.864416 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 31 14:08:15.864422 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 31 14:08:15.864429 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 31 14:08:15.864438 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 31 14:08:15.864445 kernel: Freeing SMP alternatives memory: 32K Oct 31 14:08:15.864451 kernel: pid_max: default: 131072 minimum: 1024 Oct 31 14:08:15.864458 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 31 14:08:15.864464 kernel: landlock: Up and running. Oct 31 14:08:15.864470 kernel: SELinux: Initializing. Oct 31 14:08:15.864477 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 31 14:08:15.864483 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 31 14:08:15.864489 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Oct 31 14:08:15.864495 kernel: Performance Events: Skylake events, core PMU driver. Oct 31 14:08:15.864502 kernel: core: CPUID marked event: 'cpu cycles' unavailable Oct 31 14:08:15.864508 kernel: core: CPUID marked event: 'instructions' unavailable Oct 31 14:08:15.864514 kernel: core: CPUID marked event: 'bus cycles' unavailable Oct 31 14:08:15.864520 kernel: core: CPUID marked event: 'cache references' unavailable Oct 31 14:08:15.864526 kernel: core: CPUID marked event: 'cache misses' unavailable Oct 31 14:08:15.864532 kernel: core: CPUID marked event: 'branch instructions' unavailable Oct 31 14:08:15.864543 kernel: core: CPUID marked event: 'branch misses' unavailable Oct 31 14:08:15.864557 kernel: ... version: 1 Oct 31 14:08:15.864563 kernel: ... bit width: 48 Oct 31 14:08:15.864569 kernel: ... generic registers: 4 Oct 31 14:08:15.864575 kernel: ... value mask: 0000ffffffffffff Oct 31 14:08:15.864581 kernel: ... max period: 000000007fffffff Oct 31 14:08:15.864587 kernel: ... 
fixed-purpose events: 0 Oct 31 14:08:15.864593 kernel: ... event mask: 000000000000000f Oct 31 14:08:15.864601 kernel: signal: max sigframe size: 1776 Oct 31 14:08:15.864610 kernel: rcu: Hierarchical SRCU implementation. Oct 31 14:08:15.864618 kernel: rcu: Max phase no-delay instances is 400. Oct 31 14:08:15.864624 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Oct 31 14:08:15.864630 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 31 14:08:15.864636 kernel: smp: Bringing up secondary CPUs ... Oct 31 14:08:15.864642 kernel: smpboot: x86: Booting SMP configuration: Oct 31 14:08:15.864650 kernel: .... node #0, CPUs: #1 Oct 31 14:08:15.864656 kernel: Disabled fast string operations Oct 31 14:08:15.864661 kernel: smp: Brought up 1 node, 2 CPUs Oct 31 14:08:15.864667 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Oct 31 14:08:15.864674 kernel: Memory: 1942652K/2096628K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15348K init, 2696K bss, 142592K reserved, 0K cma-reserved) Oct 31 14:08:15.864680 kernel: devtmpfs: initialized Oct 31 14:08:15.864686 kernel: x86/mm: Memory block size: 128MB Oct 31 14:08:15.864692 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Oct 31 14:08:15.864699 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 31 14:08:15.864705 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Oct 31 14:08:15.864711 kernel: pinctrl core: initialized pinctrl subsystem Oct 31 14:08:15.864717 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 31 14:08:15.864723 kernel: audit: initializing netlink subsys (disabled) Oct 31 14:08:15.864729 kernel: audit: type=2000 audit(1761919692.285:1): state=initialized audit_enabled=0 res=1 Oct 31 14:08:15.864735 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 31 14:08:15.864742 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 31 14:08:15.864748 kernel: cpuidle: using governor menu Oct 31 14:08:15.864754 kernel: Simple Boot Flag at 0x36 set to 0x80 Oct 31 14:08:15.864760 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 31 14:08:15.864766 kernel: dca service started, version 1.12.1 Oct 31 14:08:15.864773 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] Oct 31 14:08:15.864786 kernel: PCI: Using configuration type 1 for base access Oct 31 14:08:15.864794 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 31 14:08:15.864801 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 31 14:08:15.864807 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 31 14:08:15.864813 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 31 14:08:15.864820 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 31 14:08:15.864826 kernel: ACPI: Added _OSI(Module Device) Oct 31 14:08:15.864832 kernel: ACPI: Added _OSI(Processor Device) Oct 31 14:08:15.864840 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 31 14:08:15.864846 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 31 14:08:15.864852 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Oct 31 14:08:15.864859 kernel: ACPI: Interpreter enabled Oct 31 14:08:15.864865 kernel: ACPI: PM: (supports S0 S1 S5) Oct 31 14:08:15.864871 kernel: ACPI: Using IOAPIC for interrupt routing Oct 31 14:08:15.864878 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 31 14:08:15.864885 kernel: PCI: Using E820 reservations for host bridge windows Oct 31 14:08:15.864891 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Oct 31 14:08:15.864898 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Oct 31 14:08:15.864999 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 31 14:08:15.865069 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Oct 31 14:08:15.866189 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Oct 31 14:08:15.866205 kernel: PCI host bridge to bus 0000:00 Oct 31 14:08:15.866281 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 31 14:08:15.866345 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Oct 31 14:08:15.866407 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 31 14:08:15.866466 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 31 14:08:15.866526 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Oct 31 14:08:15.866594 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Oct 31 14:08:15.866674 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint Oct 31 14:08:15.866747 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge Oct 31 14:08:15.866815 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 31 14:08:15.866889 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Oct 31 14:08:15.866963 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint Oct 31 14:08:15.867034 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] Oct 31 14:08:15.867108 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Oct 31 14:08:15.867180 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk Oct 31 14:08:15.867246 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Oct 31 14:08:15.867312 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk Oct 31 14:08:15.867389 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Oct 31 14:08:15.867457 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Oct 31 14:08:15.867522 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Oct 31 14:08:15.867604 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 
0x088000 conventional PCI endpoint Oct 31 14:08:15.867671 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] Oct 31 14:08:15.867737 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] Oct 31 14:08:15.867807 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint Oct 31 14:08:15.867874 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] Oct 31 14:08:15.867943 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref] Oct 31 14:08:15.868008 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] Oct 31 14:08:15.868074 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] Oct 31 14:08:15.868159 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 31 14:08:15.868239 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge Oct 31 14:08:15.868309 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Oct 31 14:08:15.868381 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 31 14:08:15.868448 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 31 14:08:15.868515 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 31 14:08:15.868586 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.868654 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 31 14:08:15.868721 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 31 14:08:15.868789 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 31 14:08:15.868856 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.868927 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.868995 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 31 14:08:15.869061 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 31 14:08:15.869141 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 31 14:08:15.869219 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 31 14:08:15.869289 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.869369 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.869436 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 31 14:08:15.869506 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 31 14:08:15.869572 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 31 14:08:15.869639 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 31 14:08:15.869705 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.869779 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.869845 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 31 14:08:15.869915 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 31 14:08:15.869981 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 31 14:08:15.870048 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.870133 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.870202 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 31 14:08:15.870274 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 31 14:08:15.870347 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 31 14:08:15.870413 
kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.870484 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.870550 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 31 14:08:15.870624 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 31 14:08:15.870702 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 31 14:08:15.870776 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.870847 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.870914 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 31 14:08:15.870980 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 31 14:08:15.871045 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 31 14:08:15.871122 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.871341 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.871412 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 31 14:08:15.871480 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 31 14:08:15.871547 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 31 14:08:15.871613 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.871685 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.871756 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 31 14:08:15.871823 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 31 14:08:15.871890 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 31 14:08:15.871955 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.872027 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.872100 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 31 14:08:15.872173 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 31 14:08:15.872238 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 31 14:08:15.872304 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 31 14:08:15.872369 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.872440 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.872506 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 31 14:08:15.872581 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 31 14:08:15.872649 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 31 14:08:15.872715 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 31 14:08:15.872780 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.872850 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.872919 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 31 14:08:15.872984 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 31 14:08:15.873049 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 31 14:08:15.873660 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.873752 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.874005 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 31 14:08:15.874079 kernel: pci 
0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 31 14:08:15.874163 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 31 14:08:15.874231 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.874304 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.874371 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 31 14:08:15.874437 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 31 14:08:15.874505 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 31 14:08:15.874570 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.874640 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.874707 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 31 14:08:15.874772 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 31 14:08:15.874838 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 31 14:08:15.874907 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.874977 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.875044 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 31 14:08:15.875127 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 31 14:08:15.875197 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 14:08:15.875262 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.875337 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.875403 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 31 14:08:15.875468 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 31 14:08:15.875532 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 31 14:08:15.875598 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 14:08:15.875663 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.875736 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.875802 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 31 14:08:15.875867 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 31 14:08:15.875935 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 31 14:08:15.876000 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 14:08:15.876065 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.876144 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.876211 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 31 14:08:15.876280 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 31 14:08:15.876348 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 31 14:08:15.876413 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 14:08:15.876477 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.876556 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.876625 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 31 14:08:15.876691 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 31 14:08:15.876759 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 
14:08:15.876825 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.876894 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.876959 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 31 14:08:15.877024 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 31 14:08:15.877090 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 14:08:15.877168 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.877240 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.877307 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 31 14:08:15.877373 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 31 14:08:15.877438 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 14:08:15.877504 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.877576 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.877642 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 31 14:08:15.877708 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 31 14:08:15.877772 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 14:08:15.877837 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.877908 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.877977 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 31 14:08:15.878042 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 31 14:08:15.878641 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 14:08:15.878716 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.879002 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.879080 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 31 14:08:15.879174 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 31 14:08:15.879241 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 31 14:08:15.879309 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 14:08:15.879375 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.879449 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.879519 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 31 14:08:15.879591 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 31 14:08:15.879658 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 31 14:08:15.879724 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 14:08:15.879788 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.879858 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.879928 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 31 14:08:15.879993 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 31 14:08:15.880058 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 14:08:15.880132 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.880201 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.880267 kernel: pci 0000:00:18.3: PCI bridge to [bus 
1e] Oct 31 14:08:15.880336 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 31 14:08:15.880401 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 31 14:08:15.880466 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.880535 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.880600 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 31 14:08:15.880665 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 31 14:08:15.880733 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 31 14:08:15.880797 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.880869 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.880934 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 31 14:08:15.880999 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 31 14:08:15.881064 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 14:08:15.881149 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.881220 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.881286 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 31 14:08:15.881351 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 31 14:08:15.881415 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 14:08:15.881480 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.881556 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 31 14:08:15.881622 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 31 14:08:15.881688 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 31 14:08:15.881753 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 14:08:15.881818 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.881885 kernel: pci_bus 0000:01: extended config space not accessible Oct 31 14:08:15.881954 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 31 14:08:15.882021 kernel: pci_bus 0000:02: extended config space not accessible Oct 31 14:08:15.882030 kernel: acpiphp: Slot [32] registered Oct 31 14:08:15.882037 kernel: acpiphp: Slot [33] registered Oct 31 14:08:15.882044 kernel: acpiphp: Slot [34] registered Oct 31 14:08:15.882050 kernel: acpiphp: Slot [35] registered Oct 31 14:08:15.882059 kernel: acpiphp: Slot [36] registered Oct 31 14:08:15.882065 kernel: acpiphp: Slot [37] registered Oct 31 14:08:15.882071 kernel: acpiphp: Slot [38] registered Oct 31 14:08:15.882078 kernel: acpiphp: Slot [39] registered Oct 31 14:08:15.882084 kernel: acpiphp: Slot [40] registered Oct 31 14:08:15.882091 kernel: acpiphp: Slot [41] registered Oct 31 14:08:15.882105 kernel: acpiphp: Slot [42] registered Oct 31 14:08:15.882111 kernel: acpiphp: Slot [43] registered Oct 31 14:08:15.882119 kernel: acpiphp: Slot [44] registered Oct 31 14:08:15.882125 kernel: acpiphp: Slot [45] registered Oct 31 14:08:15.882132 kernel: acpiphp: Slot [46] registered Oct 31 14:08:15.882138 kernel: acpiphp: Slot [47] registered Oct 31 14:08:15.882144 kernel: acpiphp: Slot [48] registered Oct 31 14:08:15.882151 kernel: acpiphp: Slot [49] registered Oct 31 14:08:15.882157 kernel: acpiphp: Slot [50] registered Oct 31 14:08:15.882164 kernel: acpiphp: Slot [51] registered Oct 31 
14:08:15.882171 kernel: acpiphp: Slot [52] registered Oct 31 14:08:15.882177 kernel: acpiphp: Slot [53] registered Oct 31 14:08:15.882184 kernel: acpiphp: Slot [54] registered Oct 31 14:08:15.882190 kernel: acpiphp: Slot [55] registered Oct 31 14:08:15.882196 kernel: acpiphp: Slot [56] registered Oct 31 14:08:15.882203 kernel: acpiphp: Slot [57] registered Oct 31 14:08:15.882209 kernel: acpiphp: Slot [58] registered Oct 31 14:08:15.882216 kernel: acpiphp: Slot [59] registered Oct 31 14:08:15.882223 kernel: acpiphp: Slot [60] registered Oct 31 14:08:15.882229 kernel: acpiphp: Slot [61] registered Oct 31 14:08:15.882235 kernel: acpiphp: Slot [62] registered Oct 31 14:08:15.882242 kernel: acpiphp: Slot [63] registered Oct 31 14:08:15.882309 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Oct 31 14:08:15.882376 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Oct 31 14:08:15.882444 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Oct 31 14:08:15.882509 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Oct 31 14:08:15.882580 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Oct 31 14:08:15.882645 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Oct 31 14:08:15.882719 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Oct 31 14:08:15.882787 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Oct 31 14:08:15.882857 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Oct 31 14:08:15.882924 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Oct 31 14:08:15.882991 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Oct 31 14:08:15.883057 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Oct 31 14:08:15.883139 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 31 14:08:15.883208 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 31 14:08:15.883277 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 31 14:08:15.883343 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 31 14:08:15.883409 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 31 14:08:15.883475 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 31 14:08:15.883541 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 31 14:08:15.883608 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 31 14:08:15.883682 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Oct 31 14:08:15.884666 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Oct 31 14:08:15.884740 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Oct 31 14:08:15.884809 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Oct 31 14:08:15.884876 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Oct 31 14:08:15.884945 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Oct 31 14:08:15.885016 kernel: pci 0000:0b:00.0: supports D1 D2 Oct 31 14:08:15.885084 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 31 14:08:15.885162 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 31 14:08:15.885230 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 31 14:08:15.885298 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 31 14:08:15.885366 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 31 14:08:15.885436 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 31 14:08:15.885502 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 31 14:08:15.885572 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 31 14:08:15.885642 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 31 14:08:15.885709 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 31 14:08:15.885776 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 31 14:08:15.885845 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 31 14:08:15.885911 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 31 14:08:15.885977 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 31 14:08:15.886043 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 31 14:08:15.886123 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 31 14:08:15.886194 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 31 14:08:15.886264 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 31 14:08:15.886331 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 31 14:08:15.886397 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 31 14:08:15.886464 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 31 14:08:15.886532 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 31 14:08:15.886604 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 31 14:08:15.886674 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 31 14:08:15.886740 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 31 14:08:15.886806 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 31 14:08:15.886816 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Oct 31 14:08:15.886823 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Oct 31 14:08:15.886830 kernel: ACPI: PCI: Interrupt link LNKB disabled Oct 31 14:08:15.886838 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 31 14:08:15.886844 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Oct 31 14:08:15.886851 kernel: iommu: Default domain type: Translated Oct 31 14:08:15.886857 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 31 14:08:15.886864 kernel: PCI: Using ACPI for IRQ routing Oct 31 14:08:15.886870 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 31 14:08:15.886877 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Oct 31 14:08:15.886883 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Oct 31 14:08:15.886949 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Oct 31 14:08:15.887014 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Oct 31 14:08:15.887078 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 31 14:08:15.887088 kernel: vgaarb: loaded Oct 31 14:08:15.887104 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Oct 31 14:08:15.887112 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Oct 31 14:08:15.887118 kernel: clocksource: Switched to clocksource tsc-early Oct 31 14:08:15.887127 kernel: VFS: Disk quotas dquot_6.6.0 Oct 31 14:08:15.887134 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 31 14:08:15.887141 kernel: pnp: PnP ACPI init Oct 31 14:08:15.887217 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Oct 31 14:08:15.887281 kernel: system 
00:00: [io 0x1040-0x104f] has been reserved Oct 31 14:08:15.887342 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Oct 31 14:08:15.887414 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Oct 31 14:08:15.887479 kernel: pnp 00:06: [dma 2] Oct 31 14:08:15.887545 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Oct 31 14:08:15.887607 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Oct 31 14:08:15.887667 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Oct 31 14:08:15.887678 kernel: pnp: PnP ACPI: found 8 devices Oct 31 14:08:15.887685 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 31 14:08:15.887692 kernel: NET: Registered PF_INET protocol family Oct 31 14:08:15.887698 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 31 14:08:15.887705 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 31 14:08:15.887711 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 31 14:08:15.887718 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 31 14:08:15.887726 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 31 14:08:15.887733 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 31 14:08:15.887739 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 31 14:08:15.887745 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 31 14:08:15.887752 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 31 14:08:15.887759 kernel: NET: Registered PF_XDP protocol family Oct 31 14:08:15.887826 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Oct 31 14:08:15.887918 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 31 14:08:15.888003 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 31 14:08:15.888081 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 31 14:08:15.888176 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 31 14:08:15.888263 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Oct 31 14:08:15.888345 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Oct 31 14:08:15.888433 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Oct 31 14:08:15.888518 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Oct 31 14:08:15.888592 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Oct 31 14:08:15.888659 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Oct 31 14:08:15.888724 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Oct 31 14:08:15.888789 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Oct 31 14:08:15.888858 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Oct 31 14:08:15.889813 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Oct 31 14:08:15.889887 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Oct 31 
14:08:15.889955 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Oct 31 14:08:15.890022 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Oct 31 14:08:15.890091 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Oct 31 14:08:15.890174 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Oct 31 14:08:15.890244 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Oct 31 14:08:15.890310 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Oct 31 14:08:15.890377 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Oct 31 14:08:15.890444 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Oct 31 14:08:15.890511 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Oct 31 14:08:15.890577 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.890646 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.894555 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.894642 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.894714 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.894783 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.894851 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.894922 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.894989 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.895056 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.895138 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.895206 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.895272 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.895338 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.895407 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.895473 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.895540 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.895605 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.895673 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.895739 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.895806 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.895872 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.895937 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.896002 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.896069 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.896145 kernel: pci 0000:00:17.5: 
bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.896212 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.896291 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.896360 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.896426 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.896491 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.896562 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.896635 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.896704 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.896772 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.896837 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.896903 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.896969 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.897035 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.897115 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.897185 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.897251 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.897316 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.897381 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.897447 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.897513 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.897591 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.897677 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.897743 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.897809 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.897875 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.897940 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.898005 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.898071 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.898148 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.898215 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.898281 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.898348 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.898414 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.898480 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.898546 kernel: pci 0000:00:17.4: bridge window [io size 
0x1000]: can't assign; no space Oct 31 14:08:15.898616 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.898683 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.898749 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.900715 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.900788 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.900859 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.900928 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.901029 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.901125 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.901222 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.901292 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.901360 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.901427 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.901498 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.901564 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.901632 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.901697 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.901763 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.901829 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.901896 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.901965 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.902031 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Oct 31 14:08:15.903351 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Oct 31 14:08:15.903779 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 31 14:08:15.903854 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Oct 31 14:08:15.903922 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 31 14:08:15.903989 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 31 14:08:15.904060 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 31 14:08:15.904143 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Oct 31 14:08:15.904212 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 31 14:08:15.904281 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 31 14:08:15.904348 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 31 14:08:15.904416 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Oct 31 14:08:15.904489 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 31 14:08:15.904565 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 31 14:08:15.904632 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 31 14:08:15.905243 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit 
pref] Oct 31 14:08:15.905319 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 31 14:08:15.905389 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 31 14:08:15.905456 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 31 14:08:15.906911 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 31 14:08:15.906999 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 31 14:08:15.907070 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 31 14:08:15.907256 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 31 14:08:15.907327 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 31 14:08:15.907397 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 31 14:08:15.907464 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 31 14:08:15.907534 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 31 14:08:15.907600 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 31 14:08:15.907667 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 31 14:08:15.907734 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 31 14:08:15.907801 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 31 14:08:15.907867 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 31 14:08:15.907937 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 31 14:08:15.908002 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 31 14:08:15.908068 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 31 14:08:15.908147 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Oct 31 14:08:15.908216 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 31 14:08:15.908281 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 31 14:08:15.908349 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 31 14:08:15.908415 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Oct 31 14:08:15.908491 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 31 14:08:15.908564 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 31 14:08:15.908632 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 31 14:08:15.908700 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 31 14:08:15.908769 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 31 14:08:15.908839 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 31 14:08:15.908909 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 31 14:08:15.908974 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 31 14:08:15.909041 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 31 14:08:15.909123 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 31 14:08:15.909189 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 31 14:08:15.909256 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 31 14:08:15.909322 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 31 14:08:15.909391 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 31 14:08:15.909459 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 31 14:08:15.909525 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 31 14:08:15.909596 kernel: pci 
0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 31 14:08:15.909663 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 31 14:08:15.909729 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 31 14:08:15.909797 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 31 14:08:15.909865 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 31 14:08:15.909947 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 31 14:08:15.910017 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 14:08:15.910085 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 31 14:08:15.910172 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 31 14:08:15.910243 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 31 14:08:15.910309 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 14:08:15.910377 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 31 14:08:15.910443 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 31 14:08:15.910509 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 31 14:08:15.910575 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 14:08:15.910643 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 31 14:08:15.910712 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 31 14:08:15.910778 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 31 14:08:15.910843 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 14:08:15.910910 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 31 14:08:15.910976 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 31 14:08:15.911041 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 14:08:15.911124 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 31 14:08:15.911195 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 31 14:08:15.911261 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 14:08:15.911330 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 31 14:08:15.911395 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 31 14:08:15.911461 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 14:08:15.911529 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 31 14:08:15.911598 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 31 14:08:15.911663 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 14:08:15.911730 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 31 14:08:15.911796 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 31 14:08:15.911865 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 14:08:15.911934 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 31 14:08:15.912014 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 31 14:08:15.912088 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 31 14:08:15.913198 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 14:08:15.913272 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 31 14:08:15.913340 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 31 14:08:15.913408 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] 
Oct 31 14:08:15.913474 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 14:08:15.913545 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 31 14:08:15.913610 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 31 14:08:15.913676 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 14:08:15.913744 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 31 14:08:15.913810 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 31 14:08:15.913875 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 31 14:08:15.913945 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 31 14:08:15.914010 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 31 14:08:15.914076 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 31 14:08:15.915177 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 31 14:08:15.915253 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 31 14:08:15.915322 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 14:08:15.915396 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 31 14:08:15.915463 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 31 14:08:15.915529 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 14:08:15.915597 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 31 14:08:15.915664 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 31 14:08:15.915730 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 14:08:15.915798 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Oct 31 14:08:15.915859 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 31 14:08:15.915918 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 31 14:08:15.915977 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Oct 31 14:08:15.916035 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Oct 31 14:08:15.916140 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Oct 31 14:08:15.916207 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Oct 31 14:08:15.916268 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 31 14:08:15.916340 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Oct 31 14:08:15.916402 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 31 14:08:15.916463 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 31 14:08:15.916524 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Oct 31 14:08:15.916593 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Oct 31 14:08:15.916661 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Oct 31 14:08:15.916722 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Oct 31 14:08:15.916783 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Oct 31 14:08:15.916849 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Oct 31 14:08:15.916910 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Oct 31 14:08:15.916973 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Oct 31 14:08:15.917038 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Oct 31 14:08:15.917111 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Oct 
31 14:08:15.917735 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Oct 31 14:08:15.917817 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Oct 31 14:08:15.917883 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Oct 31 14:08:15.917956 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Oct 31 14:08:15.918033 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 31 14:08:15.918762 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Oct 31 14:08:15.918858 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Oct 31 14:08:15.918926 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Oct 31 14:08:15.918993 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Oct 31 14:08:15.919058 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Oct 31 14:08:15.919133 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Oct 31 14:08:15.919208 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Oct 31 14:08:15.919278 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Oct 31 14:08:15.919365 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Oct 31 14:08:15.919432 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Oct 31 14:08:15.919602 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Oct 31 14:08:15.919671 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Oct 31 14:08:15.919740 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Oct 31 14:08:15.919805 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Oct 31 14:08:15.919867 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Oct 31 14:08:15.919933 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Oct 31 14:08:15.919996 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 31 14:08:15.920065 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Oct 31 14:08:15.920149 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 31 14:08:15.920222 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Oct 31 14:08:15.920284 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Oct 31 14:08:15.920350 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Oct 31 14:08:15.920411 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Oct 31 14:08:15.920477 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Oct 31 14:08:15.920542 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 14:08:15.920611 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Oct 31 14:08:15.920672 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Oct 31 14:08:15.920733 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 14:08:15.920798 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Oct 31 14:08:15.920862 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Oct 31 14:08:15.920923 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 14:08:15.920988 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Oct 31 14:08:15.921048 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Oct 31 14:08:15.921124 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 14:08:15.921193 kernel: 
pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Oct 31 14:08:15.921258 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 14:08:15.921325 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Oct 31 14:08:15.921387 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 14:08:15.921547 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Oct 31 14:08:15.921611 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 14:08:15.921680 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Oct 31 14:08:15.921742 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 14:08:15.921808 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Oct 31 14:08:15.921873 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 14:08:15.921938 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Oct 31 14:08:15.922001 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Oct 31 14:08:15.922061 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 14:08:15.922139 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Oct 31 14:08:15.922201 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Oct 31 14:08:15.922262 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 14:08:15.922326 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Oct 31 14:08:15.922391 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 14:08:15.922456 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Oct 31 14:08:15.922517 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 31 14:08:15.922583 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Oct 31 14:08:15.922644 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Oct 31 14:08:15.922713 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Oct 31 14:08:15.922774 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 14:08:15.922839 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Oct 31 14:08:15.922901 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 14:08:15.922966 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Oct 31 14:08:15.923027 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 14:08:15.923111 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 31 14:08:15.923122 kernel: PCI: CLS 32 bytes, default 64 Oct 31 14:08:15.923129 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 31 14:08:15.923136 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 31 14:08:15.923143 kernel: clocksource: Switched to clocksource tsc Oct 31 14:08:15.923149 kernel: Initialise system trusted keyrings Oct 31 14:08:15.923158 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 31 14:08:15.923165 kernel: Key type asymmetric registered Oct 31 14:08:15.923171 kernel: Asymmetric key parser 'x509' registered Oct 31 14:08:15.923178 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 31 14:08:15.923185 kernel: io scheduler mq-deadline registered Oct 31 14:08:15.923191 kernel: io scheduler kyber registered Oct 31 14:08:15.923198 kernel: io scheduler bfq 
registered Oct 31 14:08:15.923270 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Oct 31 14:08:15.923338 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.923407 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Oct 31 14:08:15.923475 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.923547 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Oct 31 14:08:15.923615 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.923684 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Oct 31 14:08:15.923751 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.923818 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Oct 31 14:08:15.923884 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.923950 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Oct 31 14:08:15.924017 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.924086 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Oct 31 14:08:15.924180 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.924247 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Oct 31 14:08:15.924313 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.924381 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Oct 31 14:08:15.924447 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.924517 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Oct 31 14:08:15.924597 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.924666 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Oct 31 14:08:15.924732 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.924800 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Oct 31 14:08:15.924867 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.924945 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Oct 31 14:08:15.925014 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.925084 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Oct 31 14:08:15.925167 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ 
Oct 31 14:08:15.925236 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Oct 31 14:08:15.925306 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.925378 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Oct 31 14:08:15.925444 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.925512 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Oct 31 14:08:15.925579 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.925647 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Oct 31 14:08:15.925714 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.925784 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Oct 31 14:08:15.925851 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.925918 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Oct 31 14:08:15.925984 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.926051 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Oct 31 14:08:15.926132 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.926206 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Oct 31 14:08:15.926272 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.926338 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Oct 31 14:08:15.926406 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.926473 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Oct 31 14:08:15.926546 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.926618 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Oct 31 14:08:15.926684 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.926751 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Oct 31 14:08:15.926819 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.926885 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Oct 31 14:08:15.926951 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.927022 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Oct 31 14:08:15.927089 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 
14:08:15.927166 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Oct 31 14:08:15.927314 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.927386 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Oct 31 14:08:15.927462 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.927531 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Oct 31 14:08:15.927603 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.927671 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Oct 31 14:08:15.927738 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 14:08:15.927750 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 31 14:08:15.927758 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 31 14:08:15.927766 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 31 14:08:15.927773 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Oct 31 14:08:15.927780 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 31 14:08:15.927787 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 31 14:08:15.927857 kernel: rtc_cmos 00:01: registered as rtc0 Oct 31 14:08:15.929358 kernel: rtc_cmos 00:01: setting system clock to 2025-10-31T14:08:14 UTC (1761919694) Oct 31 14:08:15.929371 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 31 14:08:15.929438 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Oct 31 14:08:15.929448 kernel: intel_pstate: CPU model not supported Oct 31 14:08:15.929455 kernel: NET: Registered PF_INET6 protocol family Oct 31 14:08:15.929462 kernel: Segment Routing with IPv6 Oct 31 14:08:15.929469 kernel: In-situ OAM (IOAM) with IPv6 Oct 31 14:08:15.929476 kernel: NET: Registered PF_PACKET protocol family Oct 31 14:08:15.929483 kernel: Key type dns_resolver registered Oct 31 14:08:15.929491 kernel: IPI shorthand broadcast: enabled Oct 31 14:08:15.929498 kernel: sched_clock: Marking stable (1820003511, 172309391)->(2005789604, -13476702) Oct 31 14:08:15.929505 kernel: registered taskstats version 1 Oct 31 14:08:15.929512 kernel: Loading compiled-in X.509 certificates Oct 31 14:08:15.929519 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: d5b1c22885a28a952e9fe2b5fe942003d6c5c8b4' Oct 31 14:08:15.929526 kernel: Demotion targets for Node 0: null Oct 31 14:08:15.929533 kernel: Key type .fscrypt registered Oct 31 14:08:15.929545 kernel: Key type fscrypt-provisioning registered Oct 31 14:08:15.929553 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 31 14:08:15.929560 kernel: ima: Allocated hash algorithm: sha1 Oct 31 14:08:15.929567 kernel: ima: No architecture policies found Oct 31 14:08:15.929573 kernel: clk: Disabling unused clocks Oct 31 14:08:15.929580 kernel: Freeing unused kernel image (initmem) memory: 15348K Oct 31 14:08:15.929587 kernel: Write protecting the kernel read-only data: 45056k Oct 31 14:08:15.929595 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Oct 31 14:08:15.929602 kernel: Run /init as init process Oct 31 14:08:15.929609 kernel: with arguments: Oct 31 14:08:15.929616 kernel: /init Oct 31 14:08:15.929622 kernel: with environment: Oct 31 14:08:15.929629 kernel: HOME=/ Oct 31 14:08:15.929635 kernel: TERM=linux Oct 31 14:08:15.929643 kernel: SCSI subsystem initialized Oct 31 14:08:15.929650 kernel: VMware PVSCSI driver - version 1.0.7.0-k Oct 31 14:08:15.929657 kernel: vmw_pvscsi: using 64bit dma Oct 31 14:08:15.929663 kernel: vmw_pvscsi: max_id: 16 Oct 31 14:08:15.929670 kernel: vmw_pvscsi: setting ring_pages to 8 Oct 31 14:08:15.929677 kernel: vmw_pvscsi: enabling reqCallThreshold Oct 31 14:08:15.929684 kernel: vmw_pvscsi: driver-based request coalescing enabled Oct 31 14:08:15.929690 kernel: vmw_pvscsi: using MSI-X Oct 31 14:08:15.929771 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Oct 31 14:08:15.929846 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Oct 31 14:08:15.929927 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Oct 31 14:08:15.930001 kernel: sd 0:0:0:0: [sda] 25804800 512-byte logical blocks: (13.2 GB/12.3 GiB) Oct 31 14:08:15.930073 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 31 14:08:15.930157 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Oct 31 14:08:15.930228 kernel: sd 0:0:0:0: [sda] Cache data unavailable Oct 31 14:08:15.930299 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Oct 31 14:08:15.930309 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 31 14:08:15.930378 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 31 14:08:15.930388 kernel: libata version 3.00 loaded. Oct 31 14:08:15.930456 kernel: ata_piix 0000:00:07.1: version 2.13 Oct 31 14:08:15.930531 kernel: scsi host1: ata_piix Oct 31 14:08:15.930605 kernel: scsi host2: ata_piix Oct 31 14:08:15.930615 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Oct 31 14:08:15.930622 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Oct 31 14:08:15.930629 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Oct 31 14:08:15.930722 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Oct 31 14:08:15.930736 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 31 14:08:15.930743 kernel: device-mapper: uevent: version 1.0.3 Oct 31 14:08:15.930750 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 31 14:08:15.930821 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Oct 31 14:08:15.930831 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 31 14:08:15.930838 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 31 14:08:15.930909 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 31 14:08:15.930918 kernel: raid6: avx2x4 gen() 45208 MB/s Oct 31 14:08:15.930925 kernel: raid6: avx2x2 gen() 52129 MB/s Oct 31 14:08:15.930932 kernel: raid6: avx2x1 gen() 43072 MB/s Oct 31 14:08:15.930939 kernel: raid6: using algorithm avx2x2 gen() 52129 MB/s Oct 31 14:08:15.930946 kernel: raid6: .... xor() 29648 MB/s, rmw enabled Oct 31 14:08:15.930953 kernel: raid6: using avx2x2 recovery algorithm Oct 31 14:08:15.930962 kernel: xor: automatically using best checksumming function avx Oct 31 14:08:15.930968 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 31 14:08:15.930975 kernel: BTRFS: device fsid 5e8ba8f1-db13-4075-a8cb-1b945120d0ee devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (196) Oct 31 14:08:15.930982 kernel: BTRFS info (device dm-0): first mount of filesystem 5e8ba8f1-db13-4075-a8cb-1b945120d0ee Oct 31 14:08:15.930989 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 31 14:08:15.930996 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 31 14:08:15.931003 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 31 14:08:15.931011 kernel: BTRFS info (device dm-0): enabling free space tree Oct 31 14:08:15.931019 kernel: loop: module loaded Oct 31 14:08:15.931026 kernel: loop0: detected capacity change from 0 to 100128 Oct 31 14:08:15.931033 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 31 14:08:15.931040 systemd[1]: Successfully made /usr/ read-only. Oct 31 14:08:15.931049 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 31 14:08:15.931059 systemd[1]: Detected virtualization vmware. Oct 31 14:08:15.931065 systemd[1]: Detected architecture x86-64. Oct 31 14:08:15.931072 systemd[1]: Running in initrd. Oct 31 14:08:15.931079 systemd[1]: No hostname configured, using default hostname. Oct 31 14:08:15.931086 systemd[1]: Hostname set to . Oct 31 14:08:15.931110 systemd[1]: Initializing machine ID from random generator. Oct 31 14:08:15.931120 systemd[1]: Queued start job for default target initrd.target. Oct 31 14:08:15.931130 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 31 14:08:15.931137 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 14:08:15.931144 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 14:08:15.931152 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 31 14:08:15.931160 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Oct 31 14:08:15.931167 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 31 14:08:15.931175 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 31 14:08:15.931183 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 14:08:15.931190 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 14:08:15.931197 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 31 14:08:15.931204 systemd[1]: Reached target paths.target - Path Units. Oct 31 14:08:15.931211 systemd[1]: Reached target slices.target - Slice Units. Oct 31 14:08:15.931219 systemd[1]: Reached target swap.target - Swaps. Oct 31 14:08:15.931227 systemd[1]: Reached target timers.target - Timer Units. Oct 31 14:08:15.931233 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 14:08:15.931240 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 14:08:15.931248 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 31 14:08:15.931255 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 31 14:08:15.931262 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 14:08:15.931270 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 14:08:15.931278 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 14:08:15.931285 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 14:08:15.931292 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Oct 31 14:08:15.931299 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 31 14:08:15.931306 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 14:08:15.931314 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 31 14:08:15.931323 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 31 14:08:15.931330 systemd[1]: Starting systemd-fsck-usr.service... Oct 31 14:08:15.931337 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 14:08:15.931344 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 14:08:15.931351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 14:08:15.931360 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 31 14:08:15.931367 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 14:08:15.931375 systemd[1]: Finished systemd-fsck-usr.service. Oct 31 14:08:15.931397 systemd-journald[332]: Collecting audit messages is disabled. Oct 31 14:08:15.931417 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 31 14:08:15.931425 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 31 14:08:15.931432 kernel: Bridge firewalling registered Oct 31 14:08:15.931439 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 14:08:15.931448 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Oct 31 14:08:15.931455 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 14:08:15.931462 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 14:08:15.931469 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 14:08:15.931477 systemd-journald[332]: Journal started Oct 31 14:08:15.931492 systemd-journald[332]: Runtime Journal (/run/log/journal/6c51ee8beb5d413783e4395afa5f84ec) is 4.8M, max 38.4M, 33.6M free. Oct 31 14:08:15.905525 systemd-modules-load[334]: Inserted module 'br_netfilter' Oct 31 14:08:15.933114 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 14:08:15.935171 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 14:08:15.942064 systemd-tmpfiles[356]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 31 14:08:15.945150 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 14:08:15.945537 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 14:08:15.952237 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 31 14:08:15.952492 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 14:08:15.954164 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 14:08:15.973134 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 14:08:15.975179 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 31 14:08:16.012640 dracut-cmdline[378]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7e4f6395c1f11b5d1e07a15155afadb91de20f1aac1cd9cff8fc1baca215a11a Oct 31 14:08:16.045830 systemd-resolved[362]: Positive Trust Anchors: Oct 31 14:08:16.045846 systemd-resolved[362]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 14:08:16.045848 systemd-resolved[362]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 31 14:08:16.045870 systemd-resolved[362]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 14:08:16.062903 systemd-resolved[362]: Defaulting to hostname 'linux'. Oct 31 14:08:16.065854 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 14:08:16.066206 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 14:08:16.070381 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. 
Oct 31 14:08:16.121113 kernel: Loading iSCSI transport class v2.0-870. Oct 31 14:08:16.137116 kernel: iscsi: registered transport (tcp) Oct 31 14:08:16.164120 kernel: iscsi: registered transport (qla4xxx) Oct 31 14:08:16.164169 kernel: QLogic iSCSI HBA Driver Oct 31 14:08:16.181831 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 31 14:08:16.194335 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 14:08:16.195436 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 31 14:08:16.220154 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 31 14:08:16.221175 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 31 14:08:16.222169 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 31 14:08:16.245743 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 31 14:08:16.246839 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 14:08:16.264883 systemd-udevd[619]: Using default interface naming scheme 'v257'. Oct 31 14:08:16.272013 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 14:08:16.273774 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 31 14:08:16.290070 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 14:08:16.291337 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 14:08:16.293254 dracut-pre-trigger[697]: rd.md=0: removing MD RAID activation Oct 31 14:08:16.308512 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 31 14:08:16.311175 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 14:08:16.323695 systemd-networkd[731]: lo: Link UP Oct 31 14:08:16.323903 systemd-networkd[731]: lo: Gained carrier Oct 31 14:08:16.324412 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 14:08:16.324582 systemd[1]: Reached target network.target - Network. Oct 31 14:08:16.392556 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 14:08:16.393714 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 31 14:08:16.492483 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Oct 31 14:08:16.499900 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Oct 31 14:08:16.509783 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Oct 31 14:08:16.510485 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 31 14:08:16.520368 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Oct 31 14:08:16.524372 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Oct 31 14:08:16.526109 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Oct 31 14:08:16.539112 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Oct 31 14:08:16.564108 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Oct 31 14:08:16.572171 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Oct 31 14:08:16.574040 (udev-worker)[761]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. 
Oct 31 14:08:16.575424 systemd-networkd[731]: eth0: Interface name change detected, renamed to ens192. Oct 31 14:08:16.580110 kernel: cryptd: max_cpu_qlen set to 1000 Oct 31 14:08:16.582836 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 14:08:16.582912 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 14:08:16.583093 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 14:08:16.584206 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 14:08:16.587428 systemd-networkd[731]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Oct 31 14:08:16.590347 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 31 14:08:16.590467 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 31 14:08:16.592822 systemd-networkd[731]: ens192: Link UP Oct 31 14:08:16.592943 systemd-networkd[731]: ens192: Gained carrier Oct 31 14:08:16.616119 kernel: AES CTR mode by8 optimization enabled Oct 31 14:08:16.625778 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 14:08:16.662045 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 31 14:08:16.662584 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 14:08:16.662727 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 14:08:16.662926 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 14:08:16.663679 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 31 14:08:16.680609 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 31 14:08:16.946541 systemd-resolved[362]: Detected conflict on linux IN A 139.178.70.103 Oct 31 14:08:16.946558 systemd-resolved[362]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Oct 31 14:08:17.577727 disk-uuid[791]: Warning: The kernel is still using the old partition table. Oct 31 14:08:17.577727 disk-uuid[791]: The new table will be used at the next reboot or after you Oct 31 14:08:17.577727 disk-uuid[791]: run partprobe(8) or kpartx(8) Oct 31 14:08:17.577727 disk-uuid[791]: The operation has completed successfully. Oct 31 14:08:17.582436 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 14:08:17.582521 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 31 14:08:17.583366 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 31 14:08:17.702134 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (881) Oct 31 14:08:17.731010 kernel: BTRFS info (device sda6): first mount of filesystem dd2b9397-9351-49e9-bd32-bf3668fba946 Oct 31 14:08:17.731053 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 14:08:17.780884 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 31 14:08:17.780932 kernel: BTRFS info (device sda6): enabling free space tree Oct 31 14:08:17.786125 kernel: BTRFS info (device sda6): last unmount of filesystem dd2b9397-9351-49e9-bd32-bf3668fba946 Oct 31 14:08:17.787218 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 31 14:08:17.788500 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 31 14:08:18.124525 ignition[900]: Ignition 2.22.0 Oct 31 14:08:18.124782 ignition[900]: Stage: fetch-offline Oct 31 14:08:18.124806 ignition[900]: no configs at "/usr/lib/ignition/base.d" Oct 31 14:08:18.124813 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 14:08:18.124866 ignition[900]: parsed url from cmdline: "" Oct 31 14:08:18.124868 ignition[900]: no config URL provided Oct 31 14:08:18.124871 ignition[900]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 14:08:18.124877 ignition[900]: no config at "/usr/lib/ignition/user.ign" Oct 31 14:08:18.125306 ignition[900]: config successfully fetched Oct 31 14:08:18.125326 ignition[900]: parsing config with SHA512: c06352c36231f19d5654b94c4e293361e410daf6aa0dfa7d98ecf91f83d16a2382d48e40c63f8d6ea116bca9eabeddfa0e986aab433e005a77f940e8719d4a94 Oct 31 14:08:18.128825 unknown[900]: fetched base config from "system" Oct 31 14:08:18.128834 unknown[900]: fetched user config from "vmware" Oct 31 14:08:18.129057 ignition[900]: fetch-offline: fetch-offline passed Oct 31 14:08:18.129104 ignition[900]: Ignition finished successfully Oct 31 14:08:18.130130 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 14:08:18.130335 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 31 14:08:18.130858 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 31 14:08:18.151186 ignition[907]: Ignition 2.22.0 Oct 31 14:08:18.151471 ignition[907]: Stage: kargs Oct 31 14:08:18.151585 ignition[907]: no configs at "/usr/lib/ignition/base.d" Oct 31 14:08:18.151592 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 14:08:18.152743 ignition[907]: kargs: kargs passed Oct 31 14:08:18.152895 ignition[907]: Ignition finished successfully Oct 31 14:08:18.154199 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 31 14:08:18.155055 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 31 14:08:18.171407 ignition[913]: Ignition 2.22.0 Oct 31 14:08:18.171689 ignition[913]: Stage: disks Oct 31 14:08:18.171885 ignition[913]: no configs at "/usr/lib/ignition/base.d" Oct 31 14:08:18.172013 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 14:08:18.172689 ignition[913]: disks: disks passed Oct 31 14:08:18.172825 ignition[913]: Ignition finished successfully Oct 31 14:08:18.173927 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 31 14:08:18.174365 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 31 14:08:18.174491 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 31 14:08:18.174683 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 14:08:18.174868 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 14:08:18.175051 systemd[1]: Reached target basic.target - Basic System. Oct 31 14:08:18.175969 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 31 14:08:18.205017 systemd-fsck[921]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Oct 31 14:08:18.206444 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 31 14:08:18.207453 systemd[1]: Mounting sysroot.mount - /sysroot... 
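In the fetch-offline stage above, Ignition records the SHA512 digest of the config it fetched ("parsing config with SHA512: ...") before parsing it. A minimal sketch of that digest step, using a hypothetical stand-in config string rather than the real guestinfo payload from this boot:

    # Minimal sketch of the digest Ignition logs before parsing its config.
    # The config literal below is a hypothetical placeholder, not the config
    # that produced the digest recorded in this log.
    import hashlib

    config_bytes = b'{"ignition": {"version": "3.4.0"}}'
    digest = hashlib.sha512(config_bytes).hexdigest()
    print(f"parsing config with SHA512: {digest}")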
Oct 31 14:08:18.299819 kernel: EXT4-fs (sda9): mounted filesystem cbeebc11-9f40-4f51-91db-fa53497e9ba3 r/w with ordered data mode. Quota mode: none. Oct 31 14:08:18.299185 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 31 14:08:18.299640 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 31 14:08:18.301115 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 14:08:18.303171 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 31 14:08:18.303836 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 31 14:08:18.304163 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 31 14:08:18.304433 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 31 14:08:18.316947 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 31 14:08:18.317260 systemd-networkd[731]: ens192: Gained IPv6LL Oct 31 14:08:18.318756 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 31 14:08:18.323111 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (929) Oct 31 14:08:18.325595 kernel: BTRFS info (device sda6): first mount of filesystem dd2b9397-9351-49e9-bd32-bf3668fba946 Oct 31 14:08:18.325625 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 14:08:18.330362 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 31 14:08:18.330414 kernel: BTRFS info (device sda6): enabling free space tree Oct 31 14:08:18.332120 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 31 14:08:18.364567 initrd-setup-root[953]: cut: /sysroot/etc/passwd: No such file or directory Oct 31 14:08:18.367087 initrd-setup-root[960]: cut: /sysroot/etc/group: No such file or directory Oct 31 14:08:18.369475 initrd-setup-root[967]: cut: /sysroot/etc/shadow: No such file or directory Oct 31 14:08:18.371846 initrd-setup-root[974]: cut: /sysroot/etc/gshadow: No such file or directory Oct 31 14:08:18.464452 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 31 14:08:18.465516 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 31 14:08:18.466181 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 31 14:08:18.478154 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 31 14:08:18.479310 kernel: BTRFS info (device sda6): last unmount of filesystem dd2b9397-9351-49e9-bd32-bf3668fba946 Oct 31 14:08:18.493616 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 31 14:08:18.498578 ignition[1042]: INFO : Ignition 2.22.0 Oct 31 14:08:18.498578 ignition[1042]: INFO : Stage: mount Oct 31 14:08:18.499043 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 14:08:18.499043 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 14:08:18.499889 ignition[1042]: INFO : mount: mount passed Oct 31 14:08:18.500009 ignition[1042]: INFO : Ignition finished successfully Oct 31 14:08:18.500922 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 31 14:08:18.501647 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 31 14:08:19.300079 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 31 14:08:19.358116 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1055) Oct 31 14:08:19.361029 kernel: BTRFS info (device sda6): first mount of filesystem dd2b9397-9351-49e9-bd32-bf3668fba946 Oct 31 14:08:19.361067 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 14:08:19.364669 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 31 14:08:19.364694 kernel: BTRFS info (device sda6): enabling free space tree Oct 31 14:08:19.366044 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 31 14:08:19.391106 ignition[1072]: INFO : Ignition 2.22.0 Oct 31 14:08:19.391106 ignition[1072]: INFO : Stage: files Oct 31 14:08:19.391106 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 14:08:19.391106 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 14:08:19.392037 ignition[1072]: DEBUG : files: compiled without relabeling support, skipping Oct 31 14:08:19.398700 ignition[1072]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 31 14:08:19.398700 ignition[1072]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 31 14:08:19.423922 ignition[1072]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 31 14:08:19.424228 ignition[1072]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 31 14:08:19.424539 unknown[1072]: wrote ssh authorized keys file for user: core Oct 31 14:08:19.424795 ignition[1072]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 31 14:08:19.444040 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 31 14:08:19.444040 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 31 14:08:19.492117 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 31 14:08:19.532058 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 31 14:08:19.532336 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 31 14:08:19.532336 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 31 14:08:19.532336 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 31 14:08:19.532336 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 31 14:08:19.532336 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 14:08:19.533099 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 14:08:19.533099 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 14:08:19.533099 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 14:08:19.533577 ignition[1072]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 14:08:19.533728 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 14:08:19.533728 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 31 14:08:19.535935 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 31 14:08:19.535935 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 31 14:08:19.536373 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Oct 31 14:08:19.988597 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 31 14:08:20.283661 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 31 14:08:20.283661 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Oct 31 14:08:20.294406 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Oct 31 14:08:20.294406 ignition[1072]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 31 14:08:20.299838 ignition[1072]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 14:08:20.306415 ignition[1072]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 14:08:20.306415 ignition[1072]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 31 14:08:20.306415 ignition[1072]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 31 14:08:20.306877 ignition[1072]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 31 14:08:20.306877 ignition[1072]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 31 14:08:20.306877 ignition[1072]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 31 14:08:20.306877 ignition[1072]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 31 14:08:20.432330 ignition[1072]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 31 14:08:20.434576 ignition[1072]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 31 14:08:20.434820 ignition[1072]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 31 14:08:20.434820 ignition[1072]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 31 14:08:20.434820 ignition[1072]: INFO 
: files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 31 14:08:20.436227 ignition[1072]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 31 14:08:20.436227 ignition[1072]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 31 14:08:20.436227 ignition[1072]: INFO : files: files passed Oct 31 14:08:20.436227 ignition[1072]: INFO : Ignition finished successfully Oct 31 14:08:20.436911 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 31 14:08:20.437986 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 31 14:08:20.439193 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 31 14:08:20.451953 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 31 14:08:20.452372 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 31 14:08:20.455615 initrd-setup-root-after-ignition[1105]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 14:08:20.455615 initrd-setup-root-after-ignition[1105]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 31 14:08:20.456584 initrd-setup-root-after-ignition[1109]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 14:08:20.457328 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 31 14:08:20.457681 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 31 14:08:20.458266 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 31 14:08:20.494317 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 31 14:08:20.494379 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 31 14:08:20.494663 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 31 14:08:20.494788 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 31 14:08:20.495109 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 31 14:08:20.495584 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 31 14:08:20.519939 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 31 14:08:20.520877 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 31 14:08:20.535315 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 31 14:08:20.535404 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 31 14:08:20.535635 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 14:08:20.535836 systemd[1]: Stopped target timers.target - Timer Units. Oct 31 14:08:20.536028 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 31 14:08:20.536139 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 31 14:08:20.536485 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 31 14:08:20.536654 systemd[1]: Stopped target basic.target - Basic System. Oct 31 14:08:20.536841 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 31 14:08:20.537042 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Oct 31 14:08:20.537249 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 31 14:08:20.537464 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 31 14:08:20.537670 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 31 14:08:20.537876 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 14:08:20.538085 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 31 14:08:20.538304 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 31 14:08:20.538503 systemd[1]: Stopped target swap.target - Swaps. Oct 31 14:08:20.538664 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 31 14:08:20.538733 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 31 14:08:20.538998 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 31 14:08:20.539254 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 14:08:20.539437 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 31 14:08:20.539484 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 14:08:20.539661 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 31 14:08:20.539723 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 31 14:08:20.540007 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 31 14:08:20.540073 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 14:08:20.540303 systemd[1]: Stopped target paths.target - Path Units. Oct 31 14:08:20.540459 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 31 14:08:20.544114 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 14:08:20.544289 systemd[1]: Stopped target slices.target - Slice Units. Oct 31 14:08:20.544510 systemd[1]: Stopped target sockets.target - Socket Units. Oct 31 14:08:20.544693 systemd[1]: iscsid.socket: Deactivated successfully. Oct 31 14:08:20.544743 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 14:08:20.544899 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 31 14:08:20.544947 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 14:08:20.545111 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 31 14:08:20.545177 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 31 14:08:20.545432 systemd[1]: ignition-files.service: Deactivated successfully. Oct 31 14:08:20.545495 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 31 14:08:20.546186 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 31 14:08:20.547559 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 31 14:08:20.547672 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 31 14:08:20.547739 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 14:08:20.547907 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 31 14:08:20.547966 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 14:08:20.549193 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 31 14:08:20.549258 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 31 14:08:20.552213 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 31 14:08:20.559619 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 31 14:08:20.570157 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 31 14:08:20.571247 ignition[1129]: INFO : Ignition 2.22.0 Oct 31 14:08:20.571247 ignition[1129]: INFO : Stage: umount Oct 31 14:08:20.571546 ignition[1129]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 14:08:20.571546 ignition[1129]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 14:08:20.571980 ignition[1129]: INFO : umount: umount passed Oct 31 14:08:20.572076 ignition[1129]: INFO : Ignition finished successfully Oct 31 14:08:20.573058 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 31 14:08:20.573147 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 31 14:08:20.573399 systemd[1]: Stopped target network.target - Network. Oct 31 14:08:20.573549 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 31 14:08:20.573577 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 31 14:08:20.573728 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 31 14:08:20.573753 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 31 14:08:20.573889 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 31 14:08:20.573913 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 31 14:08:20.574064 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 31 14:08:20.574086 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 31 14:08:20.574284 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 31 14:08:20.574541 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 31 14:08:20.581701 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 31 14:08:20.581771 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 31 14:08:20.584087 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 31 14:08:20.584153 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 31 14:08:20.585544 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 31 14:08:20.585821 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 31 14:08:20.585952 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 31 14:08:20.586736 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 31 14:08:20.586944 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 31 14:08:20.586971 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 14:08:20.587264 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Oct 31 14:08:20.587288 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Oct 31 14:08:20.587563 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 31 14:08:20.587587 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 31 14:08:20.587977 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 31 14:08:20.588000 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 31 14:08:20.588165 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Oct 31 14:08:20.599027 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 31 14:08:20.599298 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 14:08:20.599684 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 31 14:08:20.599821 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 31 14:08:20.599939 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 31 14:08:20.599957 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 14:08:20.600062 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 31 14:08:20.600088 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 31 14:08:20.600246 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 31 14:08:20.600269 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 31 14:08:20.600405 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 31 14:08:20.600428 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 14:08:20.602159 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 31 14:08:20.602381 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 31 14:08:20.602411 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 14:08:20.602678 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 31 14:08:20.602701 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 14:08:20.602974 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 31 14:08:20.602999 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 14:08:20.603412 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 31 14:08:20.603435 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 14:08:20.603696 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 14:08:20.603719 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 14:08:20.613630 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 31 14:08:20.613934 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 31 14:08:20.645400 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 31 14:08:20.645493 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 31 14:08:20.883866 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 31 14:08:20.883974 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 31 14:08:20.884369 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 31 14:08:20.884492 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 31 14:08:20.884526 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 31 14:08:20.885149 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 31 14:08:20.902214 systemd[1]: Switching root. Oct 31 14:08:20.926722 systemd-journald[332]: Journal stopped Oct 31 14:08:22.243940 systemd-journald[332]: Received SIGTERM from PID 1 (systemd). 
Oct 31 14:08:22.243976 kernel: SELinux: policy capability network_peer_controls=1 Oct 31 14:08:22.243985 kernel: SELinux: policy capability open_perms=1 Oct 31 14:08:22.243992 kernel: SELinux: policy capability extended_socket_class=1 Oct 31 14:08:22.243998 kernel: SELinux: policy capability always_check_network=0 Oct 31 14:08:22.244004 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 31 14:08:22.244013 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 31 14:08:22.244019 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 31 14:08:22.244025 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 31 14:08:22.244033 kernel: SELinux: policy capability userspace_initial_context=0 Oct 31 14:08:22.244040 systemd[1]: Successfully loaded SELinux policy in 92.175ms. Oct 31 14:08:22.244047 kernel: audit: type=1403 audit(1761919701.585:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 31 14:08:22.244056 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.004ms. Oct 31 14:08:22.244063 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 31 14:08:22.244071 systemd[1]: Detected virtualization vmware. Oct 31 14:08:22.244079 systemd[1]: Detected architecture x86-64. Oct 31 14:08:22.244087 systemd[1]: Detected first boot. Oct 31 14:08:22.244118 systemd[1]: Initializing machine ID from random generator. Oct 31 14:08:22.244130 zram_generator::config[1174]: No configuration found. Oct 31 14:08:22.244255 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Oct 31 14:08:22.244267 kernel: Guest personality initialized and is active Oct 31 14:08:22.244276 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 31 14:08:22.244283 kernel: Initialized host personality Oct 31 14:08:22.244290 kernel: NET: Registered PF_VSOCK protocol family Oct 31 14:08:22.244297 systemd[1]: Populated /etc with preset unit settings. Oct 31 14:08:22.244306 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 31 14:08:22.244315 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Oct 31 14:08:22.244322 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 31 14:08:22.244330 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 31 14:08:22.244337 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 31 14:08:22.244344 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 31 14:08:22.244352 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 31 14:08:22.244360 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 31 14:08:22.244368 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 31 14:08:22.244375 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 31 14:08:22.244382 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Oct 31 14:08:22.244390 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 31 14:08:22.244397 systemd[1]: Created slice user.slice - User and Session Slice. Oct 31 14:08:22.244404 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 14:08:22.244413 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 14:08:22.244422 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 31 14:08:22.244430 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 31 14:08:22.244437 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 31 14:08:22.244445 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 14:08:22.244453 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 31 14:08:22.244461 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 14:08:22.244469 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 14:08:22.244476 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 31 14:08:22.244484 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 31 14:08:22.244492 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 31 14:08:22.244499 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 31 14:08:22.244508 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 14:08:22.244516 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 14:08:22.244523 systemd[1]: Reached target slices.target - Slice Units. Oct 31 14:08:22.244531 systemd[1]: Reached target swap.target - Swaps. Oct 31 14:08:22.244540 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 31 14:08:22.244549 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 31 14:08:22.244558 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 31 14:08:22.244566 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 14:08:22.244574 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 14:08:22.244585 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 14:08:22.244599 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 31 14:08:22.244612 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 31 14:08:22.244620 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 31 14:08:22.244628 systemd[1]: Mounting media.mount - External Media Directory... Oct 31 14:08:22.244635 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 14:08:22.244643 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 31 14:08:22.244651 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 31 14:08:22.244658 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Oct 31 14:08:22.244667 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 31 14:08:22.244675 systemd[1]: Reached target machines.target - Containers. Oct 31 14:08:22.244683 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 31 14:08:22.244691 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Oct 31 14:08:22.244698 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 14:08:22.244706 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 31 14:08:22.244715 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 14:08:22.244722 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 31 14:08:22.244730 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 14:08:22.244737 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 31 14:08:22.244745 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 14:08:22.244752 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 31 14:08:22.244760 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 31 14:08:22.244769 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 31 14:08:22.244776 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 31 14:08:22.244784 systemd[1]: Stopped systemd-fsck-usr.service. Oct 31 14:08:22.244792 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 14:08:22.244799 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 14:08:22.244807 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 14:08:22.244814 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 31 14:08:22.244823 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 31 14:08:22.244831 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 31 14:08:22.244839 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 14:08:22.244847 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 14:08:22.244854 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 31 14:08:22.244862 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 31 14:08:22.244871 systemd[1]: Mounted media.mount - External Media Directory. Oct 31 14:08:22.244879 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 31 14:08:22.244886 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 31 14:08:22.244894 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 31 14:08:22.244901 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 14:08:22.244909 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Oct 31 14:08:22.244916 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 31 14:08:22.244925 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 14:08:22.244933 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 14:08:22.244940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 14:08:22.244948 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 14:08:22.244957 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 14:08:22.244964 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 31 14:08:22.244972 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 14:08:22.244981 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 14:08:22.244988 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 31 14:08:22.244996 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 31 14:08:22.245004 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 31 14:08:22.245011 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 14:08:22.245019 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 31 14:08:22.245028 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 31 14:08:22.245036 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 31 14:08:22.245043 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 31 14:08:22.245053 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 31 14:08:22.245063 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 31 14:08:22.245071 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 14:08:22.245079 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 31 14:08:22.245086 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 14:08:22.245226 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 31 14:08:22.245238 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 14:08:22.245246 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 31 14:08:22.245254 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 31 14:08:22.245264 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 31 14:08:22.245272 kernel: fuse: init (API version 7.41) Oct 31 14:08:22.245280 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 14:08:22.245288 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 31 14:08:22.245296 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 31 14:08:22.245304 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Oct 31 14:08:22.245311 kernel: ACPI: bus type drm_connector registered Oct 31 14:08:22.245320 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 31 14:08:22.245348 systemd-journald[1267]: Collecting audit messages is disabled. Oct 31 14:08:22.245365 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 31 14:08:22.245375 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 31 14:08:22.245386 systemd-journald[1267]: Journal started Oct 31 14:08:22.245401 systemd-journald[1267]: Runtime Journal (/run/log/journal/623d68bbe0d247e4b50e4d11a7b2dfcd) is 4.8M, max 38.4M, 33.6M free. Oct 31 14:08:21.973416 systemd[1]: Queued start job for default target multi-user.target. Oct 31 14:08:22.225865 ignition[1292]: Ignition 2.22.0 Oct 31 14:08:21.986037 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Oct 31 14:08:22.248956 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 14:08:22.226043 ignition[1292]: deleting config from guestinfo properties Oct 31 14:08:21.986303 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 31 14:08:22.242837 ignition[1292]: Successfully deleted config Oct 31 14:08:22.218464 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Oct 31 14:08:22.218487 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Oct 31 14:08:22.250001 jq[1244]: true Oct 31 14:08:22.250186 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 31 14:08:22.251840 jq[1287]: true Oct 31 14:08:22.256592 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 31 14:08:22.258432 kernel: loop1: detected capacity change from 0 to 128912 Oct 31 14:08:22.261298 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Oct 31 14:08:22.262665 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 31 14:08:22.268456 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 31 14:08:22.276162 systemd-journald[1267]: Time spent on flushing to /var/log/journal/623d68bbe0d247e4b50e4d11a7b2dfcd is 38.660ms for 1760 entries. Oct 31 14:08:22.276162 systemd-journald[1267]: System Journal (/var/log/journal/623d68bbe0d247e4b50e4d11a7b2dfcd) is 8M, max 588.1M, 580.1M free. Oct 31 14:08:22.322081 systemd-journald[1267]: Received client request to flush runtime journal. Oct 31 14:08:22.309134 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 14:08:22.323055 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 31 14:08:22.349292 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 31 14:08:22.359403 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 31 14:08:22.363241 kernel: loop2: detected capacity change from 0 to 111544 Oct 31 14:08:22.362206 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 31 14:08:22.364242 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 14:08:22.375212 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 31 14:08:22.382901 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Oct 31 14:08:22.383104 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Oct 31 14:08:22.387248 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
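The journal flush accounting above (38.660 ms spent flushing 1760 entries to /var/log/journal) averages out to roughly 22 microseconds per entry; a back-of-the-envelope check, ignoring batching and any fixed per-flush cost:

    # Average flush cost per journal entry, taken from the figures logged above.
    ms_total, entries = 38.660, 1760
    print(round(ms_total / entries * 1000, 1), "microseconds per entry")   # ~22.0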
Oct 31 14:08:22.396113 kernel: loop3: detected capacity change from 0 to 229808 Oct 31 14:08:22.412643 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 31 14:08:22.435106 kernel: loop4: detected capacity change from 0 to 2960 Oct 31 14:08:22.456113 kernel: loop5: detected capacity change from 0 to 128912 Oct 31 14:08:22.533631 systemd-resolved[1343]: Positive Trust Anchors: Oct 31 14:08:22.533848 systemd-resolved[1343]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 14:08:22.533882 systemd-resolved[1343]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 31 14:08:22.533930 systemd-resolved[1343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 14:08:22.536962 systemd-resolved[1343]: Defaulting to hostname 'linux'. Oct 31 14:08:22.538081 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 14:08:22.538270 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 14:08:22.551112 kernel: loop6: detected capacity change from 0 to 111544 Oct 31 14:08:22.561147 kernel: loop7: detected capacity change from 0 to 229808 Oct 31 14:08:22.577112 kernel: loop1: detected capacity change from 0 to 2960 Oct 31 14:08:22.584158 (sd-merge)[1357]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-vmware.raw'. Oct 31 14:08:22.586527 (sd-merge)[1357]: Merged extensions into '/usr'. Oct 31 14:08:22.590162 systemd[1]: Reload requested from client PID 1305 ('systemd-sysext') (unit systemd-sysext.service)... Oct 31 14:08:22.590174 systemd[1]: Reloading... Oct 31 14:08:22.649110 zram_generator::config[1385]: No configuration found. Oct 31 14:08:22.737474 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 31 14:08:22.786212 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 31 14:08:22.786470 systemd[1]: Reloading finished in 195 ms. Oct 31 14:08:22.817663 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 31 14:08:22.818059 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 31 14:08:22.824052 systemd[1]: Starting ensure-sysext.service... Oct 31 14:08:22.826194 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 14:08:22.827280 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 14:08:22.834989 systemd[1]: Reload requested from client PID 1441 ('systemctl') (unit ensure-sysext.service)... Oct 31 14:08:22.834998 systemd[1]: Reloading... Oct 31 14:08:22.848597 systemd-tmpfiles[1442]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Oct 31 14:08:22.848811 systemd-tmpfiles[1442]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 31 14:08:22.848990 systemd-tmpfiles[1442]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 31 14:08:22.849163 systemd-tmpfiles[1442]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 31 14:08:22.849690 systemd-tmpfiles[1442]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 31 14:08:22.849850 systemd-tmpfiles[1442]: ACLs are not supported, ignoring. Oct 31 14:08:22.849887 systemd-tmpfiles[1442]: ACLs are not supported, ignoring. Oct 31 14:08:22.851866 systemd-udevd[1443]: Using default interface naming scheme 'v257'. Oct 31 14:08:22.858202 systemd-tmpfiles[1442]: Detected autofs mount point /boot during canonicalization of boot. Oct 31 14:08:22.858210 systemd-tmpfiles[1442]: Skipping /boot Oct 31 14:08:22.862654 systemd-tmpfiles[1442]: Detected autofs mount point /boot during canonicalization of boot. Oct 31 14:08:22.862660 systemd-tmpfiles[1442]: Skipping /boot Oct 31 14:08:22.888116 zram_generator::config[1473]: No configuration found. Oct 31 14:08:22.971128 kernel: mousedev: PS/2 mouse device common for all mice Oct 31 14:08:22.981118 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 31 14:08:22.988268 kernel: ACPI: button: Power Button [PWRF] Oct 31 14:08:23.031538 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 31 14:08:23.092849 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 31 14:08:23.092994 systemd[1]: Reloading finished in 257 ms. Oct 31 14:08:23.102241 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 14:08:23.107118 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Oct 31 14:08:23.109119 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 14:08:23.129401 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Oct 31 14:08:23.131229 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 14:08:23.132375 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 31 14:08:23.134685 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 31 14:08:23.135884 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 31 14:08:23.138295 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 14:08:23.139537 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 14:08:23.143417 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 31 14:08:23.149310 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 14:08:23.150661 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 14:08:23.153404 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Oct 31 14:08:23.153551 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 14:08:23.156729 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 31 14:08:23.164344 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 14:08:23.168444 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 31 14:08:23.168581 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 14:08:23.175190 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 14:08:23.175296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 14:08:23.175354 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 14:08:23.175418 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 14:08:23.178351 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 14:08:23.181512 (udev-worker)[1481]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Oct 31 14:08:23.183368 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 31 14:08:23.183583 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 14:08:23.183655 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 14:08:23.183751 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 14:08:23.184444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 14:08:23.185194 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 14:08:23.187575 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 14:08:23.202750 systemd[1]: Finished ensure-sysext.service. Oct 31 14:08:23.209736 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 31 14:08:23.225880 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 31 14:08:23.230957 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 14:08:23.234251 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 14:08:23.235266 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 31 14:08:23.241512 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Oct 31 14:08:23.241646 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 31 14:08:23.242013 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 31 14:08:23.243795 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 31 14:08:23.244155 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 31 14:08:23.249299 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 31 14:08:23.252232 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 14:08:23.276566 augenrules[1619]: No rules Oct 31 14:08:23.277950 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 31 14:08:23.282284 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 14:08:23.282427 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 31 14:08:23.282774 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 31 14:08:23.283134 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 31 14:08:23.283654 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 31 14:08:23.359480 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 31 14:08:23.359708 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 14:08:23.376038 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 31 14:08:23.376291 systemd[1]: Reached target time-set.target - System Time Set. Oct 31 14:08:23.380427 systemd-networkd[1576]: lo: Link UP Oct 31 14:08:23.380657 systemd-networkd[1576]: lo: Gained carrier Oct 31 14:08:23.382092 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 14:08:23.382245 systemd[1]: Reached target network.target - Network. Oct 31 14:08:23.382255 systemd-networkd[1576]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Oct 31 14:08:23.385175 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 31 14:08:23.385392 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 31 14:08:23.384952 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 31 14:08:23.386004 systemd-networkd[1576]: ens192: Link UP Oct 31 14:08:23.386497 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 31 14:08:23.386635 systemd-networkd[1576]: ens192: Gained carrier Oct 31 14:08:23.390630 systemd-timesyncd[1592]: Network configuration changed, trying to establish connection. Oct 31 14:08:23.432338 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 31 14:08:23.456438 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 14:10:01.865088 systemd-resolved[1343]: Clock change detected. Flushing caches. Oct 31 14:10:01.865225 systemd-timesyncd[1592]: Contacted time server 173.249.203.227:123 (1.flatcar.pool.ntp.org). Oct 31 14:10:01.865269 systemd-timesyncd[1592]: Initial clock synchronization to Fri 2025-10-31 14:10:01.865063 UTC. Oct 31 14:10:02.192708 ldconfig[1559]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Oct 31 14:10:02.194520 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 31 14:10:02.195691 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 31 14:10:02.206301 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 31 14:10:02.206871 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 14:10:02.207098 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 31 14:10:02.207269 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 31 14:10:02.207425 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 31 14:10:02.207607 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 31 14:10:02.207754 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 31 14:10:02.208027 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 31 14:10:02.208178 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 31 14:10:02.208232 systemd[1]: Reached target paths.target - Path Units. Oct 31 14:10:02.208362 systemd[1]: Reached target timers.target - Timer Units. Oct 31 14:10:02.211560 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 31 14:10:02.212537 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 31 14:10:02.214568 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 31 14:10:02.214887 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 31 14:10:02.215022 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 31 14:10:02.216202 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 31 14:10:02.216458 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 31 14:10:02.216946 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 31 14:10:02.217440 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 14:10:02.217545 systemd[1]: Reached target basic.target - Basic System. Oct 31 14:10:02.217672 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 31 14:10:02.217690 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 31 14:10:02.218356 systemd[1]: Starting containerd.service - containerd container runtime... Oct 31 14:10:02.220864 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 31 14:10:02.222468 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 31 14:10:02.223561 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 31 14:10:02.233873 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 31 14:10:02.234012 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 31 14:10:02.234905 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 31 14:10:02.236201 jq[1649]: false Oct 31 14:10:02.236121 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Oct 31 14:10:02.238599 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 31 14:10:02.239926 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 31 14:10:02.241514 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 31 14:10:02.244501 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 31 14:10:02.244624 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 31 14:10:02.245134 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 31 14:10:02.245534 systemd[1]: Starting update-engine.service - Update Engine... Oct 31 14:10:02.247902 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 31 14:10:02.250294 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Oct 31 14:10:02.254978 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 31 14:10:02.255393 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 31 14:10:02.256946 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 31 14:10:02.259456 google_oslogin_nss_cache[1651]: oslogin_cache_refresh[1651]: Refreshing passwd entry cache Oct 31 14:10:02.259459 oslogin_cache_refresh[1651]: Refreshing passwd entry cache Oct 31 14:10:02.267024 jq[1658]: true Oct 31 14:10:02.266005 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 31 14:10:02.270146 extend-filesystems[1650]: Found /dev/sda6 Oct 31 14:10:02.269434 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 31 14:10:02.279589 extend-filesystems[1650]: Found /dev/sda9 Oct 31 14:10:02.282042 oslogin_cache_refresh[1651]: Failure getting users, quitting Oct 31 14:10:02.282581 extend-filesystems[1650]: Checking size of /dev/sda9 Oct 31 14:10:02.282705 google_oslogin_nss_cache[1651]: oslogin_cache_refresh[1651]: Failure getting users, quitting Oct 31 14:10:02.282705 google_oslogin_nss_cache[1651]: oslogin_cache_refresh[1651]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 31 14:10:02.282705 google_oslogin_nss_cache[1651]: oslogin_cache_refresh[1651]: Refreshing group entry cache Oct 31 14:10:02.282052 oslogin_cache_refresh[1651]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 31 14:10:02.282078 oslogin_cache_refresh[1651]: Refreshing group entry cache Oct 31 14:10:02.283428 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Oct 31 14:10:02.286922 update_engine[1657]: I20251031 14:10:02.286711 1657 main.cc:92] Flatcar Update Engine starting Oct 31 14:10:02.288358 jq[1674]: true Oct 31 14:10:02.288899 google_oslogin_nss_cache[1651]: oslogin_cache_refresh[1651]: Failure getting groups, quitting Oct 31 14:10:02.288899 google_oslogin_nss_cache[1651]: oslogin_cache_refresh[1651]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 31 14:10:02.288895 oslogin_cache_refresh[1651]: Failure getting groups, quitting Oct 31 14:10:02.288902 oslogin_cache_refresh[1651]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 31 14:10:02.289398 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... 
Oct 31 14:10:02.289726 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 31 14:10:02.290190 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 31 14:10:02.299771 tar[1664]: linux-amd64/LICENSE Oct 31 14:10:02.299771 tar[1664]: linux-amd64/helm Oct 31 14:10:02.304527 systemd[1]: motdgen.service: Deactivated successfully. Oct 31 14:10:02.304732 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 31 14:10:02.308100 extend-filesystems[1650]: Resized partition /dev/sda9 Oct 31 14:10:02.320534 extend-filesystems[1703]: resize2fs 1.47.3 (8-Jul-2025) Oct 31 14:10:02.332067 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 1635323 blocks Oct 31 14:10:02.332103 kernel: EXT4-fs (sda9): resized filesystem to 1635323 Oct 31 14:10:02.334642 unknown[1686]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Oct 31 14:10:02.338245 unknown[1686]: Core dump limit set to -1 Oct 31 14:10:02.368807 extend-filesystems[1703]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 31 14:10:02.368807 extend-filesystems[1703]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 31 14:10:02.368807 extend-filesystems[1703]: The filesystem on /dev/sda9 is now 1635323 (4k) blocks long. Oct 31 14:10:02.368560 dbus-daemon[1647]: [system] SELinux support is enabled Oct 31 14:10:02.371995 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 31 14:10:02.379011 extend-filesystems[1650]: Resized filesystem in /dev/sda9 Oct 31 14:10:02.381876 bash[1720]: Updated "/home/core/.ssh/authorized_keys" Oct 31 14:10:02.379249 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Oct 31 14:10:02.379555 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 31 14:10:02.379700 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 31 14:10:02.381067 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 31 14:10:02.382693 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 31 14:10:02.384866 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 31 14:10:02.384884 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 31 14:10:02.385043 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 31 14:10:02.385055 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 31 14:10:02.387518 systemd[1]: Started update-engine.service - Update Engine. Oct 31 14:10:02.388806 update_engine[1657]: I20251031 14:10:02.388324 1657 update_check_scheduler.cc:74] Next update check in 7m31s Oct 31 14:10:02.393404 systemd-logind[1656]: Watching system buttons on /dev/input/event2 (Power Button) Oct 31 14:10:02.394826 systemd-logind[1656]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 31 14:10:02.395001 systemd-logind[1656]: New seat seat0. Oct 31 14:10:02.395555 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 31 14:10:02.395781 systemd[1]: Started systemd-logind.service - User Login Management. 
Oct 31 14:10:02.466556 sshd_keygen[1696]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 31 14:10:02.536842 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 31 14:10:02.540160 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 31 14:10:02.543650 locksmithd[1727]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 31 14:10:02.570050 systemd[1]: issuegen.service: Deactivated successfully. Oct 31 14:10:02.570224 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 31 14:10:02.573251 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 31 14:10:02.586263 containerd[1690]: time="2025-10-31T14:10:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 31 14:10:02.588100 containerd[1690]: time="2025-10-31T14:10:02.587217811Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 31 14:10:02.591131 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 31 14:10:02.593238 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 31 14:10:02.594979 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 31 14:10:02.595159 systemd[1]: Reached target getty.target - Login Prompts. Oct 31 14:10:02.603667 containerd[1690]: time="2025-10-31T14:10:02.603637273Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.535µs" Oct 31 14:10:02.603853 containerd[1690]: time="2025-10-31T14:10:02.603839730Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 31 14:10:02.603898 containerd[1690]: time="2025-10-31T14:10:02.603890517Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 31 14:10:02.604819 containerd[1690]: time="2025-10-31T14:10:02.604588094Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 31 14:10:02.604819 containerd[1690]: time="2025-10-31T14:10:02.604603771Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 31 14:10:02.604819 containerd[1690]: time="2025-10-31T14:10:02.604618948Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 31 14:10:02.604819 containerd[1690]: time="2025-10-31T14:10:02.604657765Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 31 14:10:02.604819 containerd[1690]: time="2025-10-31T14:10:02.604665421Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 31 14:10:02.604819 containerd[1690]: time="2025-10-31T14:10:02.604772770Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 31 14:10:02.604819 containerd[1690]: time="2025-10-31T14:10:02.604781191Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 31 14:10:02.604955 containerd[1690]: 
time="2025-10-31T14:10:02.604945283Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 31 14:10:02.605042 containerd[1690]: time="2025-10-31T14:10:02.605032683Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 31 14:10:02.605125 containerd[1690]: time="2025-10-31T14:10:02.605115438Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 31 14:10:02.605428 containerd[1690]: time="2025-10-31T14:10:02.605417711Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 31 14:10:02.605477 containerd[1690]: time="2025-10-31T14:10:02.605468223Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 31 14:10:02.606078 containerd[1690]: time="2025-10-31T14:10:02.605805217Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 31 14:10:02.606078 containerd[1690]: time="2025-10-31T14:10:02.605830051Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 31 14:10:02.606078 containerd[1690]: time="2025-10-31T14:10:02.605959750Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 31 14:10:02.606078 containerd[1690]: time="2025-10-31T14:10:02.605994047Z" level=info msg="metadata content store policy set" policy=shared Oct 31 14:10:02.607349 containerd[1690]: time="2025-10-31T14:10:02.607336839Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 31 14:10:02.608007 containerd[1690]: time="2025-10-31T14:10:02.607698249Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 31 14:10:02.608007 containerd[1690]: time="2025-10-31T14:10:02.607716509Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 31 14:10:02.608007 containerd[1690]: time="2025-10-31T14:10:02.607724832Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 31 14:10:02.608007 containerd[1690]: time="2025-10-31T14:10:02.607732053Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 31 14:10:02.608007 containerd[1690]: time="2025-10-31T14:10:02.607738084Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 31 14:10:02.608007 containerd[1690]: time="2025-10-31T14:10:02.607762571Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 31 14:10:02.608007 containerd[1690]: time="2025-10-31T14:10:02.607771711Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 31 14:10:02.608007 containerd[1690]: time="2025-10-31T14:10:02.607783192Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 31 14:10:02.608007 containerd[1690]: time="2025-10-31T14:10:02.607805539Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 31 
14:10:02.608007 containerd[1690]: time="2025-10-31T14:10:02.607814622Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 31 14:10:02.608007 containerd[1690]: time="2025-10-31T14:10:02.607821656Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 31 14:10:02.608007 containerd[1690]: time="2025-10-31T14:10:02.607876213Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 31 14:10:02.608007 containerd[1690]: time="2025-10-31T14:10:02.607888249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 31 14:10:02.608007 containerd[1690]: time="2025-10-31T14:10:02.607896099Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 31 14:10:02.608198 containerd[1690]: time="2025-10-31T14:10:02.607904063Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 31 14:10:02.608198 containerd[1690]: time="2025-10-31T14:10:02.607910778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 31 14:10:02.608198 containerd[1690]: time="2025-10-31T14:10:02.607916743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 31 14:10:02.608198 containerd[1690]: time="2025-10-31T14:10:02.607923178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 31 14:10:02.608198 containerd[1690]: time="2025-10-31T14:10:02.607928597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 31 14:10:02.608198 containerd[1690]: time="2025-10-31T14:10:02.607939719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 31 14:10:02.608198 containerd[1690]: time="2025-10-31T14:10:02.607946499Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 31 14:10:02.608198 containerd[1690]: time="2025-10-31T14:10:02.607952853Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 31 14:10:02.608198 containerd[1690]: time="2025-10-31T14:10:02.607985732Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 31 14:10:02.608198 containerd[1690]: time="2025-10-31T14:10:02.607993360Z" level=info msg="Start snapshots syncer" Oct 31 14:10:02.608351 containerd[1690]: time="2025-10-31T14:10:02.608342104Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 31 14:10:02.608551 containerd[1690]: time="2025-10-31T14:10:02.608530430Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 31 14:10:02.608822 containerd[1690]: time="2025-10-31T14:10:02.608810991Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 31 14:10:02.610949 containerd[1690]: time="2025-10-31T14:10:02.610252327Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 31 14:10:02.610949 containerd[1690]: time="2025-10-31T14:10:02.610312975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 31 14:10:02.610949 containerd[1690]: time="2025-10-31T14:10:02.610327920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 31 14:10:02.610949 containerd[1690]: time="2025-10-31T14:10:02.610334765Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 31 14:10:02.610949 containerd[1690]: time="2025-10-31T14:10:02.610340868Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 31 14:10:02.610949 containerd[1690]: time="2025-10-31T14:10:02.610349974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 31 14:10:02.610949 containerd[1690]: time="2025-10-31T14:10:02.610358855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 31 14:10:02.610949 containerd[1690]: time="2025-10-31T14:10:02.610367025Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 31 14:10:02.610949 containerd[1690]: time="2025-10-31T14:10:02.610386617Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 31 14:10:02.610949 containerd[1690]: 
time="2025-10-31T14:10:02.610394520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 31 14:10:02.610949 containerd[1690]: time="2025-10-31T14:10:02.610401666Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 31 14:10:02.610949 containerd[1690]: time="2025-10-31T14:10:02.610418807Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 31 14:10:02.610949 containerd[1690]: time="2025-10-31T14:10:02.610428041Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 31 14:10:02.610949 containerd[1690]: time="2025-10-31T14:10:02.610433177Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 31 14:10:02.611149 containerd[1690]: time="2025-10-31T14:10:02.610438538Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 31 14:10:02.611149 containerd[1690]: time="2025-10-31T14:10:02.610442901Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 31 14:10:02.611149 containerd[1690]: time="2025-10-31T14:10:02.610448150Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 31 14:10:02.611149 containerd[1690]: time="2025-10-31T14:10:02.610454211Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 31 14:10:02.611149 containerd[1690]: time="2025-10-31T14:10:02.610463705Z" level=info msg="runtime interface created" Oct 31 14:10:02.611149 containerd[1690]: time="2025-10-31T14:10:02.610466833Z" level=info msg="created NRI interface" Oct 31 14:10:02.611149 containerd[1690]: time="2025-10-31T14:10:02.610471261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 31 14:10:02.611149 containerd[1690]: time="2025-10-31T14:10:02.610477512Z" level=info msg="Connect containerd service" Oct 31 14:10:02.611149 containerd[1690]: time="2025-10-31T14:10:02.610493186Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 31 14:10:02.614772 containerd[1690]: time="2025-10-31T14:10:02.614745708Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 14:10:02.729048 tar[1664]: linux-amd64/README.md Oct 31 14:10:02.744467 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 31 14:10:02.751852 containerd[1690]: time="2025-10-31T14:10:02.751649515Z" level=info msg="Start subscribing containerd event" Oct 31 14:10:02.751852 containerd[1690]: time="2025-10-31T14:10:02.751683368Z" level=info msg="Start recovering state" Oct 31 14:10:02.751852 containerd[1690]: time="2025-10-31T14:10:02.751733335Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Oct 31 14:10:02.751852 containerd[1690]: time="2025-10-31T14:10:02.751742025Z" level=info msg="Start event monitor" Oct 31 14:10:02.751852 containerd[1690]: time="2025-10-31T14:10:02.751751569Z" level=info msg="Start cni network conf syncer for default" Oct 31 14:10:02.751852 containerd[1690]: time="2025-10-31T14:10:02.751756411Z" level=info msg="Start streaming server" Oct 31 14:10:02.751852 containerd[1690]: time="2025-10-31T14:10:02.751760542Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 31 14:10:02.751852 containerd[1690]: time="2025-10-31T14:10:02.751761372Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 31 14:10:02.751852 containerd[1690]: time="2025-10-31T14:10:02.751782864Z" level=info msg="runtime interface starting up..." Oct 31 14:10:02.751852 containerd[1690]: time="2025-10-31T14:10:02.751786161Z" level=info msg="starting plugins..." Oct 31 14:10:02.751852 containerd[1690]: time="2025-10-31T14:10:02.751815564Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 31 14:10:02.752229 containerd[1690]: time="2025-10-31T14:10:02.751877020Z" level=info msg="containerd successfully booted in 0.166332s" Oct 31 14:10:02.751951 systemd[1]: Started containerd.service - containerd container runtime. Oct 31 14:10:03.040907 systemd-networkd[1576]: ens192: Gained IPv6LL Oct 31 14:10:03.042037 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 31 14:10:03.042701 systemd[1]: Reached target network-online.target - Network is Online. Oct 31 14:10:03.043726 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Oct 31 14:10:03.044902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 14:10:03.047865 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 31 14:10:03.066711 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 31 14:10:03.090926 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 31 14:10:03.091167 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Oct 31 14:10:03.091994 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 31 14:10:04.502694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 14:10:04.503070 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 31 14:10:04.503269 systemd[1]: Startup finished in 2.789s (kernel) + 6.016s (initrd) + 4.620s (userspace) = 13.426s. Oct 31 14:10:04.513342 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 14:10:04.552327 login[1791]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 31 14:10:04.557095 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 31 14:10:04.557663 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 31 14:10:04.565952 systemd-logind[1656]: New session 1 of user core. Oct 31 14:10:04.573299 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 31 14:10:04.575430 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Oct 31 14:10:04.589314 (systemd)[1859]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 31 14:10:04.590835 systemd-logind[1656]: New session c1 of user core. Oct 31 14:10:04.735241 systemd[1859]: Queued start job for default target default.target. Oct 31 14:10:04.742615 systemd[1859]: Created slice app.slice - User Application Slice. Oct 31 14:10:04.742634 systemd[1859]: Reached target paths.target - Paths. Oct 31 14:10:04.742660 systemd[1859]: Reached target timers.target - Timers. Oct 31 14:10:04.743345 systemd[1859]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 31 14:10:04.750235 systemd[1859]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 31 14:10:04.750267 systemd[1859]: Reached target sockets.target - Sockets. Oct 31 14:10:04.750292 systemd[1859]: Reached target basic.target - Basic System. Oct 31 14:10:04.750314 systemd[1859]: Reached target default.target - Main User Target. Oct 31 14:10:04.750331 systemd[1859]: Startup finished in 155ms. Oct 31 14:10:04.750398 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 31 14:10:04.751503 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 31 14:10:04.857655 login[1792]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 31 14:10:04.860723 systemd-logind[1656]: New session 2 of user core. Oct 31 14:10:04.873262 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 31 14:10:05.478649 kubelet[1854]: E1031 14:10:05.478587 1854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 14:10:05.479896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 14:10:05.479988 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 14:10:05.480254 systemd[1]: kubelet.service: Consumed 653ms CPU time, 266.1M memory peak. Oct 31 14:10:15.730582 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 31 14:10:15.732033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 14:10:15.991414 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 14:10:16.003045 (kubelet)[1904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 14:10:16.108174 kubelet[1904]: E1031 14:10:16.108136 1904 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 14:10:16.110870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 14:10:16.111024 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 14:10:16.111385 systemd[1]: kubelet.service: Consumed 119ms CPU time, 109.3M memory peak. Oct 31 14:10:26.361441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 31 14:10:26.363087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 31 14:10:26.713525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 14:10:26.722039 (kubelet)[1918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 14:10:26.747794 kubelet[1918]: E1031 14:10:26.747754 1918 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 14:10:26.749292 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 14:10:26.749434 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 14:10:26.749823 systemd[1]: kubelet.service: Consumed 106ms CPU time, 108.5M memory peak. Oct 31 14:10:32.617208 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 31 14:10:32.618820 systemd[1]: Started sshd@0-139.178.70.103:22-139.178.68.195:38196.service - OpenSSH per-connection server daemon (139.178.68.195:38196). Oct 31 14:10:32.678771 sshd[1926]: Accepted publickey for core from 139.178.68.195 port 38196 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:10:32.679544 sshd-session[1926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:10:32.682354 systemd-logind[1656]: New session 3 of user core. Oct 31 14:10:32.691986 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 31 14:10:32.705982 systemd[1]: Started sshd@1-139.178.70.103:22-139.178.68.195:38202.service - OpenSSH per-connection server daemon (139.178.68.195:38202). Oct 31 14:10:32.748068 sshd[1932]: Accepted publickey for core from 139.178.68.195 port 38202 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:10:32.749202 sshd-session[1932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:10:32.752923 systemd-logind[1656]: New session 4 of user core. Oct 31 14:10:32.765889 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 31 14:10:32.772551 sshd[1935]: Connection closed by 139.178.68.195 port 38202 Oct 31 14:10:32.773191 sshd-session[1932]: pam_unix(sshd:session): session closed for user core Oct 31 14:10:32.780942 systemd[1]: sshd@1-139.178.70.103:22-139.178.68.195:38202.service: Deactivated successfully. Oct 31 14:10:32.781711 systemd[1]: session-4.scope: Deactivated successfully. Oct 31 14:10:32.782105 systemd-logind[1656]: Session 4 logged out. Waiting for processes to exit. Oct 31 14:10:32.783515 systemd[1]: Started sshd@2-139.178.70.103:22-139.178.68.195:38206.service - OpenSSH per-connection server daemon (139.178.68.195:38206). Oct 31 14:10:32.783934 systemd-logind[1656]: Removed session 4. Oct 31 14:10:32.825650 sshd[1941]: Accepted publickey for core from 139.178.68.195 port 38206 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:10:32.826364 sshd-session[1941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:10:32.829668 systemd-logind[1656]: New session 5 of user core. Oct 31 14:10:32.839964 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 31 14:10:32.844582 sshd[1944]: Connection closed by 139.178.68.195 port 38206 Oct 31 14:10:32.844897 sshd-session[1941]: pam_unix(sshd:session): session closed for user core Oct 31 14:10:32.855923 systemd[1]: sshd@2-139.178.70.103:22-139.178.68.195:38206.service: Deactivated successfully. Oct 31 14:10:32.856936 systemd[1]: session-5.scope: Deactivated successfully. Oct 31 14:10:32.857452 systemd-logind[1656]: Session 5 logged out. Waiting for processes to exit. Oct 31 14:10:32.858953 systemd[1]: Started sshd@3-139.178.70.103:22-139.178.68.195:38218.service - OpenSSH per-connection server daemon (139.178.68.195:38218). Oct 31 14:10:32.859580 systemd-logind[1656]: Removed session 5. Oct 31 14:10:32.896374 sshd[1950]: Accepted publickey for core from 139.178.68.195 port 38218 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:10:32.897247 sshd-session[1950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:10:32.900558 systemd-logind[1656]: New session 6 of user core. Oct 31 14:10:32.909930 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 31 14:10:32.917066 sshd[1953]: Connection closed by 139.178.68.195 port 38218 Oct 31 14:10:32.917837 sshd-session[1950]: pam_unix(sshd:session): session closed for user core Oct 31 14:10:32.921349 systemd[1]: sshd@3-139.178.70.103:22-139.178.68.195:38218.service: Deactivated successfully. Oct 31 14:10:32.922288 systemd[1]: session-6.scope: Deactivated successfully. Oct 31 14:10:32.922850 systemd-logind[1656]: Session 6 logged out. Waiting for processes to exit. Oct 31 14:10:32.924534 systemd[1]: Started sshd@4-139.178.70.103:22-139.178.68.195:38234.service - OpenSSH per-connection server daemon (139.178.68.195:38234). Oct 31 14:10:32.925226 systemd-logind[1656]: Removed session 6. Oct 31 14:10:32.962352 sshd[1959]: Accepted publickey for core from 139.178.68.195 port 38234 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:10:32.963119 sshd-session[1959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:10:32.965735 systemd-logind[1656]: New session 7 of user core. Oct 31 14:10:32.975881 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 31 14:10:33.021405 sudo[1963]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 31 14:10:33.021864 sudo[1963]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 14:10:33.039223 sudo[1963]: pam_unix(sudo:session): session closed for user root Oct 31 14:10:33.040385 sshd[1962]: Connection closed by 139.178.68.195 port 38234 Oct 31 14:10:33.040770 sshd-session[1959]: pam_unix(sshd:session): session closed for user core Oct 31 14:10:33.047468 systemd[1]: sshd@4-139.178.70.103:22-139.178.68.195:38234.service: Deactivated successfully. Oct 31 14:10:33.048980 systemd[1]: session-7.scope: Deactivated successfully. Oct 31 14:10:33.049574 systemd-logind[1656]: Session 7 logged out. Waiting for processes to exit. Oct 31 14:10:33.051611 systemd[1]: Started sshd@5-139.178.70.103:22-139.178.68.195:55684.service - OpenSSH per-connection server daemon (139.178.68.195:55684). Oct 31 14:10:33.052257 systemd-logind[1656]: Removed session 7. 
Oct 31 14:10:33.092215 sshd[1969]: Accepted publickey for core from 139.178.68.195 port 55684 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:10:33.093007 sshd-session[1969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:10:33.096419 systemd-logind[1656]: New session 8 of user core. Oct 31 14:10:33.105888 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 31 14:10:33.113172 sudo[1974]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 31 14:10:33.113325 sudo[1974]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 14:10:33.115562 sudo[1974]: pam_unix(sudo:session): session closed for user root Oct 31 14:10:33.119150 sudo[1973]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 31 14:10:33.119300 sudo[1973]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 14:10:33.125674 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 31 14:10:33.152713 augenrules[1996]: No rules Oct 31 14:10:33.152305 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 14:10:33.152478 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 31 14:10:33.153281 sudo[1973]: pam_unix(sudo:session): session closed for user root Oct 31 14:10:33.154811 sshd[1972]: Connection closed by 139.178.68.195 port 55684 Oct 31 14:10:33.154225 sshd-session[1969]: pam_unix(sshd:session): session closed for user core Oct 31 14:10:33.156607 systemd[1]: sshd@5-139.178.70.103:22-139.178.68.195:55684.service: Deactivated successfully. Oct 31 14:10:33.157635 systemd[1]: session-8.scope: Deactivated successfully. Oct 31 14:10:33.168516 systemd-logind[1656]: Session 8 logged out. Waiting for processes to exit. Oct 31 14:10:33.169477 systemd[1]: Started sshd@6-139.178.70.103:22-139.178.68.195:55698.service - OpenSSH per-connection server daemon (139.178.68.195:55698). Oct 31 14:10:33.170276 systemd-logind[1656]: Removed session 8. Oct 31 14:10:33.204201 sshd[2006]: Accepted publickey for core from 139.178.68.195 port 55698 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:10:33.204946 sshd-session[2006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:10:33.208226 systemd-logind[1656]: New session 9 of user core. Oct 31 14:10:33.215962 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 31 14:10:33.224482 sudo[2010]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 31 14:10:33.224883 sudo[2010]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 14:10:33.647439 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Oct 31 14:10:33.663265 (dockerd)[2028]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 31 14:10:34.208075 dockerd[2028]: time="2025-10-31T14:10:34.208042421Z" level=info msg="Starting up" Oct 31 14:10:34.208778 dockerd[2028]: time="2025-10-31T14:10:34.208765981Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 31 14:10:34.217163 dockerd[2028]: time="2025-10-31T14:10:34.217131991Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 31 14:10:34.325321 systemd[1]: var-lib-docker-metacopy\x2dcheck2122692565-merged.mount: Deactivated successfully. Oct 31 14:10:34.344186 dockerd[2028]: time="2025-10-31T14:10:34.344155611Z" level=info msg="Loading containers: start." Oct 31 14:10:34.352812 kernel: Initializing XFRM netlink socket Oct 31 14:10:34.722387 systemd-networkd[1576]: docker0: Link UP Oct 31 14:10:34.762876 dockerd[2028]: time="2025-10-31T14:10:34.762818048Z" level=info msg="Loading containers: done." Oct 31 14:10:34.840106 dockerd[2028]: time="2025-10-31T14:10:34.840071434Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 31 14:10:34.840210 dockerd[2028]: time="2025-10-31T14:10:34.840135794Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 31 14:10:34.840210 dockerd[2028]: time="2025-10-31T14:10:34.840192906Z" level=info msg="Initializing buildkit" Oct 31 14:10:34.941033 dockerd[2028]: time="2025-10-31T14:10:34.941003145Z" level=info msg="Completed buildkit initialization" Oct 31 14:10:34.946401 dockerd[2028]: time="2025-10-31T14:10:34.946368675Z" level=info msg="Daemon has completed initialization" Oct 31 14:10:34.946465 dockerd[2028]: time="2025-10-31T14:10:34.946421856Z" level=info msg="API listen on /run/docker.sock" Oct 31 14:10:34.946597 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 31 14:10:36.142407 containerd[1690]: time="2025-10-31T14:10:36.142341612Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 31 14:10:36.848002 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 31 14:10:36.849413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 14:10:36.863048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1688816241.mount: Deactivated successfully. Oct 31 14:10:37.122445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 14:10:37.133108 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 14:10:37.176619 kubelet[2251]: E1031 14:10:37.176585 2251 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 14:10:37.178528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 14:10:37.178619 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 31 14:10:37.179063 systemd[1]: kubelet.service: Consumed 112ms CPU time, 110.6M memory peak. Oct 31 14:10:38.618749 containerd[1690]: time="2025-10-31T14:10:38.618713278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:38.619785 containerd[1690]: time="2025-10-31T14:10:38.619755732Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Oct 31 14:10:38.620058 containerd[1690]: time="2025-10-31T14:10:38.620034464Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:38.622144 containerd[1690]: time="2025-10-31T14:10:38.622125380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:38.623500 containerd[1690]: time="2025-10-31T14:10:38.623476461Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.481105899s" Oct 31 14:10:38.623543 containerd[1690]: time="2025-10-31T14:10:38.623506653Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Oct 31 14:10:38.624066 containerd[1690]: time="2025-10-31T14:10:38.623848185Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 31 14:10:41.068148 containerd[1690]: time="2025-10-31T14:10:41.068095926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:41.069261 containerd[1690]: time="2025-10-31T14:10:41.069197757Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Oct 31 14:10:41.069651 containerd[1690]: time="2025-10-31T14:10:41.069555443Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:41.071677 containerd[1690]: time="2025-10-31T14:10:41.071657654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:41.073166 containerd[1690]: time="2025-10-31T14:10:41.073079008Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.449212556s" Oct 31 14:10:41.073166 containerd[1690]: time="2025-10-31T14:10:41.073103195Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference 
\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Oct 31 14:10:41.073423 containerd[1690]: time="2025-10-31T14:10:41.073400471Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 31 14:10:42.171459 containerd[1690]: time="2025-10-31T14:10:42.170832895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:42.179179 containerd[1690]: time="2025-10-31T14:10:42.179149406Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Oct 31 14:10:42.186856 containerd[1690]: time="2025-10-31T14:10:42.186835746Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:42.201370 containerd[1690]: time="2025-10-31T14:10:42.201353360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:42.202083 containerd[1690]: time="2025-10-31T14:10:42.202061057Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.128636523s" Oct 31 14:10:42.202127 containerd[1690]: time="2025-10-31T14:10:42.202083860Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Oct 31 14:10:42.202550 containerd[1690]: time="2025-10-31T14:10:42.202534220Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 31 14:10:43.152592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1355804771.mount: Deactivated successfully. 
Oct 31 14:10:43.834192 containerd[1690]: time="2025-10-31T14:10:43.834166019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:43.838858 containerd[1690]: time="2025-10-31T14:10:43.838818508Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Oct 31 14:10:43.846042 containerd[1690]: time="2025-10-31T14:10:43.845333077Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:43.853029 containerd[1690]: time="2025-10-31T14:10:43.853004963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:43.853474 containerd[1690]: time="2025-10-31T14:10:43.853452244Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.650897986s" Oct 31 14:10:43.853515 containerd[1690]: time="2025-10-31T14:10:43.853474976Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Oct 31 14:10:43.853756 containerd[1690]: time="2025-10-31T14:10:43.853727743Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 31 14:10:44.766872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1540915240.mount: Deactivated successfully. 
Oct 31 14:10:45.546818 containerd[1690]: time="2025-10-31T14:10:45.546748590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:45.551269 containerd[1690]: time="2025-10-31T14:10:45.551246779Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Oct 31 14:10:45.556940 containerd[1690]: time="2025-10-31T14:10:45.556914417Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:45.559451 containerd[1690]: time="2025-10-31T14:10:45.559422981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:45.560343 containerd[1690]: time="2025-10-31T14:10:45.560103271Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.706354437s" Oct 31 14:10:45.560343 containerd[1690]: time="2025-10-31T14:10:45.560121583Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Oct 31 14:10:45.560616 containerd[1690]: time="2025-10-31T14:10:45.560524243Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 31 14:10:46.206374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3667225522.mount: Deactivated successfully. 
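Editor's note: each "Pulled image ... size X in Y" entry above carries enough to estimate effective pull throughput (e.g. kube-proxy: 31,928,488 bytes in ~1.65 s, roughly 18 MiB/s). A small, hypothetical helper (not part of the boot) that extracts size and duration from such lines:

    import re

    # Matches the tail of containerd's "Pulled image" messages, with or without
    # the escaped quotes seen in the journal, e.g.:
    #   size \"31928488\" in 1.650897986s
    #   size \"320368\" in 652.326736ms
    PULLED = re.compile(r'size \\?"(\d+)\\?" in ([\d.]+)(ms|s)')

    def throughput_mib_s(line: str):
        m = PULLED.search(line)
        if not m:
            return None
        size_bytes = int(m.group(1))
        value, unit = float(m.group(2)), m.group(3)
        seconds = value / 1000.0 if unit == "ms" else value
        return size_bytes / seconds / (1024 * 1024)

    sample = 'Pulled image "registry.k8s.io/kube-proxy:v1.33.5" ... size "31928488" in 1.650897986s'
    print(round(throughput_mib_s(sample), 1))  # ~18.4 MiB/s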
Oct 31 14:10:46.209438 containerd[1690]: time="2025-10-31T14:10:46.209414082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 14:10:46.209902 containerd[1690]: time="2025-10-31T14:10:46.209879096Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 31 14:10:46.210024 containerd[1690]: time="2025-10-31T14:10:46.209997584Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 14:10:46.212451 containerd[1690]: time="2025-10-31T14:10:46.212434862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 14:10:46.212951 containerd[1690]: time="2025-10-31T14:10:46.212877568Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 652.326736ms" Oct 31 14:10:46.212951 containerd[1690]: time="2025-10-31T14:10:46.212893215Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 31 14:10:46.213226 containerd[1690]: time="2025-10-31T14:10:46.213211736Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 31 14:10:46.745511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2598187428.mount: Deactivated successfully. Oct 31 14:10:47.428930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 31 14:10:47.429970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 14:10:48.090583 update_engine[1657]: I20251031 14:10:48.090538 1657 update_attempter.cc:509] Updating boot flags... Oct 31 14:10:48.689932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 14:10:48.696995 (kubelet)[2458]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 14:10:48.976000 kubelet[2458]: E1031 14:10:48.975479 2458 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 14:10:48.977723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 14:10:48.977820 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 14:10:48.978091 systemd[1]: kubelet.service: Consumed 99ms CPU time, 107.9M memory peak. 
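Editor's note: the kubelet.service failure above (restart counter 4, exit status 1) is expected at this stage: the unit starts before kubeadm has written /var/lib/kubelet/config.yaml, so the kubelet exits and systemd keeps restarting it until that file exists. A hypothetical pre-flight sketch that reports which of the usual kubeadm-managed files are still missing; paths come from the log and the standard kubeadm layout, and the helper itself is illustrative, not part of the boot:

    from pathlib import Path

    # Files the log shows the kubelet looking for, plus the usual kubeadm companions.
    EXPECTED = [
        Path("/var/lib/kubelet/config.yaml"),  # KubeletConfiguration written by kubeadm
        Path("/etc/kubernetes/kubelet.conf"),  # kubelet kubeconfig (assumed kubeadm default path)
        Path("/etc/kubernetes/pki/ca.crt"),    # client CA bundle referenced in the log
        Path("/etc/kubernetes/manifests"),     # static pod path referenced in the log
    ]

    def missing_files() -> list:
        return [str(p) for p in EXPECTED if not p.exists()]

    if __name__ == "__main__":
        gaps = missing_files()
        print("kubelet prerequisites missing:" if gaps else "all kubelet prerequisites present")
        for path in gaps:
            print(" -", path)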
Oct 31 14:10:49.792882 containerd[1690]: time="2025-10-31T14:10:49.792843134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:49.793756 containerd[1690]: time="2025-10-31T14:10:49.793732037Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Oct 31 14:10:49.794402 containerd[1690]: time="2025-10-31T14:10:49.794384779Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:49.796452 containerd[1690]: time="2025-10-31T14:10:49.796424783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:10:49.797114 containerd[1690]: time="2025-10-31T14:10:49.796748072Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.583440104s" Oct 31 14:10:49.797114 containerd[1690]: time="2025-10-31T14:10:49.796765566Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 31 14:10:52.891170 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 14:10:52.891514 systemd[1]: kubelet.service: Consumed 99ms CPU time, 107.9M memory peak. Oct 31 14:10:52.895985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 14:10:52.917583 systemd[1]: Reload requested from client PID 2504 ('systemctl') (unit session-9.scope)... Oct 31 14:10:52.917680 systemd[1]: Reloading... Oct 31 14:10:52.997811 zram_generator::config[2548]: No configuration found. Oct 31 14:10:53.066636 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 31 14:10:53.135866 systemd[1]: Reloading finished in 217 ms. Oct 31 14:10:53.163643 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 31 14:10:53.163817 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 31 14:10:53.164034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 14:10:53.164122 systemd[1]: kubelet.service: Consumed 52ms CPU time, 78M memory peak. Oct 31 14:10:53.165600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 14:10:53.574370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 14:10:53.577025 (kubelet)[2615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 14:10:53.647713 kubelet[2615]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 14:10:53.647713 kubelet[2615]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Oct 31 14:10:53.647713 kubelet[2615]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 14:10:53.653061 kubelet[2615]: I1031 14:10:53.652872 2615 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 14:10:53.930058 kubelet[2615]: I1031 14:10:53.929992 2615 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 31 14:10:53.930058 kubelet[2615]: I1031 14:10:53.930013 2615 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 14:10:53.930328 kubelet[2615]: I1031 14:10:53.930187 2615 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 14:10:53.964919 kubelet[2615]: I1031 14:10:53.964218 2615 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 14:10:53.967133 kubelet[2615]: E1031 14:10:53.967103 2615 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 31 14:10:53.992771 kubelet[2615]: I1031 14:10:53.992751 2615 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 31 14:10:54.000315 kubelet[2615]: I1031 14:10:54.000295 2615 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 14:10:54.003686 kubelet[2615]: I1031 14:10:54.003656 2615 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 14:10:54.006170 kubelet[2615]: I1031 14:10:54.003754 2615 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 14:10:54.006282 kubelet[2615]: I1031 14:10:54.006274 2615 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 14:10:54.006324 kubelet[2615]: I1031 14:10:54.006319 2615 container_manager_linux.go:303] "Creating device plugin manager" Oct 31 14:10:54.007039 kubelet[2615]: I1031 14:10:54.007031 2615 state_mem.go:36] "Initialized new in-memory state store" Oct 31 14:10:54.009890 kubelet[2615]: I1031 14:10:54.009881 2615 kubelet.go:480] "Attempting to sync node with API server" Oct 31 14:10:54.009957 kubelet[2615]: I1031 14:10:54.009949 2615 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 14:10:54.010004 kubelet[2615]: I1031 14:10:54.010000 2615 kubelet.go:386] "Adding apiserver pod source" Oct 31 14:10:54.011422 kubelet[2615]: I1031 14:10:54.011414 2615 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 14:10:54.017450 kubelet[2615]: E1031 14:10:54.017392 2615 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 14:10:54.019846 kubelet[2615]: E1031 14:10:54.019754 2615 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 
31 14:10:54.019846 kubelet[2615]: I1031 14:10:54.019833 2615 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 31 14:10:54.020168 kubelet[2615]: I1031 14:10:54.020151 2615 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 14:10:54.022294 kubelet[2615]: W1031 14:10:54.022276 2615 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 31 14:10:54.042067 kubelet[2615]: I1031 14:10:54.042044 2615 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 14:10:54.042153 kubelet[2615]: I1031 14:10:54.042098 2615 server.go:1289] "Started kubelet" Oct 31 14:10:54.050720 kubelet[2615]: I1031 14:10:54.050677 2615 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 14:10:54.050976 kubelet[2615]: I1031 14:10:54.050904 2615 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 14:10:54.055832 kubelet[2615]: I1031 14:10:54.055813 2615 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 14:10:54.058253 kubelet[2615]: I1031 14:10:54.057977 2615 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 14:10:54.058253 kubelet[2615]: E1031 14:10:54.058056 2615 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 14:10:54.059125 kubelet[2615]: I1031 14:10:54.058703 2615 server.go:317] "Adding debug handlers to kubelet server" Oct 31 14:10:54.063055 kubelet[2615]: I1031 14:10:54.063009 2615 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 14:10:54.063308 kubelet[2615]: I1031 14:10:54.063296 2615 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 14:10:54.067058 kubelet[2615]: I1031 14:10:54.067045 2615 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 14:10:54.067169 kubelet[2615]: I1031 14:10:54.067162 2615 reconciler.go:26] "Reconciler: start to sync state" Oct 31 14:10:54.067458 kubelet[2615]: E1031 14:10:54.064674 2615 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.103:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187398c5aee2a46e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 14:10:54.042063982 +0000 UTC m=+0.462516762,LastTimestamp:2025-10-31 14:10:54.042063982 +0000 UTC m=+0.462516762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 31 14:10:54.069881 kubelet[2615]: I1031 14:10:54.069859 2615 factory.go:223] Registration of the systemd container factory successfully Oct 31 14:10:54.069941 kubelet[2615]: I1031 14:10:54.069932 2615 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Oct 31 14:10:54.072669 kubelet[2615]: I1031 14:10:54.072654 2615 factory.go:223] Registration of the containerd container factory successfully Oct 31 14:10:54.073736 kubelet[2615]: E1031 14:10:54.073711 2615 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="200ms" Oct 31 14:10:54.085918 kubelet[2615]: I1031 14:10:54.085894 2615 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 31 14:10:54.086514 kubelet[2615]: I1031 14:10:54.086504 2615 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 31 14:10:54.086538 kubelet[2615]: I1031 14:10:54.086522 2615 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 31 14:10:54.086557 kubelet[2615]: I1031 14:10:54.086547 2615 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 14:10:54.086557 kubelet[2615]: I1031 14:10:54.086553 2615 kubelet.go:2436] "Starting kubelet main sync loop" Oct 31 14:10:54.086586 kubelet[2615]: E1031 14:10:54.086576 2615 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 14:10:54.091416 kubelet[2615]: E1031 14:10:54.091390 2615 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 14:10:54.091815 kubelet[2615]: E1031 14:10:54.091771 2615 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 14:10:54.095753 kubelet[2615]: I1031 14:10:54.095680 2615 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 14:10:54.095753 kubelet[2615]: I1031 14:10:54.095689 2615 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 14:10:54.095753 kubelet[2615]: I1031 14:10:54.095699 2615 state_mem.go:36] "Initialized new in-memory state store" Oct 31 14:10:54.096741 kubelet[2615]: I1031 14:10:54.096730 2615 policy_none.go:49] "None policy: Start" Oct 31 14:10:54.096741 kubelet[2615]: I1031 14:10:54.096742 2615 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 14:10:54.096807 kubelet[2615]: I1031 14:10:54.096748 2615 state_mem.go:35] "Initializing new in-memory state store" Oct 31 14:10:54.102770 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 31 14:10:54.111634 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 31 14:10:54.114501 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
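Editor's note: the repeated "dial tcp 139.178.70.103:6443: connect: connection refused" errors above simply mean the kube-apiserver static pod is not serving yet; the kubelet keeps retrying its watches and lease updates until it is. A stand-alone probe (hypothetical helper) that waits for the same condition, i.e. the port accepting TCP connections:

    import socket
    import time

    def wait_for_port(host: str, port: int, timeout_s: float = 120.0, interval_s: float = 2.0) -> bool:
        """Poll host:port with plain TCP connects until one succeeds or the timeout expires."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=interval_s):
                    return True
            except OSError:
                time.sleep(interval_s)
        return False

    # Address taken from the log. A successful connect only means something is
    # listening on 6443, not that the API server is healthy.
    print(wait_for_port("139.178.70.103", 6443))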
Oct 31 14:10:54.123542 kubelet[2615]: E1031 14:10:54.123526 2615 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 14:10:54.123949 kubelet[2615]: I1031 14:10:54.123932 2615 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 14:10:54.124075 kubelet[2615]: I1031 14:10:54.124003 2615 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 14:10:54.124694 kubelet[2615]: I1031 14:10:54.124653 2615 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 14:10:54.125333 kubelet[2615]: E1031 14:10:54.125319 2615 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 14:10:54.125365 kubelet[2615]: E1031 14:10:54.125346 2615 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 31 14:10:54.193776 systemd[1]: Created slice kubepods-burstable-pod23ff583ac85c4a53014495eb0aca3652.slice - libcontainer container kubepods-burstable-pod23ff583ac85c4a53014495eb0aca3652.slice. Oct 31 14:10:54.204405 kubelet[2615]: E1031 14:10:54.204373 2615 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:10:54.206959 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Oct 31 14:10:54.214684 kubelet[2615]: E1031 14:10:54.214562 2615 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:10:54.217143 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
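Editor's note: the container manager config dumped earlier lists the default hard-eviction thresholds the eviction manager (started above) enforces: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A sketch that evaluates those thresholds against made-up sample stats (the sample numbers are hypothetical):

    # Hard-eviction thresholds as dumped in the container manager nodeConfig above.
    THRESHOLDS = {
        "memory.available":   ("quantity", 100 * 1024 * 1024),  # 100Mi in bytes
        "nodefs.available":   ("percentage", 0.10),
        "nodefs.inodesFree":  ("percentage", 0.05),
        "imagefs.available":  ("percentage", 0.15),
        "imagefs.inodesFree": ("percentage", 0.05),
    }

    def breached(signal: str, available: float, capacity: float) -> bool:
        """Return True if a signal is below its hard-eviction threshold."""
        kind, value = THRESHOLDS[signal]
        limit = value if kind == "quantity" else value * capacity
        return available < limit

    # Hypothetical 3 GiB node with 80 MiB of memory left -> eviction condition met.
    print(breached("memory.available", 80 * 1024**2, 3 * 1024**3))   # True
    # Hypothetical root fs with 15% free -> above the 10% threshold, not breached.
    print(breached("nodefs.available", 6 * 1024**3, 40 * 1024**3))   # False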
Oct 31 14:10:54.218296 kubelet[2615]: E1031 14:10:54.218184 2615 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:10:54.226329 kubelet[2615]: I1031 14:10:54.226310 2615 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 14:10:54.226679 kubelet[2615]: E1031 14:10:54.226662 2615 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Oct 31 14:10:54.269071 kubelet[2615]: I1031 14:10:54.269043 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 14:10:54.269071 kubelet[2615]: I1031 14:10:54.269069 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 14:10:54.269237 kubelet[2615]: I1031 14:10:54.269082 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 14:10:54.269237 kubelet[2615]: I1031 14:10:54.269091 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23ff583ac85c4a53014495eb0aca3652-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"23ff583ac85c4a53014495eb0aca3652\") " pod="kube-system/kube-apiserver-localhost" Oct 31 14:10:54.269237 kubelet[2615]: I1031 14:10:54.269106 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23ff583ac85c4a53014495eb0aca3652-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"23ff583ac85c4a53014495eb0aca3652\") " pod="kube-system/kube-apiserver-localhost" Oct 31 14:10:54.269237 kubelet[2615]: I1031 14:10:54.269115 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 14:10:54.269237 kubelet[2615]: I1031 14:10:54.269125 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 14:10:54.269317 kubelet[2615]: I1031 14:10:54.269133 2615 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 31 14:10:54.269317 kubelet[2615]: I1031 14:10:54.269141 2615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23ff583ac85c4a53014495eb0aca3652-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"23ff583ac85c4a53014495eb0aca3652\") " pod="kube-system/kube-apiserver-localhost" Oct 31 14:10:54.274489 kubelet[2615]: E1031 14:10:54.274466 2615 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="400ms" Oct 31 14:10:54.428163 kubelet[2615]: I1031 14:10:54.428141 2615 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 14:10:54.428474 kubelet[2615]: E1031 14:10:54.428460 2615 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Oct 31 14:10:54.505951 containerd[1690]: time="2025-10-31T14:10:54.505605690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:23ff583ac85c4a53014495eb0aca3652,Namespace:kube-system,Attempt:0,}" Oct 31 14:10:54.522291 containerd[1690]: time="2025-10-31T14:10:54.522172443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Oct 31 14:10:54.522443 containerd[1690]: time="2025-10-31T14:10:54.522428163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Oct 31 14:10:54.675317 kubelet[2615]: E1031 14:10:54.675289 2615 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="800ms" Oct 31 14:10:54.808779 containerd[1690]: time="2025-10-31T14:10:54.808151177Z" level=info msg="connecting to shim 9d3906abfa659f9f74c1ee0106472b0875beef41467796166626c427371b0264" address="unix:///run/containerd/s/d7a2a9caa80ab2a02e71c7a6309c6887dc985da0c21ab59d1f385805e1ba00b7" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:10:54.808779 containerd[1690]: time="2025-10-31T14:10:54.808543925Z" level=info msg="connecting to shim 57712e746015de8c495c8d0c0db3bcb54a6339fd5152e5fa1b1c400e8eb7ff0f" address="unix:///run/containerd/s/a44d423efeda34d0ab3a44fbf8da1088f7c533e22808066d200cc49b346928c0" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:10:54.810055 containerd[1690]: time="2025-10-31T14:10:54.810035946Z" level=info msg="connecting to shim c66298dbe34c98b6696ba27d4ec61caf4fb9e65ae62ee919a726753cc3374cb9" address="unix:///run/containerd/s/d285a42dc0b437473e571fd515fe426465285636102715b64d1f40606af831bc" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:10:54.830112 kubelet[2615]: I1031 14:10:54.830096 2615 kubelet_node_status.go:75] "Attempting to register node" 
node="localhost" Oct 31 14:10:54.830410 kubelet[2615]: E1031 14:10:54.830397 2615 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Oct 31 14:10:54.915940 kubelet[2615]: E1031 14:10:54.915597 2615 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.103:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187398c5aee2a46e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 14:10:54.042063982 +0000 UTC m=+0.462516762,LastTimestamp:2025-10-31 14:10:54.042063982 +0000 UTC m=+0.462516762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 31 14:10:54.973933 systemd[1]: Started cri-containerd-57712e746015de8c495c8d0c0db3bcb54a6339fd5152e5fa1b1c400e8eb7ff0f.scope - libcontainer container 57712e746015de8c495c8d0c0db3bcb54a6339fd5152e5fa1b1c400e8eb7ff0f. Oct 31 14:10:54.978929 systemd[1]: Started cri-containerd-9d3906abfa659f9f74c1ee0106472b0875beef41467796166626c427371b0264.scope - libcontainer container 9d3906abfa659f9f74c1ee0106472b0875beef41467796166626c427371b0264. Oct 31 14:10:54.980290 kubelet[2615]: E1031 14:10:54.979997 2615 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 14:10:54.980441 systemd[1]: Started cri-containerd-c66298dbe34c98b6696ba27d4ec61caf4fb9e65ae62ee919a726753cc3374cb9.scope - libcontainer container c66298dbe34c98b6696ba27d4ec61caf4fb9e65ae62ee919a726753cc3374cb9. 
Oct 31 14:10:55.042477 containerd[1690]: time="2025-10-31T14:10:55.042448417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d3906abfa659f9f74c1ee0106472b0875beef41467796166626c427371b0264\"" Oct 31 14:10:55.049145 containerd[1690]: time="2025-10-31T14:10:55.048707212Z" level=info msg="CreateContainer within sandbox \"9d3906abfa659f9f74c1ee0106472b0875beef41467796166626c427371b0264\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 14:10:55.066026 containerd[1690]: time="2025-10-31T14:10:55.065481316Z" level=info msg="Container 450a2df8d7218dabff6bdb78f35c3ba799f509308dc70375bc65297c9e93aa7e: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:10:55.074527 kubelet[2615]: E1031 14:10:55.074495 2615 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 14:10:55.075169 containerd[1690]: time="2025-10-31T14:10:55.074959801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"57712e746015de8c495c8d0c0db3bcb54a6339fd5152e5fa1b1c400e8eb7ff0f\"" Oct 31 14:10:55.095590 containerd[1690]: time="2025-10-31T14:10:55.095546265Z" level=info msg="CreateContainer within sandbox \"57712e746015de8c495c8d0c0db3bcb54a6339fd5152e5fa1b1c400e8eb7ff0f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 14:10:55.151710 containerd[1690]: time="2025-10-31T14:10:55.151601743Z" level=info msg="CreateContainer within sandbox \"9d3906abfa659f9f74c1ee0106472b0875beef41467796166626c427371b0264\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"450a2df8d7218dabff6bdb78f35c3ba799f509308dc70375bc65297c9e93aa7e\"" Oct 31 14:10:55.152143 containerd[1690]: time="2025-10-31T14:10:55.152117985Z" level=info msg="StartContainer for \"450a2df8d7218dabff6bdb78f35c3ba799f509308dc70375bc65297c9e93aa7e\"" Oct 31 14:10:55.154509 containerd[1690]: time="2025-10-31T14:10:55.154488016Z" level=info msg="connecting to shim 450a2df8d7218dabff6bdb78f35c3ba799f509308dc70375bc65297c9e93aa7e" address="unix:///run/containerd/s/d7a2a9caa80ab2a02e71c7a6309c6887dc985da0c21ab59d1f385805e1ba00b7" protocol=ttrpc version=3 Oct 31 14:10:55.155724 containerd[1690]: time="2025-10-31T14:10:55.155673234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:23ff583ac85c4a53014495eb0aca3652,Namespace:kube-system,Attempt:0,} returns sandbox id \"c66298dbe34c98b6696ba27d4ec61caf4fb9e65ae62ee919a726753cc3374cb9\"" Oct 31 14:10:55.170905 systemd[1]: Started cri-containerd-450a2df8d7218dabff6bdb78f35c3ba799f509308dc70375bc65297c9e93aa7e.scope - libcontainer container 450a2df8d7218dabff6bdb78f35c3ba799f509308dc70375bc65297c9e93aa7e. 
Oct 31 14:10:55.182591 containerd[1690]: time="2025-10-31T14:10:55.182567627Z" level=info msg="CreateContainer within sandbox \"c66298dbe34c98b6696ba27d4ec61caf4fb9e65ae62ee919a726753cc3374cb9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 14:10:55.199918 kubelet[2615]: E1031 14:10:55.199888 2615 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 14:10:55.231804 containerd[1690]: time="2025-10-31T14:10:55.231771431Z" level=info msg="StartContainer for \"450a2df8d7218dabff6bdb78f35c3ba799f509308dc70375bc65297c9e93aa7e\" returns successfully" Oct 31 14:10:55.277672 containerd[1690]: time="2025-10-31T14:10:55.277648155Z" level=info msg="Container d65c9f9205af62c7510789fbcd26df9433bd185a833a6d9fc98fc3382582fc99: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:10:55.339561 containerd[1690]: time="2025-10-31T14:10:55.339478601Z" level=info msg="Container df4658d30a27ec107d4bc3b8fcac38aed03cee7b648f17a823a7a17aac3096d8: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:10:55.374197 kubelet[2615]: E1031 14:10:55.374088 2615 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 14:10:55.376167 containerd[1690]: time="2025-10-31T14:10:55.376064822Z" level=info msg="CreateContainer within sandbox \"57712e746015de8c495c8d0c0db3bcb54a6339fd5152e5fa1b1c400e8eb7ff0f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"df4658d30a27ec107d4bc3b8fcac38aed03cee7b648f17a823a7a17aac3096d8\"" Oct 31 14:10:55.376496 containerd[1690]: time="2025-10-31T14:10:55.376477951Z" level=info msg="StartContainer for \"df4658d30a27ec107d4bc3b8fcac38aed03cee7b648f17a823a7a17aac3096d8\"" Oct 31 14:10:55.376707 containerd[1690]: time="2025-10-31T14:10:55.376541658Z" level=info msg="CreateContainer within sandbox \"c66298dbe34c98b6696ba27d4ec61caf4fb9e65ae62ee919a726753cc3374cb9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d65c9f9205af62c7510789fbcd26df9433bd185a833a6d9fc98fc3382582fc99\"" Oct 31 14:10:55.377307 containerd[1690]: time="2025-10-31T14:10:55.377295205Z" level=info msg="connecting to shim df4658d30a27ec107d4bc3b8fcac38aed03cee7b648f17a823a7a17aac3096d8" address="unix:///run/containerd/s/a44d423efeda34d0ab3a44fbf8da1088f7c533e22808066d200cc49b346928c0" protocol=ttrpc version=3 Oct 31 14:10:55.377652 containerd[1690]: time="2025-10-31T14:10:55.377631587Z" level=info msg="StartContainer for \"d65c9f9205af62c7510789fbcd26df9433bd185a833a6d9fc98fc3382582fc99\"" Oct 31 14:10:55.379012 containerd[1690]: time="2025-10-31T14:10:55.378979725Z" level=info msg="connecting to shim d65c9f9205af62c7510789fbcd26df9433bd185a833a6d9fc98fc3382582fc99" address="unix:///run/containerd/s/d285a42dc0b437473e571fd515fe426465285636102715b64d1f40606af831bc" protocol=ttrpc version=3 Oct 31 14:10:55.399897 systemd[1]: Started cri-containerd-df4658d30a27ec107d4bc3b8fcac38aed03cee7b648f17a823a7a17aac3096d8.scope - libcontainer container 
df4658d30a27ec107d4bc3b8fcac38aed03cee7b648f17a823a7a17aac3096d8. Oct 31 14:10:55.402464 systemd[1]: Started cri-containerd-d65c9f9205af62c7510789fbcd26df9433bd185a833a6d9fc98fc3382582fc99.scope - libcontainer container d65c9f9205af62c7510789fbcd26df9433bd185a833a6d9fc98fc3382582fc99. Oct 31 14:10:55.452420 containerd[1690]: time="2025-10-31T14:10:55.452386353Z" level=info msg="StartContainer for \"d65c9f9205af62c7510789fbcd26df9433bd185a833a6d9fc98fc3382582fc99\" returns successfully" Oct 31 14:10:55.470989 containerd[1690]: time="2025-10-31T14:10:55.470961074Z" level=info msg="StartContainer for \"df4658d30a27ec107d4bc3b8fcac38aed03cee7b648f17a823a7a17aac3096d8\" returns successfully" Oct 31 14:10:55.476370 kubelet[2615]: E1031 14:10:55.476347 2615 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="1.6s" Oct 31 14:10:55.633248 kubelet[2615]: I1031 14:10:55.633184 2615 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 14:10:55.633517 kubelet[2615]: E1031 14:10:55.633451 2615 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Oct 31 14:10:56.101717 kubelet[2615]: E1031 14:10:56.101696 2615 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:10:56.104856 kubelet[2615]: E1031 14:10:56.104834 2615 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:10:56.105164 kubelet[2615]: E1031 14:10:56.105154 2615 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:10:57.107759 kubelet[2615]: E1031 14:10:57.107738 2615 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:10:57.108040 kubelet[2615]: E1031 14:10:57.108026 2615 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:10:57.108249 kubelet[2615]: E1031 14:10:57.108235 2615 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:10:57.236940 kubelet[2615]: I1031 14:10:57.236898 2615 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 14:10:57.290934 kubelet[2615]: E1031 14:10:57.290898 2615 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 31 14:10:57.376847 kubelet[2615]: I1031 14:10:57.376332 2615 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 14:10:57.376847 kubelet[2615]: E1031 14:10:57.376358 2615 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 31 14:10:57.467750 kubelet[2615]: I1031 14:10:57.467715 2615 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-localhost" Oct 31 14:10:57.471680 kubelet[2615]: E1031 14:10:57.471660 2615 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 31 14:10:57.471680 kubelet[2615]: I1031 14:10:57.471680 2615 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 14:10:57.472565 kubelet[2615]: E1031 14:10:57.472545 2615 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 31 14:10:57.472600 kubelet[2615]: I1031 14:10:57.472572 2615 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 14:10:57.473459 kubelet[2615]: E1031 14:10:57.473443 2615 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 31 14:10:58.020808 kubelet[2615]: I1031 14:10:58.020656 2615 apiserver.go:52] "Watching apiserver" Oct 31 14:10:58.067845 kubelet[2615]: I1031 14:10:58.067817 2615 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 14:10:58.107073 kubelet[2615]: I1031 14:10:58.107043 2615 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 14:10:58.108722 kubelet[2615]: E1031 14:10:58.108688 2615 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 31 14:10:59.090020 systemd[1]: Reload requested from client PID 2887 ('systemctl') (unit session-9.scope)... Oct 31 14:10:59.090033 systemd[1]: Reloading... Oct 31 14:10:59.139905 zram_generator::config[2931]: No configuration found. Oct 31 14:10:59.236743 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 31 14:10:59.318158 systemd[1]: Reloading finished in 227 ms. Oct 31 14:10:59.337070 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 14:10:59.352482 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 14:10:59.352725 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 14:10:59.352770 systemd[1]: kubelet.service: Consumed 619ms CPU time, 130.1M memory peak. Oct 31 14:10:59.355513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 14:10:59.958877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 14:10:59.965040 (kubelet)[2999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 14:11:00.078019 kubelet[2999]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 14:11:00.078202 kubelet[2999]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Oct 31 14:11:00.078229 kubelet[2999]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 14:11:00.086243 kubelet[2999]: I1031 14:11:00.086192 2999 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 14:11:00.110800 kubelet[2999]: I1031 14:11:00.110186 2999 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 31 14:11:00.110800 kubelet[2999]: I1031 14:11:00.110205 2999 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 14:11:00.110800 kubelet[2999]: I1031 14:11:00.110351 2999 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 14:11:00.119284 kubelet[2999]: I1031 14:11:00.119179 2999 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 31 14:11:00.127462 kubelet[2999]: I1031 14:11:00.126823 2999 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 14:11:00.139228 kubelet[2999]: I1031 14:11:00.139068 2999 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 31 14:11:00.151884 kubelet[2999]: I1031 14:11:00.151245 2999 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 31 14:11:00.151884 kubelet[2999]: I1031 14:11:00.151456 2999 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 14:11:00.151884 kubelet[2999]: I1031 14:11:00.151477 2999 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 14:11:00.152987 kubelet[2999]: I1031 14:11:00.152568 2999 topology_manager.go:138] 
"Creating topology manager with none policy" Oct 31 14:11:00.152987 kubelet[2999]: I1031 14:11:00.152586 2999 container_manager_linux.go:303] "Creating device plugin manager" Oct 31 14:11:00.152987 kubelet[2999]: I1031 14:11:00.152638 2999 state_mem.go:36] "Initialized new in-memory state store" Oct 31 14:11:00.156868 kubelet[2999]: I1031 14:11:00.155437 2999 kubelet.go:480] "Attempting to sync node with API server" Oct 31 14:11:00.156868 kubelet[2999]: I1031 14:11:00.155471 2999 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 14:11:00.156868 kubelet[2999]: I1031 14:11:00.156594 2999 kubelet.go:386] "Adding apiserver pod source" Oct 31 14:11:00.156868 kubelet[2999]: I1031 14:11:00.156611 2999 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 14:11:00.168489 kubelet[2999]: I1031 14:11:00.168445 2999 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 31 14:11:00.169013 kubelet[2999]: I1031 14:11:00.169002 2999 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 14:11:00.179073 kubelet[2999]: I1031 14:11:00.179060 2999 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 14:11:00.179224 kubelet[2999]: I1031 14:11:00.179183 2999 server.go:1289] "Started kubelet" Oct 31 14:11:00.179289 kubelet[2999]: I1031 14:11:00.179270 2999 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 14:11:00.179890 kubelet[2999]: I1031 14:11:00.179882 2999 server.go:317] "Adding debug handlers to kubelet server" Oct 31 14:11:00.187806 kubelet[2999]: I1031 14:11:00.187116 2999 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 14:11:00.187806 kubelet[2999]: I1031 14:11:00.187230 2999 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 14:11:00.207494 kubelet[2999]: I1031 14:11:00.207476 2999 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 14:11:00.208530 kubelet[2999]: E1031 14:11:00.208510 2999 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 14:11:00.208941 kubelet[2999]: I1031 14:11:00.208900 2999 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 14:11:00.218816 kubelet[2999]: I1031 14:11:00.218771 2999 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 14:11:00.219448 kubelet[2999]: I1031 14:11:00.219204 2999 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 14:11:00.219448 kubelet[2999]: I1031 14:11:00.219261 2999 reconciler.go:26] "Reconciler: start to sync state" Oct 31 14:11:00.222197 kubelet[2999]: I1031 14:11:00.222186 2999 factory.go:223] Registration of the systemd container factory successfully Oct 31 14:11:00.222411 kubelet[2999]: I1031 14:11:00.222400 2999 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 14:11:00.224449 kubelet[2999]: I1031 14:11:00.224433 2999 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Oct 31 14:11:00.225915 kubelet[2999]: I1031 14:11:00.225906 2999 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 31 14:11:00.230515 kubelet[2999]: I1031 14:11:00.230504 2999 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 31 14:11:00.230588 kubelet[2999]: I1031 14:11:00.230582 2999 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 14:11:00.230624 kubelet[2999]: I1031 14:11:00.230620 2999 kubelet.go:2436] "Starting kubelet main sync loop" Oct 31 14:11:00.230693 kubelet[2999]: E1031 14:11:00.230683 2999 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 14:11:00.235722 kubelet[2999]: I1031 14:11:00.235712 2999 factory.go:223] Registration of the containerd container factory successfully Oct 31 14:11:00.269163 kubelet[2999]: I1031 14:11:00.269144 2999 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 14:11:00.269422 kubelet[2999]: I1031 14:11:00.269265 2999 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 14:11:00.269422 kubelet[2999]: I1031 14:11:00.269281 2999 state_mem.go:36] "Initialized new in-memory state store" Oct 31 14:11:00.269422 kubelet[2999]: I1031 14:11:00.269376 2999 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 14:11:00.269422 kubelet[2999]: I1031 14:11:00.269385 2999 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 14:11:00.269562 kubelet[2999]: I1031 14:11:00.269555 2999 policy_none.go:49] "None policy: Start" Oct 31 14:11:00.269612 kubelet[2999]: I1031 14:11:00.269607 2999 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 14:11:00.269654 kubelet[2999]: I1031 14:11:00.269641 2999 state_mem.go:35] "Initializing new in-memory state store" Oct 31 14:11:00.269852 kubelet[2999]: I1031 14:11:00.269805 2999 state_mem.go:75] "Updated machine memory state" Oct 31 14:11:00.272906 kubelet[2999]: E1031 14:11:00.272889 2999 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 14:11:00.273543 kubelet[2999]: I1031 14:11:00.273535 2999 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 14:11:00.274584 kubelet[2999]: I1031 14:11:00.274216 2999 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 14:11:00.274915 kubelet[2999]: I1031 14:11:00.274849 2999 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 14:11:00.278258 kubelet[2999]: E1031 14:11:00.278219 2999 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 14:11:00.331803 kubelet[2999]: I1031 14:11:00.331720 2999 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 14:11:00.332332 kubelet[2999]: I1031 14:11:00.332179 2999 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 14:11:00.332332 kubelet[2999]: I1031 14:11:00.332258 2999 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 14:11:00.379770 kubelet[2999]: I1031 14:11:00.379729 2999 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 14:11:00.404066 kubelet[2999]: I1031 14:11:00.403997 2999 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 31 14:11:00.404928 kubelet[2999]: I1031 14:11:00.404911 2999 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 14:11:00.420354 kubelet[2999]: I1031 14:11:00.420199 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23ff583ac85c4a53014495eb0aca3652-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"23ff583ac85c4a53014495eb0aca3652\") " pod="kube-system/kube-apiserver-localhost" Oct 31 14:11:00.420354 kubelet[2999]: I1031 14:11:00.420226 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23ff583ac85c4a53014495eb0aca3652-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"23ff583ac85c4a53014495eb0aca3652\") " pod="kube-system/kube-apiserver-localhost" Oct 31 14:11:00.420354 kubelet[2999]: I1031 14:11:00.420247 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 14:11:00.420354 kubelet[2999]: I1031 14:11:00.420264 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 14:11:00.420354 kubelet[2999]: I1031 14:11:00.420279 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 14:11:00.420553 kubelet[2999]: I1031 14:11:00.420291 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 31 14:11:00.420553 kubelet[2999]: I1031 14:11:00.420305 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23ff583ac85c4a53014495eb0aca3652-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"23ff583ac85c4a53014495eb0aca3652\") " pod="kube-system/kube-apiserver-localhost" Oct 31 14:11:00.420553 kubelet[2999]: I1031 14:11:00.420314 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 14:11:00.420553 kubelet[2999]: I1031 14:11:00.420325 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 14:11:01.158501 kubelet[2999]: I1031 14:11:01.158468 2999 apiserver.go:52] "Watching apiserver" Oct 31 14:11:01.219389 kubelet[2999]: I1031 14:11:01.219329 2999 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 14:11:01.304666 kubelet[2999]: I1031 14:11:01.304629 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.304618358 podStartE2EDuration="1.304618358s" podCreationTimestamp="2025-10-31 14:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 14:11:01.3042636 +0000 UTC m=+1.285114998" watchObservedRunningTime="2025-10-31 14:11:01.304618358 +0000 UTC m=+1.285469750" Oct 31 14:11:01.304815 kubelet[2999]: I1031 14:11:01.304693 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.304689735 podStartE2EDuration="1.304689735s" podCreationTimestamp="2025-10-31 14:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 14:11:01.282565606 +0000 UTC m=+1.263417005" watchObservedRunningTime="2025-10-31 14:11:01.304689735 +0000 UTC m=+1.285541164" Oct 31 14:11:01.320774 kubelet[2999]: I1031 14:11:01.320696 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.320686685 podStartE2EDuration="1.320686685s" podCreationTimestamp="2025-10-31 14:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 14:11:01.320509994 +0000 UTC m=+1.301361392" watchObservedRunningTime="2025-10-31 14:11:01.320686685 +0000 UTC m=+1.301538077" Oct 31 14:11:04.818067 kubelet[2999]: I1031 14:11:04.818001 2999 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 14:11:04.819592 containerd[1690]: time="2025-10-31T14:11:04.819504614Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 31 14:11:04.819784 kubelet[2999]: I1031 14:11:04.819640 2999 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 14:11:05.436841 systemd[1]: Created slice kubepods-besteffort-pod73de8e2b_916e_4889_bf66_89b741089bbe.slice - libcontainer container kubepods-besteffort-pod73de8e2b_916e_4889_bf66_89b741089bbe.slice. Oct 31 14:11:05.449014 kubelet[2999]: I1031 14:11:05.448984 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73de8e2b-916e-4889-bf66-89b741089bbe-lib-modules\") pod \"kube-proxy-ft6l2\" (UID: \"73de8e2b-916e-4889-bf66-89b741089bbe\") " pod="kube-system/kube-proxy-ft6l2" Oct 31 14:11:05.449014 kubelet[2999]: I1031 14:11:05.449012 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s8vj\" (UniqueName: \"kubernetes.io/projected/73de8e2b-916e-4889-bf66-89b741089bbe-kube-api-access-7s8vj\") pod \"kube-proxy-ft6l2\" (UID: \"73de8e2b-916e-4889-bf66-89b741089bbe\") " pod="kube-system/kube-proxy-ft6l2" Oct 31 14:11:05.449125 kubelet[2999]: I1031 14:11:05.449028 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/73de8e2b-916e-4889-bf66-89b741089bbe-kube-proxy\") pod \"kube-proxy-ft6l2\" (UID: \"73de8e2b-916e-4889-bf66-89b741089bbe\") " pod="kube-system/kube-proxy-ft6l2" Oct 31 14:11:05.449125 kubelet[2999]: I1031 14:11:05.449039 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73de8e2b-916e-4889-bf66-89b741089bbe-xtables-lock\") pod \"kube-proxy-ft6l2\" (UID: \"73de8e2b-916e-4889-bf66-89b741089bbe\") " pod="kube-system/kube-proxy-ft6l2" Oct 31 14:11:05.556812 kubelet[2999]: E1031 14:11:05.556770 2999 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 31 14:11:05.556812 kubelet[2999]: E1031 14:11:05.556810 2999 projected.go:194] Error preparing data for projected volume kube-api-access-7s8vj for pod kube-system/kube-proxy-ft6l2: configmap "kube-root-ca.crt" not found Oct 31 14:11:05.557865 kubelet[2999]: E1031 14:11:05.556861 2999 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/73de8e2b-916e-4889-bf66-89b741089bbe-kube-api-access-7s8vj podName:73de8e2b-916e-4889-bf66-89b741089bbe nodeName:}" failed. No retries permitted until 2025-10-31 14:11:06.056845708 +0000 UTC m=+6.037697100 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7s8vj" (UniqueName: "kubernetes.io/projected/73de8e2b-916e-4889-bf66-89b741089bbe-kube-api-access-7s8vj") pod "kube-proxy-ft6l2" (UID: "73de8e2b-916e-4889-bf66-89b741089bbe") : configmap "kube-root-ca.crt" not found Oct 31 14:11:06.060147 systemd[1]: Created slice kubepods-besteffort-pod31879255_11ef_445a_9d70_ca3e4276818e.slice - libcontainer container kubepods-besteffort-pod31879255_11ef_445a_9d70_ca3e4276818e.slice. 
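The MountVolume.SetUp failure above is deferred rather than retried immediately: the kubelet records "No retries permitted until ..." with durationBeforeRetry 500ms, and the mount succeeds on a later pass once kube-root-ca.crt exists. A rough sketch of that retry-with-delay pattern; the 500ms initial delay comes from the log, while the doubling factor and cap below are illustrative assumptions, not the kubelet's exact constants:

package main

import (
	"errors"
	"fmt"
	"time"
)

// mountWithBackoff retries a mount-like operation, waiting longer after each
// failure (assumed doubling, capped), in the spirit of the log entry above.
func mountWithBackoff(mount func() error) error {
	delay := 500 * time.Millisecond // initial delay seen in the log
	const maxDelay = 2 * time.Minute
	for attempt := 0; attempt < 10; attempt++ {
		err := mount()
		if err == nil {
			return nil
		}
		fmt.Printf("mount failed (%v); no retries permitted for %v\n", err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return errors.New("mount did not succeed")
}

func main() {
	calls := 0
	_ = mountWithBackoff(func() error {
		calls++
		if calls < 3 {
			return errors.New(`configmap "kube-root-ca.crt" not found`)
		}
		return nil // the ConfigMap eventually appears and the mount succeeds
	})
}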
Oct 31 14:11:06.154061 kubelet[2999]: I1031 14:11:06.154024 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km4gv\" (UniqueName: \"kubernetes.io/projected/31879255-11ef-445a-9d70-ca3e4276818e-kube-api-access-km4gv\") pod \"tigera-operator-7dcd859c48-pvvnr\" (UID: \"31879255-11ef-445a-9d70-ca3e4276818e\") " pod="tigera-operator/tigera-operator-7dcd859c48-pvvnr" Oct 31 14:11:06.154061 kubelet[2999]: I1031 14:11:06.154055 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/31879255-11ef-445a-9d70-ca3e4276818e-var-lib-calico\") pod \"tigera-operator-7dcd859c48-pvvnr\" (UID: \"31879255-11ef-445a-9d70-ca3e4276818e\") " pod="tigera-operator/tigera-operator-7dcd859c48-pvvnr" Oct 31 14:11:06.343770 containerd[1690]: time="2025-10-31T14:11:06.343644048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ft6l2,Uid:73de8e2b-916e-4889-bf66-89b741089bbe,Namespace:kube-system,Attempt:0,}" Oct 31 14:11:06.364571 containerd[1690]: time="2025-10-31T14:11:06.364546406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-pvvnr,Uid:31879255-11ef-445a-9d70-ca3e4276818e,Namespace:tigera-operator,Attempt:0,}" Oct 31 14:11:06.479243 containerd[1690]: time="2025-10-31T14:11:06.479212629Z" level=info msg="connecting to shim 567a5b6b9d4c4453daf3b4a8f18e4d856a563d0a650e115230078148397c426a" address="unix:///run/containerd/s/8faedf2d68ec4be9a6972348475647c05b2060f1167f2492b555c8b8c0a63ca8" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:11:06.501880 systemd[1]: Started cri-containerd-567a5b6b9d4c4453daf3b4a8f18e4d856a563d0a650e115230078148397c426a.scope - libcontainer container 567a5b6b9d4c4453daf3b4a8f18e4d856a563d0a650e115230078148397c426a. Oct 31 14:11:06.532804 containerd[1690]: time="2025-10-31T14:11:06.532759040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ft6l2,Uid:73de8e2b-916e-4889-bf66-89b741089bbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"567a5b6b9d4c4453daf3b4a8f18e4d856a563d0a650e115230078148397c426a\"" Oct 31 14:11:06.546884 containerd[1690]: time="2025-10-31T14:11:06.546855868Z" level=info msg="CreateContainer within sandbox \"567a5b6b9d4c4453daf3b4a8f18e4d856a563d0a650e115230078148397c426a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 14:11:06.561786 containerd[1690]: time="2025-10-31T14:11:06.561735953Z" level=info msg="connecting to shim 701b950110f2358d8512b97f13d00c530bd0ee871df459b2675ec08925619fe6" address="unix:///run/containerd/s/84bbf16e77af242fb463dc086feeddd49cb16fb997be7c168d4a86a38e6452f7" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:11:06.578876 systemd[1]: Started cri-containerd-701b950110f2358d8512b97f13d00c530bd0ee871df459b2675ec08925619fe6.scope - libcontainer container 701b950110f2358d8512b97f13d00c530bd0ee871df459b2675ec08925619fe6. 
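The "RunPodSandbox for &PodSandboxMetadata{...}" lines are containerd printing the sandbox metadata from the kubelet's CRI request. A minimal sketch that builds the same request with the CRI v1 Go types, using the kube-proxy values from the log; it only constructs the message and does not dial the containerd socket:

package main

import (
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1" // CRI v1 API between kubelet and containerd
)

func main() {
	req := &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-ft6l2",
				Uid:       "73de8e2b-916e-4889-bf66-89b741089bbe",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	}
	fmt.Printf("RunPodSandbox for %v\n", req.Config.Metadata)
}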
Oct 31 14:11:06.617025 containerd[1690]: time="2025-10-31T14:11:06.616953738Z" level=info msg="Container 0e68432de4b80351b4b7688fee4279815a1fe4fa9de79f78309f07cd4234032c: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:11:06.623837 containerd[1690]: time="2025-10-31T14:11:06.623771555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-pvvnr,Uid:31879255-11ef-445a-9d70-ca3e4276818e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"701b950110f2358d8512b97f13d00c530bd0ee871df459b2675ec08925619fe6\"" Oct 31 14:11:06.625265 containerd[1690]: time="2025-10-31T14:11:06.625227796Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 31 14:11:06.721960 containerd[1690]: time="2025-10-31T14:11:06.721933889Z" level=info msg="CreateContainer within sandbox \"567a5b6b9d4c4453daf3b4a8f18e4d856a563d0a650e115230078148397c426a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0e68432de4b80351b4b7688fee4279815a1fe4fa9de79f78309f07cd4234032c\"" Oct 31 14:11:06.722476 containerd[1690]: time="2025-10-31T14:11:06.722457162Z" level=info msg="StartContainer for \"0e68432de4b80351b4b7688fee4279815a1fe4fa9de79f78309f07cd4234032c\"" Oct 31 14:11:06.723413 containerd[1690]: time="2025-10-31T14:11:06.723396505Z" level=info msg="connecting to shim 0e68432de4b80351b4b7688fee4279815a1fe4fa9de79f78309f07cd4234032c" address="unix:///run/containerd/s/8faedf2d68ec4be9a6972348475647c05b2060f1167f2492b555c8b8c0a63ca8" protocol=ttrpc version=3 Oct 31 14:11:06.738882 systemd[1]: Started cri-containerd-0e68432de4b80351b4b7688fee4279815a1fe4fa9de79f78309f07cd4234032c.scope - libcontainer container 0e68432de4b80351b4b7688fee4279815a1fe4fa9de79f78309f07cd4234032c. Oct 31 14:11:06.779681 containerd[1690]: time="2025-10-31T14:11:06.779659071Z" level=info msg="StartContainer for \"0e68432de4b80351b4b7688fee4279815a1fe4fa9de79f78309f07cd4234032c\" returns successfully" Oct 31 14:11:07.292745 kubelet[2999]: I1031 14:11:07.292702 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ft6l2" podStartSLOduration=2.292691967 podStartE2EDuration="2.292691967s" podCreationTimestamp="2025-10-31 14:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 14:11:07.292616158 +0000 UTC m=+7.273467556" watchObservedRunningTime="2025-10-31 14:11:07.292691967 +0000 UTC m=+7.273543366" Oct 31 14:11:08.593947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount384291799.mount: Deactivated successfully. 
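Following the sandbox, the same exchange continues with "CreateContainer within sandbox ... for &ContainerMetadata{Name:kube-proxy,Attempt:0,}" and then StartContainer against the returned container id. A sketch of those two CRI calls as request objects, again only constructed locally with values taken from the log:

package main

import (
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Sandbox id returned by RunPodSandbox earlier in the log.
	const sandboxID = "567a5b6b9d4c4453daf3b4a8f18e4d856a563d0a650e115230078148397c426a"

	create := &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			// Image and mounts omitted; the kubelet fills them in from the pod spec.
		},
	}
	fmt.Printf("CreateContainer within sandbox %q for %v\n", create.PodSandboxId, create.Config.Metadata)

	// StartContainer takes the container id CreateContainer returned
	// (0e68432de4b8... in the log above).
	start := &runtimeapi.StartContainerRequest{
		ContainerId: "0e68432de4b80351b4b7688fee4279815a1fe4fa9de79f78309f07cd4234032c",
	}
	fmt.Printf("StartContainer for %q\n", start.ContainerId)
}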
Oct 31 14:11:09.210109 containerd[1690]: time="2025-10-31T14:11:09.210075167Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:11:09.210728 containerd[1690]: time="2025-10-31T14:11:09.210705404Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 31 14:11:09.210990 containerd[1690]: time="2025-10-31T14:11:09.210974807Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:11:09.212294 containerd[1690]: time="2025-10-31T14:11:09.212276627Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:11:09.212671 containerd[1690]: time="2025-10-31T14:11:09.212654356Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.587409905s" Oct 31 14:11:09.212701 containerd[1690]: time="2025-10-31T14:11:09.212671506Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 31 14:11:09.215304 containerd[1690]: time="2025-10-31T14:11:09.215169956Z" level=info msg="CreateContainer within sandbox \"701b950110f2358d8512b97f13d00c530bd0ee871df459b2675ec08925619fe6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 14:11:09.224262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076762115.mount: Deactivated successfully. Oct 31 14:11:09.226193 containerd[1690]: time="2025-10-31T14:11:09.226170624Z" level=info msg="Container 2455edae058ea6867accbfe2f970d0c38cf4ccf933403f6bd2c2f1c4b610d4da: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:11:09.229596 containerd[1690]: time="2025-10-31T14:11:09.229567907Z" level=info msg="CreateContainer within sandbox \"701b950110f2358d8512b97f13d00c530bd0ee871df459b2675ec08925619fe6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2455edae058ea6867accbfe2f970d0c38cf4ccf933403f6bd2c2f1c4b610d4da\"" Oct 31 14:11:09.229858 containerd[1690]: time="2025-10-31T14:11:09.229845831Z" level=info msg="StartContainer for \"2455edae058ea6867accbfe2f970d0c38cf4ccf933403f6bd2c2f1c4b610d4da\"" Oct 31 14:11:09.230637 containerd[1690]: time="2025-10-31T14:11:09.230591491Z" level=info msg="connecting to shim 2455edae058ea6867accbfe2f970d0c38cf4ccf933403f6bd2c2f1c4b610d4da" address="unix:///run/containerd/s/84bbf16e77af242fb463dc086feeddd49cb16fb997be7c168d4a86a38e6452f7" protocol=ttrpc version=3 Oct 31 14:11:09.248928 systemd[1]: Started cri-containerd-2455edae058ea6867accbfe2f970d0c38cf4ccf933403f6bd2c2f1c4b610d4da.scope - libcontainer container 2455edae058ea6867accbfe2f970d0c38cf4ccf933403f6bd2c2f1c4b610d4da. 
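The "image id" and "repo digest" values above are content digests: as I read it, the image id is the SHA-256 of the image's config blob and the repo digest is the SHA-256 of its manifest. The general form is just a hex-encoded SHA-256 with a "sha256:" prefix; a tiny illustration over placeholder bytes:

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	blob := []byte("placeholder manifest bytes") // not the real tigera-operator manifest
	fmt.Printf("sha256:%x\n", sha256.Sum256(blob))
}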
Oct 31 14:11:09.273773 containerd[1690]: time="2025-10-31T14:11:09.273748257Z" level=info msg="StartContainer for \"2455edae058ea6867accbfe2f970d0c38cf4ccf933403f6bd2c2f1c4b610d4da\" returns successfully" Oct 31 14:11:45.396625 sudo[2010]: pam_unix(sudo:session): session closed for user root Oct 31 14:11:45.402148 sshd-session[2006]: pam_unix(sshd:session): session closed for user core Oct 31 14:11:45.405065 systemd[1]: sshd@6-139.178.70.103:22-139.178.68.195:55698.service: Deactivated successfully. Oct 31 14:11:45.412883 sshd[2009]: Connection closed by 139.178.68.195 port 55698 Oct 31 14:11:45.407732 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 14:11:45.408031 systemd[1]: session-9.scope: Consumed 4.433s CPU time, 154M memory peak. Oct 31 14:11:45.411416 systemd-logind[1656]: Session 9 logged out. Waiting for processes to exit. Oct 31 14:11:45.412030 systemd-logind[1656]: Removed session 9. Oct 31 14:11:49.125362 kubelet[2999]: I1031 14:11:49.124740 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-pvvnr" podStartSLOduration=41.53627125 podStartE2EDuration="44.124728192s" podCreationTimestamp="2025-10-31 14:11:05 +0000 UTC" firstStartedPulling="2025-10-31 14:11:06.624748506 +0000 UTC m=+6.605599892" lastFinishedPulling="2025-10-31 14:11:09.213205445 +0000 UTC m=+9.194056834" observedRunningTime="2025-10-31 14:11:10.296642033 +0000 UTC m=+10.277493433" watchObservedRunningTime="2025-10-31 14:11:49.124728192 +0000 UTC m=+49.105579590" Oct 31 14:11:49.132292 systemd[1]: Created slice kubepods-besteffort-poda86f45b1_993e_409d_9fc9_3feca66c75fb.slice - libcontainer container kubepods-besteffort-poda86f45b1_993e_409d_9fc9_3feca66c75fb.slice. Oct 31 14:11:49.220567 kubelet[2999]: I1031 14:11:49.220540 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2fvl\" (UniqueName: \"kubernetes.io/projected/a86f45b1-993e-409d-9fc9-3feca66c75fb-kube-api-access-j2fvl\") pod \"calico-typha-69784d5fb6-xr67j\" (UID: \"a86f45b1-993e-409d-9fc9-3feca66c75fb\") " pod="calico-system/calico-typha-69784d5fb6-xr67j" Oct 31 14:11:49.220660 kubelet[2999]: I1031 14:11:49.220582 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a86f45b1-993e-409d-9fc9-3feca66c75fb-tigera-ca-bundle\") pod \"calico-typha-69784d5fb6-xr67j\" (UID: \"a86f45b1-993e-409d-9fc9-3feca66c75fb\") " pod="calico-system/calico-typha-69784d5fb6-xr67j" Oct 31 14:11:49.220660 kubelet[2999]: I1031 14:11:49.220595 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a86f45b1-993e-409d-9fc9-3feca66c75fb-typha-certs\") pod \"calico-typha-69784d5fb6-xr67j\" (UID: \"a86f45b1-993e-409d-9fc9-3feca66c75fb\") " pod="calico-system/calico-typha-69784d5fb6-xr67j" Oct 31 14:11:49.366438 systemd[1]: Created slice kubepods-besteffort-pod3567d2d4_f06f_4902_8a2a_a6a1552b935a.slice - libcontainer container kubepods-besteffort-pod3567d2d4_f06f_4902_8a2a_a6a1552b935a.slice. 
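The tigera-operator entry above also shows how podStartSLOduration differs from podStartE2EDuration when an image had to be pulled: the SLO figure excludes the pull window (lastFinishedPulling minus firstStartedPulling). A quick check of the numbers, in the same spirit as the earlier sketch:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-10-31 14:11:05 +0000 UTC")            // podCreationTimestamp
	observed := parse("2025-10-31 14:11:49.124728192 +0000 UTC") // watchObservedRunningTime
	pullStart := parse("2025-10-31 14:11:06.624748506 +0000 UTC") // firstStartedPulling
	pullEnd := parse("2025-10-31 14:11:09.213205445 +0000 UTC")   // lastFinishedPulling

	e2e := observed.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(e2e, slo) // 44.124728192s 41.536271253s, consistent with the entry above
}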
Oct 31 14:11:49.421863 kubelet[2999]: I1031 14:11:49.421635 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3567d2d4-f06f-4902-8a2a-a6a1552b935a-var-run-calico\") pod \"calico-node-dc9qf\" (UID: \"3567d2d4-f06f-4902-8a2a-a6a1552b935a\") " pod="calico-system/calico-node-dc9qf" Oct 31 14:11:49.421863 kubelet[2999]: I1031 14:11:49.421671 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3567d2d4-f06f-4902-8a2a-a6a1552b935a-cni-bin-dir\") pod \"calico-node-dc9qf\" (UID: \"3567d2d4-f06f-4902-8a2a-a6a1552b935a\") " pod="calico-system/calico-node-dc9qf" Oct 31 14:11:49.421863 kubelet[2999]: I1031 14:11:49.421689 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3567d2d4-f06f-4902-8a2a-a6a1552b935a-cni-net-dir\") pod \"calico-node-dc9qf\" (UID: \"3567d2d4-f06f-4902-8a2a-a6a1552b935a\") " pod="calico-system/calico-node-dc9qf" Oct 31 14:11:49.421863 kubelet[2999]: I1031 14:11:49.421719 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gqb4\" (UniqueName: \"kubernetes.io/projected/3567d2d4-f06f-4902-8a2a-a6a1552b935a-kube-api-access-7gqb4\") pod \"calico-node-dc9qf\" (UID: \"3567d2d4-f06f-4902-8a2a-a6a1552b935a\") " pod="calico-system/calico-node-dc9qf" Oct 31 14:11:49.421863 kubelet[2999]: I1031 14:11:49.421748 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3567d2d4-f06f-4902-8a2a-a6a1552b935a-flexvol-driver-host\") pod \"calico-node-dc9qf\" (UID: \"3567d2d4-f06f-4902-8a2a-a6a1552b935a\") " pod="calico-system/calico-node-dc9qf" Oct 31 14:11:49.422212 kubelet[2999]: I1031 14:11:49.421762 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3567d2d4-f06f-4902-8a2a-a6a1552b935a-policysync\") pod \"calico-node-dc9qf\" (UID: \"3567d2d4-f06f-4902-8a2a-a6a1552b935a\") " pod="calico-system/calico-node-dc9qf" Oct 31 14:11:49.422212 kubelet[2999]: I1031 14:11:49.421781 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3567d2d4-f06f-4902-8a2a-a6a1552b935a-cni-log-dir\") pod \"calico-node-dc9qf\" (UID: \"3567d2d4-f06f-4902-8a2a-a6a1552b935a\") " pod="calico-system/calico-node-dc9qf" Oct 31 14:11:49.422212 kubelet[2999]: I1031 14:11:49.421829 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3567d2d4-f06f-4902-8a2a-a6a1552b935a-lib-modules\") pod \"calico-node-dc9qf\" (UID: \"3567d2d4-f06f-4902-8a2a-a6a1552b935a\") " pod="calico-system/calico-node-dc9qf" Oct 31 14:11:49.422212 kubelet[2999]: I1031 14:11:49.421862 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3567d2d4-f06f-4902-8a2a-a6a1552b935a-xtables-lock\") pod \"calico-node-dc9qf\" (UID: \"3567d2d4-f06f-4902-8a2a-a6a1552b935a\") " pod="calico-system/calico-node-dc9qf" Oct 31 14:11:49.422212 kubelet[2999]: I1031 14:11:49.421881 2999 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3567d2d4-f06f-4902-8a2a-a6a1552b935a-var-lib-calico\") pod \"calico-node-dc9qf\" (UID: \"3567d2d4-f06f-4902-8a2a-a6a1552b935a\") " pod="calico-system/calico-node-dc9qf" Oct 31 14:11:49.422337 kubelet[2999]: I1031 14:11:49.421896 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3567d2d4-f06f-4902-8a2a-a6a1552b935a-tigera-ca-bundle\") pod \"calico-node-dc9qf\" (UID: \"3567d2d4-f06f-4902-8a2a-a6a1552b935a\") " pod="calico-system/calico-node-dc9qf" Oct 31 14:11:49.422337 kubelet[2999]: I1031 14:11:49.421915 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3567d2d4-f06f-4902-8a2a-a6a1552b935a-node-certs\") pod \"calico-node-dc9qf\" (UID: \"3567d2d4-f06f-4902-8a2a-a6a1552b935a\") " pod="calico-system/calico-node-dc9qf" Oct 31 14:11:49.436500 containerd[1690]: time="2025-10-31T14:11:49.436465149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69784d5fb6-xr67j,Uid:a86f45b1-993e-409d-9fc9-3feca66c75fb,Namespace:calico-system,Attempt:0,}" Oct 31 14:11:49.540881 kubelet[2999]: E1031 14:11:49.540854 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.540881 kubelet[2999]: W1031 14:11:49.540875 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.540990 kubelet[2999]: E1031 14:11:49.540893 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.548833 kubelet[2999]: E1031 14:11:49.548813 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.548833 kubelet[2999]: W1031 14:11:49.548828 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.548833 kubelet[2999]: E1031 14:11:49.548844 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:49.560106 containerd[1690]: time="2025-10-31T14:11:49.559482061Z" level=info msg="connecting to shim ef4ba31a4ea9d7ae7ea38f561742cfac0a88b31dfaf4610f37beb69c4ad3a2cb" address="unix:///run/containerd/s/2111b7d7165b5472dc7ad90280f132747b3478ee70bfd0fd89c96b2a832bda83" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:11:49.583811 kubelet[2999]: E1031 14:11:49.583635 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:11:49.620254 kubelet[2999]: E1031 14:11:49.620233 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.620254 kubelet[2999]: W1031 14:11:49.620249 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.620254 kubelet[2999]: E1031 14:11:49.620263 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.624011 kubelet[2999]: E1031 14:11:49.621323 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.624011 kubelet[2999]: W1031 14:11:49.621329 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.624011 kubelet[2999]: E1031 14:11:49.621336 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.624011 kubelet[2999]: E1031 14:11:49.621994 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.624011 kubelet[2999]: W1031 14:11:49.622000 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.624011 kubelet[2999]: E1031 14:11:49.622006 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.624011 kubelet[2999]: E1031 14:11:49.623246 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.624011 kubelet[2999]: W1031 14:11:49.623252 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.624011 kubelet[2999]: E1031 14:11:49.623262 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:49.624011 kubelet[2999]: E1031 14:11:49.623408 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.624208 kubelet[2999]: W1031 14:11:49.623412 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.624208 kubelet[2999]: E1031 14:11:49.623417 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.624208 kubelet[2999]: E1031 14:11:49.623716 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.624208 kubelet[2999]: W1031 14:11:49.623722 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.624208 kubelet[2999]: E1031 14:11:49.623730 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.624367 kubelet[2999]: E1031 14:11:49.624356 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.624367 kubelet[2999]: W1031 14:11:49.624364 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.624449 kubelet[2999]: E1031 14:11:49.624371 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.625256 kubelet[2999]: E1031 14:11:49.625244 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.625256 kubelet[2999]: W1031 14:11:49.625251 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.625256 kubelet[2999]: E1031 14:11:49.625258 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.625533 kubelet[2999]: E1031 14:11:49.625514 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.625533 kubelet[2999]: W1031 14:11:49.625523 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.625533 kubelet[2999]: E1031 14:11:49.625532 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:49.626024 systemd[1]: Started cri-containerd-ef4ba31a4ea9d7ae7ea38f561742cfac0a88b31dfaf4610f37beb69c4ad3a2cb.scope - libcontainer container ef4ba31a4ea9d7ae7ea38f561742cfac0a88b31dfaf4610f37beb69c4ad3a2cb. Oct 31 14:11:49.626901 kubelet[2999]: E1031 14:11:49.626881 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.626901 kubelet[2999]: W1031 14:11:49.626893 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.626956 kubelet[2999]: E1031 14:11:49.626905 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.627272 kubelet[2999]: E1031 14:11:49.627262 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.627272 kubelet[2999]: W1031 14:11:49.627271 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.627328 kubelet[2999]: E1031 14:11:49.627280 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.628196 kubelet[2999]: E1031 14:11:49.628182 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.628196 kubelet[2999]: W1031 14:11:49.628191 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.628337 kubelet[2999]: E1031 14:11:49.628240 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.628450 kubelet[2999]: E1031 14:11:49.628417 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.628450 kubelet[2999]: W1031 14:11:49.628426 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.628450 kubelet[2999]: E1031 14:11:49.628434 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:49.628640 kubelet[2999]: E1031 14:11:49.628633 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.628675 kubelet[2999]: W1031 14:11:49.628670 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.628772 kubelet[2999]: E1031 14:11:49.628695 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.628897 kubelet[2999]: E1031 14:11:49.628841 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.628897 kubelet[2999]: W1031 14:11:49.628847 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.628897 kubelet[2999]: E1031 14:11:49.628853 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.629805 kubelet[2999]: E1031 14:11:49.628985 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.629805 kubelet[2999]: W1031 14:11:49.628991 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.629805 kubelet[2999]: E1031 14:11:49.628996 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.630029 kubelet[2999]: E1031 14:11:49.629975 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.630029 kubelet[2999]: W1031 14:11:49.629983 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.630029 kubelet[2999]: E1031 14:11:49.629990 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.630202 kubelet[2999]: E1031 14:11:49.630159 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.630202 kubelet[2999]: W1031 14:11:49.630166 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.630202 kubelet[2999]: E1031 14:11:49.630173 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:49.630354 kubelet[2999]: E1031 14:11:49.630332 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.630354 kubelet[2999]: W1031 14:11:49.630338 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.630497 kubelet[2999]: E1031 14:11:49.630344 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.630701 kubelet[2999]: E1031 14:11:49.630588 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.630701 kubelet[2999]: W1031 14:11:49.630596 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.630701 kubelet[2999]: E1031 14:11:49.630601 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.630823 kubelet[2999]: E1031 14:11:49.630817 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.630880 kubelet[2999]: W1031 14:11:49.630859 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.630880 kubelet[2999]: E1031 14:11:49.630867 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.631020 kubelet[2999]: I1031 14:11:49.630939 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1f06339a-fad3-4388-83e5-d004196ee955-varrun\") pod \"csi-node-driver-wxdrt\" (UID: \"1f06339a-fad3-4388-83e5-d004196ee955\") " pod="calico-system/csi-node-driver-wxdrt" Oct 31 14:11:49.631180 kubelet[2999]: E1031 14:11:49.631160 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.631180 kubelet[2999]: W1031 14:11:49.631169 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.631180 kubelet[2999]: E1031 14:11:49.631179 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:49.631393 kubelet[2999]: E1031 14:11:49.631385 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.631393 kubelet[2999]: W1031 14:11:49.631393 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.631476 kubelet[2999]: E1031 14:11:49.631400 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.632027 kubelet[2999]: E1031 14:11:49.632018 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.632027 kubelet[2999]: W1031 14:11:49.632026 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.632084 kubelet[2999]: E1031 14:11:49.632035 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.632856 kubelet[2999]: E1031 14:11:49.632844 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.632856 kubelet[2999]: W1031 14:11:49.632854 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.632939 kubelet[2999]: E1031 14:11:49.632863 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.632939 kubelet[2999]: I1031 14:11:49.632921 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1f06339a-fad3-4388-83e5-d004196ee955-registration-dir\") pod \"csi-node-driver-wxdrt\" (UID: \"1f06339a-fad3-4388-83e5-d004196ee955\") " pod="calico-system/csi-node-driver-wxdrt" Oct 31 14:11:49.632976 kubelet[2999]: E1031 14:11:49.632967 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.632976 kubelet[2999]: W1031 14:11:49.632973 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.633035 kubelet[2999]: E1031 14:11:49.632979 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:49.633137 kubelet[2999]: E1031 14:11:49.633128 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.633137 kubelet[2999]: W1031 14:11:49.633136 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.633178 kubelet[2999]: E1031 14:11:49.633143 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.633196 kubelet[2999]: I1031 14:11:49.633176 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1f06339a-fad3-4388-83e5-d004196ee955-socket-dir\") pod \"csi-node-driver-wxdrt\" (UID: \"1f06339a-fad3-4388-83e5-d004196ee955\") " pod="calico-system/csi-node-driver-wxdrt" Oct 31 14:11:49.633939 kubelet[2999]: E1031 14:11:49.633924 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.633939 kubelet[2999]: W1031 14:11:49.633934 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.633996 kubelet[2999]: E1031 14:11:49.633943 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.633996 kubelet[2999]: I1031 14:11:49.633964 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f06339a-fad3-4388-83e5-d004196ee955-kubelet-dir\") pod \"csi-node-driver-wxdrt\" (UID: \"1f06339a-fad3-4388-83e5-d004196ee955\") " pod="calico-system/csi-node-driver-wxdrt" Oct 31 14:11:49.634152 kubelet[2999]: E1031 14:11:49.634131 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.634152 kubelet[2999]: W1031 14:11:49.634148 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.634225 kubelet[2999]: E1031 14:11:49.634157 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:49.634225 kubelet[2999]: I1031 14:11:49.634172 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqqw4\" (UniqueName: \"kubernetes.io/projected/1f06339a-fad3-4388-83e5-d004196ee955-kube-api-access-gqqw4\") pod \"csi-node-driver-wxdrt\" (UID: \"1f06339a-fad3-4388-83e5-d004196ee955\") " pod="calico-system/csi-node-driver-wxdrt" Oct 31 14:11:49.634496 kubelet[2999]: E1031 14:11:49.634467 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.634496 kubelet[2999]: W1031 14:11:49.634474 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.634496 kubelet[2999]: E1031 14:11:49.634482 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.641841 kubelet[2999]: E1031 14:11:49.634849 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.641841 kubelet[2999]: W1031 14:11:49.634856 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.641841 kubelet[2999]: E1031 14:11:49.634864 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.641841 kubelet[2999]: E1031 14:11:49.634994 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.641841 kubelet[2999]: W1031 14:11:49.635001 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.641841 kubelet[2999]: E1031 14:11:49.635008 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.641841 kubelet[2999]: E1031 14:11:49.635315 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.641841 kubelet[2999]: W1031 14:11:49.635321 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.641841 kubelet[2999]: E1031 14:11:49.635341 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:49.641841 kubelet[2999]: E1031 14:11:49.635919 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.642016 kubelet[2999]: W1031 14:11:49.635926 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.642016 kubelet[2999]: E1031 14:11:49.635933 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.642016 kubelet[2999]: E1031 14:11:49.636828 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.642016 kubelet[2999]: W1031 14:11:49.636835 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.642016 kubelet[2999]: E1031 14:11:49.636842 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.671063 containerd[1690]: time="2025-10-31T14:11:49.671026258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dc9qf,Uid:3567d2d4-f06f-4902-8a2a-a6a1552b935a,Namespace:calico-system,Attempt:0,}" Oct 31 14:11:49.694721 containerd[1690]: time="2025-10-31T14:11:49.694203482Z" level=info msg="connecting to shim cc93fce26eaf4b92c0b4894f4c9d540e5b83117c49097a77c73b962fdec27071" address="unix:///run/containerd/s/91dbfcb6c8e9870b5df861b61b7009f09e4d06c8d86429d20c34f303025e52b0" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:11:49.714456 containerd[1690]: time="2025-10-31T14:11:49.714406194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69784d5fb6-xr67j,Uid:a86f45b1-993e-409d-9fc9-3feca66c75fb,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef4ba31a4ea9d7ae7ea38f561742cfac0a88b31dfaf4610f37beb69c4ad3a2cb\"" Oct 31 14:11:49.716393 containerd[1690]: time="2025-10-31T14:11:49.716371403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 31 14:11:49.720948 systemd[1]: Started cri-containerd-cc93fce26eaf4b92c0b4894f4c9d540e5b83117c49097a77c73b962fdec27071.scope - libcontainer container cc93fce26eaf4b92c0b4894f4c9d540e5b83117c49097a77c73b962fdec27071. Oct 31 14:11:49.735226 kubelet[2999]: E1031 14:11:49.735206 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.735226 kubelet[2999]: W1031 14:11:49.735220 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.735620 kubelet[2999]: E1031 14:11:49.735235 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:49.735620 kubelet[2999]: E1031 14:11:49.735370 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.735620 kubelet[2999]: W1031 14:11:49.735375 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.735620 kubelet[2999]: E1031 14:11:49.735380 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.735620 kubelet[2999]: E1031 14:11:49.735479 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.735620 kubelet[2999]: W1031 14:11:49.735486 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.735620 kubelet[2999]: E1031 14:11:49.735493 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.736099 kubelet[2999]: E1031 14:11:49.736002 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.736099 kubelet[2999]: W1031 14:11:49.736010 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.736099 kubelet[2999]: E1031 14:11:49.736018 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.736419 kubelet[2999]: E1031 14:11:49.736170 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.736419 kubelet[2999]: W1031 14:11:49.736180 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.736419 kubelet[2999]: E1031 14:11:49.736188 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.736419 kubelet[2999]: E1031 14:11:49.736308 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.736419 kubelet[2999]: W1031 14:11:49.736315 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.736419 kubelet[2999]: E1031 14:11:49.736321 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:49.736796 kubelet[2999]: E1031 14:11:49.736613 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.736796 kubelet[2999]: W1031 14:11:49.736708 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.736796 kubelet[2999]: E1031 14:11:49.736719 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.737082 kubelet[2999]: E1031 14:11:49.737015 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.737082 kubelet[2999]: W1031 14:11:49.737024 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.737082 kubelet[2999]: E1031 14:11:49.737031 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.737478 kubelet[2999]: E1031 14:11:49.737395 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.737478 kubelet[2999]: W1031 14:11:49.737405 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.737478 kubelet[2999]: E1031 14:11:49.737413 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.737617 kubelet[2999]: E1031 14:11:49.737598 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.737617 kubelet[2999]: W1031 14:11:49.737604 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.737617 kubelet[2999]: E1031 14:11:49.737610 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.738222 kubelet[2999]: E1031 14:11:49.738054 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.738222 kubelet[2999]: W1031 14:11:49.738063 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.738222 kubelet[2999]: E1031 14:11:49.738071 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:49.738432 kubelet[2999]: E1031 14:11:49.738424 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.738620 kubelet[2999]: W1031 14:11:49.738541 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.738620 kubelet[2999]: E1031 14:11:49.738562 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.739244 kubelet[2999]: E1031 14:11:49.739132 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.739244 kubelet[2999]: W1031 14:11:49.739143 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.739244 kubelet[2999]: E1031 14:11:49.739152 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.739498 kubelet[2999]: E1031 14:11:49.739489 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.739607 kubelet[2999]: W1031 14:11:49.739542 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.739607 kubelet[2999]: E1031 14:11:49.739554 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.741041 kubelet[2999]: E1031 14:11:49.740953 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.741041 kubelet[2999]: W1031 14:11:49.740960 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.741041 kubelet[2999]: E1031 14:11:49.740969 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.741265 kubelet[2999]: E1031 14:11:49.741188 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.741265 kubelet[2999]: W1031 14:11:49.741198 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.741265 kubelet[2999]: E1031 14:11:49.741206 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:49.741552 kubelet[2999]: E1031 14:11:49.741502 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.741552 kubelet[2999]: W1031 14:11:49.741510 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.741552 kubelet[2999]: E1031 14:11:49.741518 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.741923 kubelet[2999]: E1031 14:11:49.741869 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.742009 kubelet[2999]: W1031 14:11:49.741963 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.742009 kubelet[2999]: E1031 14:11:49.741975 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.742442 kubelet[2999]: E1031 14:11:49.742360 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.742442 kubelet[2999]: W1031 14:11:49.742368 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.742442 kubelet[2999]: E1031 14:11:49.742374 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.742708 kubelet[2999]: E1031 14:11:49.742643 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.742708 kubelet[2999]: W1031 14:11:49.742649 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.742708 kubelet[2999]: E1031 14:11:49.742666 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.743778 kubelet[2999]: E1031 14:11:49.743522 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.743778 kubelet[2999]: W1031 14:11:49.743530 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.743778 kubelet[2999]: E1031 14:11:49.743538 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:49.743778 kubelet[2999]: E1031 14:11:49.743677 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.743778 kubelet[2999]: W1031 14:11:49.743683 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.743778 kubelet[2999]: E1031 14:11:49.743698 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.744089 kubelet[2999]: E1031 14:11:49.743992 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.744089 kubelet[2999]: W1031 14:11:49.743999 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.744089 kubelet[2999]: E1031 14:11:49.744006 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.744319 kubelet[2999]: E1031 14:11:49.744207 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.744319 kubelet[2999]: W1031 14:11:49.744214 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.744319 kubelet[2999]: E1031 14:11:49.744221 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.744408 kubelet[2999]: E1031 14:11:49.744398 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.744408 kubelet[2999]: W1031 14:11:49.744406 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.744466 kubelet[2999]: E1031 14:11:49.744412 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:49.752178 kubelet[2999]: E1031 14:11:49.752160 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:49.752344 kubelet[2999]: W1031 14:11:49.752280 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:49.752344 kubelet[2999]: E1031 14:11:49.752299 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:49.754545 containerd[1690]: time="2025-10-31T14:11:49.754523536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dc9qf,Uid:3567d2d4-f06f-4902-8a2a-a6a1552b935a,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc93fce26eaf4b92c0b4894f4c9d540e5b83117c49097a77c73b962fdec27071\"" Oct 31 14:11:51.232437 kubelet[2999]: E1031 14:11:51.232375 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:11:51.251988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount862355379.mount: Deactivated successfully. Oct 31 14:11:52.481158 containerd[1690]: time="2025-10-31T14:11:52.481124046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:11:52.481930 containerd[1690]: time="2025-10-31T14:11:52.481525494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 31 14:11:52.482834 containerd[1690]: time="2025-10-31T14:11:52.482817296Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:11:52.484446 containerd[1690]: time="2025-10-31T14:11:52.484115295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:11:52.484446 containerd[1690]: time="2025-10-31T14:11:52.484359815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.767845516s" Oct 31 14:11:52.484446 containerd[1690]: time="2025-10-31T14:11:52.484380567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 31 14:11:52.485400 containerd[1690]: time="2025-10-31T14:11:52.485385563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 31 14:11:52.495717 containerd[1690]: time="2025-10-31T14:11:52.495697161Z" level=info msg="CreateContainer within sandbox \"ef4ba31a4ea9d7ae7ea38f561742cfac0a88b31dfaf4610f37beb69c4ad3a2cb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 31 14:11:52.505929 containerd[1690]: time="2025-10-31T14:11:52.505902977Z" level=info msg="Container e686a6db03216985587d30efe03c6f2755f306de119cd05dee6462d9ddb9666e: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:11:52.509168 containerd[1690]: time="2025-10-31T14:11:52.509139336Z" level=info msg="CreateContainer within sandbox \"ef4ba31a4ea9d7ae7ea38f561742cfac0a88b31dfaf4610f37beb69c4ad3a2cb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e686a6db03216985587d30efe03c6f2755f306de119cd05dee6462d9ddb9666e\"" Oct 31 14:11:52.509899 containerd[1690]: 
time="2025-10-31T14:11:52.509850239Z" level=info msg="StartContainer for \"e686a6db03216985587d30efe03c6f2755f306de119cd05dee6462d9ddb9666e\"" Oct 31 14:11:52.511086 containerd[1690]: time="2025-10-31T14:11:52.511026103Z" level=info msg="connecting to shim e686a6db03216985587d30efe03c6f2755f306de119cd05dee6462d9ddb9666e" address="unix:///run/containerd/s/2111b7d7165b5472dc7ad90280f132747b3478ee70bfd0fd89c96b2a832bda83" protocol=ttrpc version=3 Oct 31 14:11:52.536066 systemd[1]: Started cri-containerd-e686a6db03216985587d30efe03c6f2755f306de119cd05dee6462d9ddb9666e.scope - libcontainer container e686a6db03216985587d30efe03c6f2755f306de119cd05dee6462d9ddb9666e. Oct 31 14:11:52.589977 containerd[1690]: time="2025-10-31T14:11:52.589899210Z" level=info msg="StartContainer for \"e686a6db03216985587d30efe03c6f2755f306de119cd05dee6462d9ddb9666e\" returns successfully" Oct 31 14:11:53.246151 kubelet[2999]: E1031 14:11:53.245889 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:11:53.365324 kubelet[2999]: E1031 14:11:53.365163 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.365324 kubelet[2999]: W1031 14:11:53.365201 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.366118 kubelet[2999]: E1031 14:11:53.365952 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.366343 kubelet[2999]: E1031 14:11:53.366332 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.366478 kubelet[2999]: W1031 14:11:53.366391 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.366478 kubelet[2999]: E1031 14:11:53.366407 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.366980 kubelet[2999]: E1031 14:11:53.366693 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.366980 kubelet[2999]: W1031 14:11:53.366703 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.366980 kubelet[2999]: E1031 14:11:53.366712 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:53.367324 kubelet[2999]: E1031 14:11:53.367306 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.367459 kubelet[2999]: W1031 14:11:53.367451 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.367727 kubelet[2999]: E1031 14:11:53.367506 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.367944 kubelet[2999]: E1031 14:11:53.367929 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.368083 kubelet[2999]: W1031 14:11:53.368002 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.368083 kubelet[2999]: E1031 14:11:53.368034 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.368248 kubelet[2999]: E1031 14:11:53.368241 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.368363 kubelet[2999]: W1031 14:11:53.368265 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.368363 kubelet[2999]: E1031 14:11:53.368274 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.368604 kubelet[2999]: E1031 14:11:53.368527 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.368604 kubelet[2999]: W1031 14:11:53.368536 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.368604 kubelet[2999]: E1031 14:11:53.368544 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.368759 kubelet[2999]: E1031 14:11:53.368726 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.368759 kubelet[2999]: W1031 14:11:53.368733 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.368759 kubelet[2999]: E1031 14:11:53.368739 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:53.369241 kubelet[2999]: E1031 14:11:53.369224 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.369292 kubelet[2999]: W1031 14:11:53.369283 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.369319 kubelet[2999]: E1031 14:11:53.369293 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.369910 kubelet[2999]: E1031 14:11:53.369897 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.369910 kubelet[2999]: W1031 14:11:53.369908 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.369980 kubelet[2999]: E1031 14:11:53.369916 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.370027 kubelet[2999]: E1031 14:11:53.370015 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.370027 kubelet[2999]: W1031 14:11:53.370024 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.370102 kubelet[2999]: E1031 14:11:53.370038 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.370143 kubelet[2999]: E1031 14:11:53.370130 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.370143 kubelet[2999]: W1031 14:11:53.370139 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.370213 kubelet[2999]: E1031 14:11:53.370145 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.370374 kubelet[2999]: E1031 14:11:53.370360 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.370374 kubelet[2999]: W1031 14:11:53.370371 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.370452 kubelet[2999]: E1031 14:11:53.370377 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:53.370650 kubelet[2999]: E1031 14:11:53.370564 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.370650 kubelet[2999]: W1031 14:11:53.370571 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.370650 kubelet[2999]: E1031 14:11:53.370576 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.370945 kubelet[2999]: E1031 14:11:53.370926 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.370945 kubelet[2999]: W1031 14:11:53.370936 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.370945 kubelet[2999]: E1031 14:11:53.370944 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.374175 kubelet[2999]: I1031 14:11:53.374035 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69784d5fb6-xr67j" podStartSLOduration=1.604713759 podStartE2EDuration="4.374024164s" podCreationTimestamp="2025-10-31 14:11:49 +0000 UTC" firstStartedPulling="2025-10-31 14:11:49.715634194 +0000 UTC m=+49.696485583" lastFinishedPulling="2025-10-31 14:11:52.484944602 +0000 UTC m=+52.465795988" observedRunningTime="2025-10-31 14:11:53.373994488 +0000 UTC m=+53.354845886" watchObservedRunningTime="2025-10-31 14:11:53.374024164 +0000 UTC m=+53.354875563" Oct 31 14:11:53.470033 kubelet[2999]: E1031 14:11:53.470002 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.470033 kubelet[2999]: W1031 14:11:53.470025 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.470033 kubelet[2999]: E1031 14:11:53.470038 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.472895 kubelet[2999]: E1031 14:11:53.472884 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.472895 kubelet[2999]: W1031 14:11:53.472893 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.472957 kubelet[2999]: E1031 14:11:53.472899 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:53.473016 kubelet[2999]: E1031 14:11:53.473004 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.473016 kubelet[2999]: W1031 14:11:53.473012 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.477232 kubelet[2999]: E1031 14:11:53.473018 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.477232 kubelet[2999]: E1031 14:11:53.473121 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.477232 kubelet[2999]: W1031 14:11:53.473126 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.477232 kubelet[2999]: E1031 14:11:53.473130 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.477232 kubelet[2999]: E1031 14:11:53.473237 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.477232 kubelet[2999]: W1031 14:11:53.473242 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.477232 kubelet[2999]: E1031 14:11:53.473247 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.477232 kubelet[2999]: E1031 14:11:53.473344 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.477232 kubelet[2999]: W1031 14:11:53.473349 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.477232 kubelet[2999]: E1031 14:11:53.473354 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.484295 kubelet[2999]: E1031 14:11:53.479605 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.484295 kubelet[2999]: W1031 14:11:53.479610 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.484295 kubelet[2999]: E1031 14:11:53.479616 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:53.484295 kubelet[2999]: E1031 14:11:53.479847 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.484295 kubelet[2999]: W1031 14:11:53.479852 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.484295 kubelet[2999]: E1031 14:11:53.479857 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.484295 kubelet[2999]: E1031 14:11:53.479958 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.484295 kubelet[2999]: W1031 14:11:53.479962 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.484295 kubelet[2999]: E1031 14:11:53.479967 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.484295 kubelet[2999]: E1031 14:11:53.480152 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.484478 kubelet[2999]: W1031 14:11:53.480157 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.484478 kubelet[2999]: E1031 14:11:53.480162 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.484478 kubelet[2999]: E1031 14:11:53.480288 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.484478 kubelet[2999]: W1031 14:11:53.480293 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.484478 kubelet[2999]: E1031 14:11:53.480298 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.484478 kubelet[2999]: E1031 14:11:53.480393 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.484478 kubelet[2999]: W1031 14:11:53.480397 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.484478 kubelet[2999]: E1031 14:11:53.480401 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:53.484478 kubelet[2999]: E1031 14:11:53.480497 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.484478 kubelet[2999]: W1031 14:11:53.480501 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.484635 kubelet[2999]: E1031 14:11:53.480505 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.484635 kubelet[2999]: E1031 14:11:53.480645 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.484635 kubelet[2999]: W1031 14:11:53.480652 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.484635 kubelet[2999]: E1031 14:11:53.480659 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.484635 kubelet[2999]: E1031 14:11:53.480730 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.484635 kubelet[2999]: W1031 14:11:53.480734 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.484635 kubelet[2999]: E1031 14:11:53.480739 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.484635 kubelet[2999]: E1031 14:11:53.480905 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.484635 kubelet[2999]: W1031 14:11:53.480910 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.484635 kubelet[2999]: E1031 14:11:53.480915 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:53.484786 kubelet[2999]: E1031 14:11:53.481074 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.484786 kubelet[2999]: W1031 14:11:53.481081 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.484786 kubelet[2999]: E1031 14:11:53.481087 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:53.484786 kubelet[2999]: E1031 14:11:53.481281 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:53.484786 kubelet[2999]: W1031 14:11:53.481357 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:53.484786 kubelet[2999]: E1031 14:11:53.481365 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.238138 containerd[1690]: time="2025-10-31T14:11:54.238108526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:11:54.245434 containerd[1690]: time="2025-10-31T14:11:54.245415914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 31 14:11:54.254087 containerd[1690]: time="2025-10-31T14:11:54.254067003Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:11:54.265728 containerd[1690]: time="2025-10-31T14:11:54.265699494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:11:54.266278 containerd[1690]: time="2025-10-31T14:11:54.266259938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.780791154s" Oct 31 14:11:54.266315 containerd[1690]: time="2025-10-31T14:11:54.266281069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 31 14:11:54.279092 containerd[1690]: time="2025-10-31T14:11:54.279064793Z" level=info msg="CreateContainer within sandbox \"cc93fce26eaf4b92c0b4894f4c9d540e5b83117c49097a77c73b962fdec27071\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 31 14:11:54.303490 containerd[1690]: time="2025-10-31T14:11:54.302845260Z" level=info msg="Container e3e02f98df829458e1e85a05dad4fbe36d528eea4800b1f4153a9cb5d03ad822: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:11:54.319295 containerd[1690]: time="2025-10-31T14:11:54.319264821Z" level=info msg="CreateContainer within sandbox \"cc93fce26eaf4b92c0b4894f4c9d540e5b83117c49097a77c73b962fdec27071\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e3e02f98df829458e1e85a05dad4fbe36d528eea4800b1f4153a9cb5d03ad822\"" Oct 31 14:11:54.319828 containerd[1690]: time="2025-10-31T14:11:54.319810576Z" level=info msg="StartContainer for \"e3e02f98df829458e1e85a05dad4fbe36d528eea4800b1f4153a9cb5d03ad822\"" Oct 31 14:11:54.321078 containerd[1690]: time="2025-10-31T14:11:54.321057096Z" level=info msg="connecting to 
shim e3e02f98df829458e1e85a05dad4fbe36d528eea4800b1f4153a9cb5d03ad822" address="unix:///run/containerd/s/91dbfcb6c8e9870b5df861b61b7009f09e4d06c8d86429d20c34f303025e52b0" protocol=ttrpc version=3 Oct 31 14:11:54.343981 systemd[1]: Started cri-containerd-e3e02f98df829458e1e85a05dad4fbe36d528eea4800b1f4153a9cb5d03ad822.scope - libcontainer container e3e02f98df829458e1e85a05dad4fbe36d528eea4800b1f4153a9cb5d03ad822. Oct 31 14:11:54.376373 kubelet[2999]: E1031 14:11:54.376340 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.376373 kubelet[2999]: W1031 14:11:54.376356 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.378754 kubelet[2999]: E1031 14:11:54.376370 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.378754 kubelet[2999]: E1031 14:11:54.376500 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.378754 kubelet[2999]: W1031 14:11:54.376506 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.378754 kubelet[2999]: E1031 14:11:54.376513 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.378754 kubelet[2999]: E1031 14:11:54.376598 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.378754 kubelet[2999]: W1031 14:11:54.376602 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.378754 kubelet[2999]: E1031 14:11:54.376607 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.378754 kubelet[2999]: E1031 14:11:54.376698 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.378754 kubelet[2999]: W1031 14:11:54.376702 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.378754 kubelet[2999]: E1031 14:11:54.376707 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:54.379124 kubelet[2999]: E1031 14:11:54.376830 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.379124 kubelet[2999]: W1031 14:11:54.376835 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.379124 kubelet[2999]: E1031 14:11:54.376839 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.379124 kubelet[2999]: E1031 14:11:54.376914 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.379124 kubelet[2999]: W1031 14:11:54.376918 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.379124 kubelet[2999]: E1031 14:11:54.376922 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.379124 kubelet[2999]: E1031 14:11:54.376987 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.379124 kubelet[2999]: W1031 14:11:54.376993 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.379124 kubelet[2999]: E1031 14:11:54.376997 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.379124 kubelet[2999]: E1031 14:11:54.377067 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.379465 kubelet[2999]: W1031 14:11:54.377071 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.379465 kubelet[2999]: E1031 14:11:54.377076 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.379465 kubelet[2999]: E1031 14:11:54.377165 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.379465 kubelet[2999]: W1031 14:11:54.377169 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.379465 kubelet[2999]: E1031 14:11:54.377173 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:54.379465 kubelet[2999]: E1031 14:11:54.377267 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.379465 kubelet[2999]: W1031 14:11:54.377272 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.379465 kubelet[2999]: E1031 14:11:54.377276 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.379465 kubelet[2999]: E1031 14:11:54.377368 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.379465 kubelet[2999]: W1031 14:11:54.377373 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.379816 kubelet[2999]: E1031 14:11:54.377378 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.379816 kubelet[2999]: E1031 14:11:54.377467 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.379816 kubelet[2999]: W1031 14:11:54.377472 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.379816 kubelet[2999]: E1031 14:11:54.377476 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.379816 kubelet[2999]: E1031 14:11:54.377565 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.379816 kubelet[2999]: W1031 14:11:54.377571 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.379816 kubelet[2999]: E1031 14:11:54.377577 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.379816 kubelet[2999]: E1031 14:11:54.377670 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.379816 kubelet[2999]: W1031 14:11:54.377675 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.379816 kubelet[2999]: E1031 14:11:54.377679 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:54.380140 kubelet[2999]: E1031 14:11:54.377772 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.380140 kubelet[2999]: W1031 14:11:54.377776 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.380140 kubelet[2999]: E1031 14:11:54.377781 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.380140 kubelet[2999]: E1031 14:11:54.378989 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.380140 kubelet[2999]: W1031 14:11:54.378995 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.380140 kubelet[2999]: E1031 14:11:54.379000 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.380140 kubelet[2999]: E1031 14:11:54.379093 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.380140 kubelet[2999]: W1031 14:11:54.379103 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.380140 kubelet[2999]: E1031 14:11:54.379108 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.380140 kubelet[2999]: E1031 14:11:54.379198 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.380292 kubelet[2999]: W1031 14:11:54.379203 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.380292 kubelet[2999]: E1031 14:11:54.379208 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.380292 kubelet[2999]: E1031 14:11:54.379332 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.380292 kubelet[2999]: W1031 14:11:54.379337 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.380292 kubelet[2999]: E1031 14:11:54.379341 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:54.380292 kubelet[2999]: E1031 14:11:54.379415 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.380292 kubelet[2999]: W1031 14:11:54.379420 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.380292 kubelet[2999]: E1031 14:11:54.379425 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.380292 kubelet[2999]: E1031 14:11:54.379509 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.380292 kubelet[2999]: W1031 14:11:54.379514 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.380445 kubelet[2999]: E1031 14:11:54.379518 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.380445 kubelet[2999]: E1031 14:11:54.379614 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.380445 kubelet[2999]: W1031 14:11:54.379619 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.380445 kubelet[2999]: E1031 14:11:54.379623 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.380445 kubelet[2999]: E1031 14:11:54.379901 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.380445 kubelet[2999]: W1031 14:11:54.379908 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.380445 kubelet[2999]: E1031 14:11:54.379915 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.381304 kubelet[2999]: E1031 14:11:54.380834 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.381304 kubelet[2999]: W1031 14:11:54.380841 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.381304 kubelet[2999]: E1031 14:11:54.380847 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:54.381304 kubelet[2999]: E1031 14:11:54.381005 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.381304 kubelet[2999]: W1031 14:11:54.381010 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.381304 kubelet[2999]: E1031 14:11:54.381015 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.381304 kubelet[2999]: E1031 14:11:54.381099 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.381304 kubelet[2999]: W1031 14:11:54.381104 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.381304 kubelet[2999]: E1031 14:11:54.381109 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.381304 kubelet[2999]: E1031 14:11:54.381248 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.381570 kubelet[2999]: W1031 14:11:54.381252 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.381570 kubelet[2999]: E1031 14:11:54.381258 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.382025 kubelet[2999]: E1031 14:11:54.381844 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.382025 kubelet[2999]: W1031 14:11:54.381851 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.382025 kubelet[2999]: E1031 14:11:54.381856 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.382197 kubelet[2999]: E1031 14:11:54.382184 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.382197 kubelet[2999]: W1031 14:11:54.382195 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.382246 kubelet[2999]: E1031 14:11:54.382202 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:11:54.382295 kubelet[2999]: E1031 14:11:54.382288 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.382295 kubelet[2999]: W1031 14:11:54.382294 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.382343 kubelet[2999]: E1031 14:11:54.382298 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.382590 kubelet[2999]: E1031 14:11:54.382579 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.382590 kubelet[2999]: W1031 14:11:54.382587 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.382632 kubelet[2999]: E1031 14:11:54.382593 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.382770 kubelet[2999]: E1031 14:11:54.382725 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.383392 kubelet[2999]: W1031 14:11:54.382935 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.383392 kubelet[2999]: E1031 14:11:54.382945 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.383392 kubelet[2999]: E1031 14:11:54.383118 2999 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:11:54.383392 kubelet[2999]: W1031 14:11:54.383123 2999 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:11:54.383392 kubelet[2999]: E1031 14:11:54.383127 2999 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:11:54.386458 containerd[1690]: time="2025-10-31T14:11:54.384582978Z" level=info msg="StartContainer for \"e3e02f98df829458e1e85a05dad4fbe36d528eea4800b1f4153a9cb5d03ad822\" returns successfully" Oct 31 14:11:54.389755 systemd[1]: cri-containerd-e3e02f98df829458e1e85a05dad4fbe36d528eea4800b1f4153a9cb5d03ad822.scope: Deactivated successfully. 
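The block of kubelet messages above is the FlexVolume prober executing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and trying to parse its stdout as JSON; because the executable is missing, stdout is empty and the unmarshal fails with "unexpected end of JSON input". As a rough sketch of the handshake the prober is looking for (this is not the real nodeagent~uds driver, which never appears in this log; the file and type names below are made up for illustration), a minimal FlexVolume-style driver answering the init call could look like this in Go:

// flexvol_stub.go — a minimal, hypothetical FlexVolume-style driver skeleton.
// It only answers the "init" call the kubelet prober issues above; a real driver
// would also implement the mount/unmount operations. Names are illustrative.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object the kubelet tries to unmarshal from the driver's stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no operation given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// An empty stdout is what produces "unexpected end of JSON input" above;
		// answering init with a JSON status object is what the prober expects.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		reply(driverStatus{Status: "Not supported"})
	}
}

Without some executable at that path answering init this way, the prober keeps retrying the directory and logging the same three lines, which is exactly the repetition seen above.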
Oct 31 14:11:54.417883 containerd[1690]: time="2025-10-31T14:11:54.417844409Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3e02f98df829458e1e85a05dad4fbe36d528eea4800b1f4153a9cb5d03ad822\" id:\"e3e02f98df829458e1e85a05dad4fbe36d528eea4800b1f4153a9cb5d03ad822\" pid:3685 exited_at:{seconds:1761919914 nanos:393312171}" Oct 31 14:11:54.432509 containerd[1690]: time="2025-10-31T14:11:54.432427394Z" level=info msg="received exit event container_id:\"e3e02f98df829458e1e85a05dad4fbe36d528eea4800b1f4153a9cb5d03ad822\" id:\"e3e02f98df829458e1e85a05dad4fbe36d528eea4800b1f4153a9cb5d03ad822\" pid:3685 exited_at:{seconds:1761919914 nanos:393312171}" Oct 31 14:11:54.452851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3e02f98df829458e1e85a05dad4fbe36d528eea4800b1f4153a9cb5d03ad822-rootfs.mount: Deactivated successfully. Oct 31 14:11:55.231385 kubelet[2999]: E1031 14:11:55.231348 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:11:55.370808 containerd[1690]: time="2025-10-31T14:11:55.370464558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 31 14:11:57.231783 kubelet[2999]: E1031 14:11:57.231737 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:11:59.231896 kubelet[2999]: E1031 14:11:59.231846 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:12:01.231608 kubelet[2999]: E1031 14:12:01.231552 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:12:02.643983 containerd[1690]: time="2025-10-31T14:12:02.643932054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:12:02.657132 containerd[1690]: time="2025-10-31T14:12:02.657092597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 31 14:12:02.700055 containerd[1690]: time="2025-10-31T14:12:02.700006543Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:12:02.750115 containerd[1690]: time="2025-10-31T14:12:02.749336018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:12:02.750251 containerd[1690]: time="2025-10-31T14:12:02.750226773Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 7.378051566s" Oct 31 14:12:02.750288 containerd[1690]: time="2025-10-31T14:12:02.750251473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 31 14:12:02.798748 containerd[1690]: time="2025-10-31T14:12:02.798720580Z" level=info msg="CreateContainer within sandbox \"cc93fce26eaf4b92c0b4894f4c9d540e5b83117c49097a77c73b962fdec27071\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 14:12:02.820955 containerd[1690]: time="2025-10-31T14:12:02.820873507Z" level=info msg="Container 5128d0699d7635c5dce7bc69de2dec6c403e5261088733b48d4a8204b0dbfe59: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:12:02.853470 containerd[1690]: time="2025-10-31T14:12:02.853380817Z" level=info msg="CreateContainer within sandbox \"cc93fce26eaf4b92c0b4894f4c9d540e5b83117c49097a77c73b962fdec27071\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5128d0699d7635c5dce7bc69de2dec6c403e5261088733b48d4a8204b0dbfe59\"" Oct 31 14:12:02.854841 containerd[1690]: time="2025-10-31T14:12:02.854129324Z" level=info msg="StartContainer for \"5128d0699d7635c5dce7bc69de2dec6c403e5261088733b48d4a8204b0dbfe59\"" Oct 31 14:12:02.856951 containerd[1690]: time="2025-10-31T14:12:02.856916869Z" level=info msg="connecting to shim 5128d0699d7635c5dce7bc69de2dec6c403e5261088733b48d4a8204b0dbfe59" address="unix:///run/containerd/s/91dbfcb6c8e9870b5df861b61b7009f09e4d06c8d86429d20c34f303025e52b0" protocol=ttrpc version=3 Oct 31 14:12:02.894953 systemd[1]: Started cri-containerd-5128d0699d7635c5dce7bc69de2dec6c403e5261088733b48d4a8204b0dbfe59.scope - libcontainer container 5128d0699d7635c5dce7bc69de2dec6c403e5261088733b48d4a8204b0dbfe59. Oct 31 14:12:02.948318 containerd[1690]: time="2025-10-31T14:12:02.948290646Z" level=info msg="StartContainer for \"5128d0699d7635c5dce7bc69de2dec6c403e5261088733b48d4a8204b0dbfe59\" returns successfully" Oct 31 14:12:03.232041 kubelet[2999]: E1031 14:12:03.231945 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:12:05.231816 kubelet[2999]: E1031 14:12:05.231541 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:12:06.552509 systemd[1]: cri-containerd-5128d0699d7635c5dce7bc69de2dec6c403e5261088733b48d4a8204b0dbfe59.scope: Deactivated successfully. Oct 31 14:12:06.553296 systemd[1]: cri-containerd-5128d0699d7635c5dce7bc69de2dec6c403e5261088733b48d4a8204b0dbfe59.scope: Consumed 336ms CPU time, 167.1M memory peak, 16K read from disk, 171.3M written to disk. 
Oct 31 14:12:06.558028 containerd[1690]: time="2025-10-31T14:12:06.557979414Z" level=info msg="received exit event container_id:\"5128d0699d7635c5dce7bc69de2dec6c403e5261088733b48d4a8204b0dbfe59\" id:\"5128d0699d7635c5dce7bc69de2dec6c403e5261088733b48d4a8204b0dbfe59\" pid:3785 exited_at:{seconds:1761919926 nanos:557457698}" Oct 31 14:12:06.559189 containerd[1690]: time="2025-10-31T14:12:06.559165847Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5128d0699d7635c5dce7bc69de2dec6c403e5261088733b48d4a8204b0dbfe59\" id:\"5128d0699d7635c5dce7bc69de2dec6c403e5261088733b48d4a8204b0dbfe59\" pid:3785 exited_at:{seconds:1761919926 nanos:557457698}" Oct 31 14:12:06.602958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5128d0699d7635c5dce7bc69de2dec6c403e5261088733b48d4a8204b0dbfe59-rootfs.mount: Deactivated successfully. Oct 31 14:12:06.767118 kubelet[2999]: I1031 14:12:06.766942 2999 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 31 14:12:07.018504 systemd[1]: Created slice kubepods-burstable-poda7be1d56_2672_493d_a7ed_388bbed8d7a1.slice - libcontainer container kubepods-burstable-poda7be1d56_2672_493d_a7ed_388bbed8d7a1.slice. Oct 31 14:12:07.033184 systemd[1]: Created slice kubepods-besteffort-podea6f82f9_8ca6_4f27_9ac4_6759edba2cc0.slice - libcontainer container kubepods-besteffort-podea6f82f9_8ca6_4f27_9ac4_6759edba2cc0.slice. Oct 31 14:12:07.040277 systemd[1]: Created slice kubepods-burstable-podf6f90695_af57_4839_916e_e4a5c9c9cd41.slice - libcontainer container kubepods-burstable-podf6f90695_af57_4839_916e_e4a5c9c9cd41.slice. Oct 31 14:12:07.044453 systemd[1]: Created slice kubepods-besteffort-pod2e033899_3e9d_4bfb_ab07_0f3a26593557.slice - libcontainer container kubepods-besteffort-pod2e033899_3e9d_4bfb_ab07_0f3a26593557.slice. Oct 31 14:12:07.058084 systemd[1]: Created slice kubepods-besteffort-pod191ace75_d386_424f_8d71_10eff7da195e.slice - libcontainer container kubepods-besteffort-pod191ace75_d386_424f_8d71_10eff7da195e.slice. Oct 31 14:12:07.062447 systemd[1]: Created slice kubepods-besteffort-pod1aeae2c5_1e57_4c36_a44b_9f56d90d27b3.slice - libcontainer container kubepods-besteffort-pod1aeae2c5_1e57_4c36_a44b_9f56d90d27b3.slice. Oct 31 14:12:07.070934 systemd[1]: Created slice kubepods-besteffort-pode90732a1_22ed_4940_ac91_cf89a2ddbb03.slice - libcontainer container kubepods-besteffort-pode90732a1_22ed_4940_ac91_cf89a2ddbb03.slice. 
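The kubepods-...-pod....slice names systemd creates above are derived from the pod UIDs that appear later in the log: the QoS class picks the parent slice, and the dashes inside the UID are replaced with underscores, since a dash in a systemd slice name denotes hierarchy. A small sketch that reproduces the observed naming (podSliceName is an illustrative helper, not a kubelet function):

// slice_name.go — reconstructs the kubepods slice names seen in the systemd messages above
// from a pod UID and QoS class.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	// "-" in a slice name separates hierarchy levels, so the UID's dashes
	// are swapped for underscores, as the logged names show.
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	fmt.Println(podSliceName("burstable", "a7be1d56-2672-493d-a7ed-388bbed8d7a1"))
	fmt.Println(podSliceName("besteffort", "ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0"))
	// Expected to match the "Created slice ..." lines:
	// kubepods-burstable-poda7be1d56_2672_493d_a7ed_388bbed8d7a1.slice
	// kubepods-besteffort-podea6f82f9_8ca6_4f27_9ac4_6759edba2cc0.slice
}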
Oct 31 14:12:07.079771 kubelet[2999]: I1031 14:12:07.079743 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z45x2\" (UniqueName: \"kubernetes.io/projected/1aeae2c5-1e57-4c36-a44b-9f56d90d27b3-kube-api-access-z45x2\") pod \"goldmane-666569f655-r5k7t\" (UID: \"1aeae2c5-1e57-4c36-a44b-9f56d90d27b3\") " pod="calico-system/goldmane-666569f655-r5k7t" Oct 31 14:12:07.079771 kubelet[2999]: I1031 14:12:07.079769 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5262\" (UniqueName: \"kubernetes.io/projected/f6f90695-af57-4839-916e-e4a5c9c9cd41-kube-api-access-c5262\") pod \"coredns-674b8bbfcf-88wcn\" (UID: \"f6f90695-af57-4839-916e-e4a5c9c9cd41\") " pod="kube-system/coredns-674b8bbfcf-88wcn" Oct 31 14:12:07.080012 kubelet[2999]: I1031 14:12:07.079798 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aeae2c5-1e57-4c36-a44b-9f56d90d27b3-config\") pod \"goldmane-666569f655-r5k7t\" (UID: \"1aeae2c5-1e57-4c36-a44b-9f56d90d27b3\") " pod="calico-system/goldmane-666569f655-r5k7t" Oct 31 14:12:07.080012 kubelet[2999]: I1031 14:12:07.079811 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/191ace75-d386-424f-8d71-10eff7da195e-calico-apiserver-certs\") pod \"calico-apiserver-f6ffcff69-wstnf\" (UID: \"191ace75-d386-424f-8d71-10eff7da195e\") " pod="calico-apiserver/calico-apiserver-f6ffcff69-wstnf" Oct 31 14:12:07.080012 kubelet[2999]: I1031 14:12:07.079822 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx24w\" (UniqueName: \"kubernetes.io/projected/a7be1d56-2672-493d-a7ed-388bbed8d7a1-kube-api-access-sx24w\") pod \"coredns-674b8bbfcf-tsnkd\" (UID: \"a7be1d56-2672-493d-a7ed-388bbed8d7a1\") " pod="kube-system/coredns-674b8bbfcf-tsnkd" Oct 31 14:12:07.080012 kubelet[2999]: I1031 14:12:07.079831 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e90732a1-22ed-4940-ac91-cf89a2ddbb03-whisker-backend-key-pair\") pod \"whisker-79664d6db-sstb7\" (UID: \"e90732a1-22ed-4940-ac91-cf89a2ddbb03\") " pod="calico-system/whisker-79664d6db-sstb7" Oct 31 14:12:07.080012 kubelet[2999]: I1031 14:12:07.079839 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt2sm\" (UniqueName: \"kubernetes.io/projected/e90732a1-22ed-4940-ac91-cf89a2ddbb03-kube-api-access-mt2sm\") pod \"whisker-79664d6db-sstb7\" (UID: \"e90732a1-22ed-4940-ac91-cf89a2ddbb03\") " pod="calico-system/whisker-79664d6db-sstb7" Oct 31 14:12:07.080105 kubelet[2999]: I1031 14:12:07.079849 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e033899-3e9d-4bfb-ab07-0f3a26593557-tigera-ca-bundle\") pod \"calico-kube-controllers-6c4674d488-9cl4s\" (UID: \"2e033899-3e9d-4bfb-ab07-0f3a26593557\") " pod="calico-system/calico-kube-controllers-6c4674d488-9cl4s" Oct 31 14:12:07.080105 kubelet[2999]: I1031 14:12:07.079870 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf9gv\" (UniqueName: 
\"kubernetes.io/projected/ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0-kube-api-access-qf9gv\") pod \"calico-apiserver-f6ffcff69-lv42k\" (UID: \"ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0\") " pod="calico-apiserver/calico-apiserver-f6ffcff69-lv42k" Oct 31 14:12:07.080105 kubelet[2999]: I1031 14:12:07.079885 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1aeae2c5-1e57-4c36-a44b-9f56d90d27b3-goldmane-ca-bundle\") pod \"goldmane-666569f655-r5k7t\" (UID: \"1aeae2c5-1e57-4c36-a44b-9f56d90d27b3\") " pod="calico-system/goldmane-666569f655-r5k7t" Oct 31 14:12:07.080105 kubelet[2999]: I1031 14:12:07.079894 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6f90695-af57-4839-916e-e4a5c9c9cd41-config-volume\") pod \"coredns-674b8bbfcf-88wcn\" (UID: \"f6f90695-af57-4839-916e-e4a5c9c9cd41\") " pod="kube-system/coredns-674b8bbfcf-88wcn" Oct 31 14:12:07.080105 kubelet[2999]: I1031 14:12:07.079903 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e90732a1-22ed-4940-ac91-cf89a2ddbb03-whisker-ca-bundle\") pod \"whisker-79664d6db-sstb7\" (UID: \"e90732a1-22ed-4940-ac91-cf89a2ddbb03\") " pod="calico-system/whisker-79664d6db-sstb7" Oct 31 14:12:07.080191 kubelet[2999]: I1031 14:12:07.079945 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2hxw\" (UniqueName: \"kubernetes.io/projected/2e033899-3e9d-4bfb-ab07-0f3a26593557-kube-api-access-r2hxw\") pod \"calico-kube-controllers-6c4674d488-9cl4s\" (UID: \"2e033899-3e9d-4bfb-ab07-0f3a26593557\") " pod="calico-system/calico-kube-controllers-6c4674d488-9cl4s" Oct 31 14:12:07.080191 kubelet[2999]: I1031 14:12:07.079980 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqzv6\" (UniqueName: \"kubernetes.io/projected/191ace75-d386-424f-8d71-10eff7da195e-kube-api-access-gqzv6\") pod \"calico-apiserver-f6ffcff69-wstnf\" (UID: \"191ace75-d386-424f-8d71-10eff7da195e\") " pod="calico-apiserver/calico-apiserver-f6ffcff69-wstnf" Oct 31 14:12:07.080191 kubelet[2999]: I1031 14:12:07.079994 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1aeae2c5-1e57-4c36-a44b-9f56d90d27b3-goldmane-key-pair\") pod \"goldmane-666569f655-r5k7t\" (UID: \"1aeae2c5-1e57-4c36-a44b-9f56d90d27b3\") " pod="calico-system/goldmane-666569f655-r5k7t" Oct 31 14:12:07.080191 kubelet[2999]: I1031 14:12:07.080017 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7be1d56-2672-493d-a7ed-388bbed8d7a1-config-volume\") pod \"coredns-674b8bbfcf-tsnkd\" (UID: \"a7be1d56-2672-493d-a7ed-388bbed8d7a1\") " pod="kube-system/coredns-674b8bbfcf-tsnkd" Oct 31 14:12:07.080191 kubelet[2999]: I1031 14:12:07.080028 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0-calico-apiserver-certs\") pod \"calico-apiserver-f6ffcff69-lv42k\" (UID: \"ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0\") " 
pod="calico-apiserver/calico-apiserver-f6ffcff69-lv42k" Oct 31 14:12:07.237892 systemd[1]: Created slice kubepods-besteffort-pod1f06339a_fad3_4388_83e5_d004196ee955.slice - libcontainer container kubepods-besteffort-pod1f06339a_fad3_4388_83e5_d004196ee955.slice. Oct 31 14:12:07.260880 containerd[1690]: time="2025-10-31T14:12:07.260492399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxdrt,Uid:1f06339a-fad3-4388-83e5-d004196ee955,Namespace:calico-system,Attempt:0,}" Oct 31 14:12:07.369860 containerd[1690]: time="2025-10-31T14:12:07.369138110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tsnkd,Uid:a7be1d56-2672-493d-a7ed-388bbed8d7a1,Namespace:kube-system,Attempt:0,}" Oct 31 14:12:07.370168 containerd[1690]: time="2025-10-31T14:12:07.369872346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6ffcff69-lv42k,Uid:ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0,Namespace:calico-apiserver,Attempt:0,}" Oct 31 14:12:07.370274 containerd[1690]: time="2025-10-31T14:12:07.369899332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r5k7t,Uid:1aeae2c5-1e57-4c36-a44b-9f56d90d27b3,Namespace:calico-system,Attempt:0,}" Oct 31 14:12:07.370411 containerd[1690]: time="2025-10-31T14:12:07.369919662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6ffcff69-wstnf,Uid:191ace75-d386-424f-8d71-10eff7da195e,Namespace:calico-apiserver,Attempt:0,}" Oct 31 14:12:07.370545 containerd[1690]: time="2025-10-31T14:12:07.369991665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-88wcn,Uid:f6f90695-af57-4839-916e-e4a5c9c9cd41,Namespace:kube-system,Attempt:0,}" Oct 31 14:12:07.370702 containerd[1690]: time="2025-10-31T14:12:07.370009467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c4674d488-9cl4s,Uid:2e033899-3e9d-4bfb-ab07-0f3a26593557,Namespace:calico-system,Attempt:0,}" Oct 31 14:12:07.372900 containerd[1690]: time="2025-10-31T14:12:07.372881940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79664d6db-sstb7,Uid:e90732a1-22ed-4940-ac91-cf89a2ddbb03,Namespace:calico-system,Attempt:0,}" Oct 31 14:12:07.590476 containerd[1690]: time="2025-10-31T14:12:07.589846970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 14:12:07.983833 containerd[1690]: time="2025-10-31T14:12:07.983449847Z" level=error msg="Failed to destroy network for sandbox \"474f8ead47be89798c7aec5a961408f3066f2a0008c3c927532eacf8ddb367c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:07.985732 systemd[1]: run-netns-cni\x2dde9f85e0\x2df681\x2d483d\x2de92c\x2dc56198dd129c.mount: Deactivated successfully. 
Oct 31 14:12:07.990713 containerd[1690]: time="2025-10-31T14:12:07.990667673Z" level=error msg="Failed to destroy network for sandbox \"dca8e67b475f89848ae4a0b723ef84fe3a1bf81242c264c4ed21b5db6dd33296\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:07.992314 containerd[1690]: time="2025-10-31T14:12:07.992220667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6ffcff69-wstnf,Uid:191ace75-d386-424f-8d71-10eff7da195e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"474f8ead47be89798c7aec5a961408f3066f2a0008c3c927532eacf8ddb367c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:07.995349 systemd[1]: run-netns-cni\x2d1d0f7848\x2d3e08\x2d5554\x2d9af3\x2d4ed5e38db38f.mount: Deactivated successfully. Oct 31 14:12:08.000930 containerd[1690]: time="2025-10-31T14:12:08.000859945Z" level=error msg="Failed to destroy network for sandbox \"e49355d5c4c11339fd9ce3612152542fd0288c64d92c8899f8f8f47fa3347574\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.003157 containerd[1690]: time="2025-10-31T14:12:08.003124888Z" level=error msg="Failed to destroy network for sandbox \"744b330b28fe71bc8b18130999affe8c5b28ccc6fc7cb032787bdfe49efcc840\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.003861 containerd[1690]: time="2025-10-31T14:12:08.003433050Z" level=error msg="Failed to destroy network for sandbox \"08538719940d8c3929975c418eae1cc36f006340ffe7ecde23a243bfd8917590\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.004346 systemd[1]: run-netns-cni\x2d0f2c7802\x2d7f5a\x2d1e1a\x2d6cd1\x2ddb5bea098e48.mount: Deactivated successfully. 
Oct 31 14:12:08.004474 containerd[1690]: time="2025-10-31T14:12:08.003452186Z" level=error msg="Failed to destroy network for sandbox \"cd724ffa912b7e8de2648ab9ae680a1c5831b8c57805b42edbb9b64ff06ee510\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.008844 kubelet[2999]: E1031 14:12:08.004880 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"474f8ead47be89798c7aec5a961408f3066f2a0008c3c927532eacf8ddb367c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.008844 kubelet[2999]: E1031 14:12:08.006735 2999 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"474f8ead47be89798c7aec5a961408f3066f2a0008c3c927532eacf8ddb367c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f6ffcff69-wstnf" Oct 31 14:12:08.008844 kubelet[2999]: E1031 14:12:08.006776 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"474f8ead47be89798c7aec5a961408f3066f2a0008c3c927532eacf8ddb367c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f6ffcff69-wstnf" Oct 31 14:12:08.015003 kubelet[2999]: E1031 14:12:08.007723 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f6ffcff69-wstnf_calico-apiserver(191ace75-d386-424f-8d71-10eff7da195e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f6ffcff69-wstnf_calico-apiserver(191ace75-d386-424f-8d71-10eff7da195e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"474f8ead47be89798c7aec5a961408f3066f2a0008c3c927532eacf8ddb367c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-wstnf" podUID="191ace75-d386-424f-8d71-10eff7da195e" Oct 31 14:12:08.015089 containerd[1690]: time="2025-10-31T14:12:08.012034256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r5k7t,Uid:1aeae2c5-1e57-4c36-a44b-9f56d90d27b3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dca8e67b475f89848ae4a0b723ef84fe3a1bf81242c264c4ed21b5db6dd33296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.013069 systemd[1]: run-netns-cni\x2df88875d0\x2da2f1\x2dbf1c\x2dce7d\x2d30702844bd3e.mount: Deactivated successfully. Oct 31 14:12:08.013205 systemd[1]: run-netns-cni\x2d04d1934b\x2d97be\x2de8f8\x2d9386\x2dbd24f23614b2.mount: Deactivated successfully. 
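The run-netns-cni\x2d....mount units systemd deactivates above are the mount units for the /run/netns/cni-... namespaces the failed sandboxes left behind; the \x2d sequences are systemd's unit-name escaping, where "/" becomes "-" and a literal "-" is hex-escaped. A sketch that reproduces the escaping for one of the paths in this log (escapePath is an illustrative re-implementation; the systemd-escape tool does this for real, and edge cases like a leading dot are ignored here):

// unit_escape.go — maps a /run/netns/cni-... path to the mount unit name seen above.
package main

import (
	"fmt"
	"strings"
)

func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for _, c := range []byte(p) {
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
			c == '_', c == '.', c == ':':
			b.WriteByte(c) // characters allowed verbatim in unit names
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // everything else, including '-', is hex-escaped
		}
	}
	return b.String()
}

func main() {
	// One of the CNI network namespaces cleaned up in the log above.
	fmt.Println(escapePath("/run/netns/cni-de9f85e0-f681-483d-e92c-c56198dd129c") + ".mount")
	// Expected: run-netns-cni\x2dde9f85e0\x2df681\x2d483d\x2de92c\x2dc56198dd129c.mount
}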
Oct 31 14:12:08.013276 systemd[1]: run-netns-cni\x2dd1615f84\x2d6479\x2dfeb2\x2debf1\x2d961569a9ae3a.mount: Deactivated successfully. Oct 31 14:12:08.019579 kubelet[2999]: E1031 14:12:08.017975 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dca8e67b475f89848ae4a0b723ef84fe3a1bf81242c264c4ed21b5db6dd33296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.019579 kubelet[2999]: E1031 14:12:08.018062 2999 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dca8e67b475f89848ae4a0b723ef84fe3a1bf81242c264c4ed21b5db6dd33296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-r5k7t" Oct 31 14:12:08.019579 kubelet[2999]: E1031 14:12:08.018095 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dca8e67b475f89848ae4a0b723ef84fe3a1bf81242c264c4ed21b5db6dd33296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-r5k7t" Oct 31 14:12:08.019757 kubelet[2999]: E1031 14:12:08.018148 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-r5k7t_calico-system(1aeae2c5-1e57-4c36-a44b-9f56d90d27b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-r5k7t_calico-system(1aeae2c5-1e57-4c36-a44b-9f56d90d27b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dca8e67b475f89848ae4a0b723ef84fe3a1bf81242c264c4ed21b5db6dd33296\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-r5k7t" podUID="1aeae2c5-1e57-4c36-a44b-9f56d90d27b3" Oct 31 14:12:08.023203 containerd[1690]: time="2025-10-31T14:12:08.023153297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-88wcn,Uid:f6f90695-af57-4839-916e-e4a5c9c9cd41,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e49355d5c4c11339fd9ce3612152542fd0288c64d92c8899f8f8f47fa3347574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.023627 kubelet[2999]: E1031 14:12:08.023567 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e49355d5c4c11339fd9ce3612152542fd0288c64d92c8899f8f8f47fa3347574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.023707 kubelet[2999]: E1031 14:12:08.023647 2999 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"e49355d5c4c11339fd9ce3612152542fd0288c64d92c8899f8f8f47fa3347574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-88wcn" Oct 31 14:12:08.023707 kubelet[2999]: E1031 14:12:08.023672 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e49355d5c4c11339fd9ce3612152542fd0288c64d92c8899f8f8f47fa3347574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-88wcn" Oct 31 14:12:08.023777 kubelet[2999]: E1031 14:12:08.023728 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-88wcn_kube-system(f6f90695-af57-4839-916e-e4a5c9c9cd41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-88wcn_kube-system(f6f90695-af57-4839-916e-e4a5c9c9cd41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e49355d5c4c11339fd9ce3612152542fd0288c64d92c8899f8f8f47fa3347574\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-88wcn" podUID="f6f90695-af57-4839-916e-e4a5c9c9cd41" Oct 31 14:12:08.025127 containerd[1690]: time="2025-10-31T14:12:08.024937829Z" level=error msg="Failed to destroy network for sandbox \"4deb4ad87873964ef3ecfdff04f989ede9f4a25544fde504fce07ff4c75f7656\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.026772 containerd[1690]: time="2025-10-31T14:12:08.026722656Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tsnkd,Uid:a7be1d56-2672-493d-a7ed-388bbed8d7a1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"744b330b28fe71bc8b18130999affe8c5b28ccc6fc7cb032787bdfe49efcc840\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.027293 containerd[1690]: time="2025-10-31T14:12:08.027254603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6ffcff69-lv42k,Uid:ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"08538719940d8c3929975c418eae1cc36f006340ffe7ecde23a243bfd8917590\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.027384 kubelet[2999]: E1031 14:12:08.027344 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"744b330b28fe71bc8b18130999affe8c5b28ccc6fc7cb032787bdfe49efcc840\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.027426 kubelet[2999]: E1031 14:12:08.027395 2999 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"744b330b28fe71bc8b18130999affe8c5b28ccc6fc7cb032787bdfe49efcc840\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tsnkd" Oct 31 14:12:08.027426 kubelet[2999]: E1031 14:12:08.027416 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"744b330b28fe71bc8b18130999affe8c5b28ccc6fc7cb032787bdfe49efcc840\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tsnkd" Oct 31 14:12:08.028249 containerd[1690]: time="2025-10-31T14:12:08.028216657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxdrt,Uid:1f06339a-fad3-4388-83e5-d004196ee955,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd724ffa912b7e8de2648ab9ae680a1c5831b8c57805b42edbb9b64ff06ee510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.028519 kubelet[2999]: E1031 14:12:08.028497 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd724ffa912b7e8de2648ab9ae680a1c5831b8c57805b42edbb9b64ff06ee510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.028560 kubelet[2999]: E1031 14:12:08.028535 2999 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd724ffa912b7e8de2648ab9ae680a1c5831b8c57805b42edbb9b64ff06ee510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wxdrt" Oct 31 14:12:08.028759 kubelet[2999]: E1031 14:12:08.028555 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd724ffa912b7e8de2648ab9ae680a1c5831b8c57805b42edbb9b64ff06ee510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wxdrt" Oct 31 14:12:08.028759 kubelet[2999]: E1031 14:12:08.028598 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wxdrt_calico-system(1f06339a-fad3-4388-83e5-d004196ee955)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wxdrt_calico-system(1f06339a-fad3-4388-83e5-d004196ee955)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"cd724ffa912b7e8de2648ab9ae680a1c5831b8c57805b42edbb9b64ff06ee510\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:12:08.028759 kubelet[2999]: E1031 14:12:08.028630 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08538719940d8c3929975c418eae1cc36f006340ffe7ecde23a243bfd8917590\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.029019 kubelet[2999]: E1031 14:12:08.028649 2999 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08538719940d8c3929975c418eae1cc36f006340ffe7ecde23a243bfd8917590\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f6ffcff69-lv42k" Oct 31 14:12:08.029019 kubelet[2999]: E1031 14:12:08.028663 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08538719940d8c3929975c418eae1cc36f006340ffe7ecde23a243bfd8917590\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f6ffcff69-lv42k" Oct 31 14:12:08.029019 kubelet[2999]: E1031 14:12:08.028690 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f6ffcff69-lv42k_calico-apiserver(ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f6ffcff69-lv42k_calico-apiserver(ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08538719940d8c3929975c418eae1cc36f006340ffe7ecde23a243bfd8917590\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-lv42k" podUID="ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0" Oct 31 14:12:08.029103 kubelet[2999]: E1031 14:12:08.027576 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tsnkd_kube-system(a7be1d56-2672-493d-a7ed-388bbed8d7a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tsnkd_kube-system(a7be1d56-2672-493d-a7ed-388bbed8d7a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"744b330b28fe71bc8b18130999affe8c5b28ccc6fc7cb032787bdfe49efcc840\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tsnkd" podUID="a7be1d56-2672-493d-a7ed-388bbed8d7a1" Oct 31 14:12:08.029155 containerd[1690]: time="2025-10-31T14:12:08.029010718Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6c4674d488-9cl4s,Uid:2e033899-3e9d-4bfb-ab07-0f3a26593557,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4deb4ad87873964ef3ecfdff04f989ede9f4a25544fde504fce07ff4c75f7656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.029195 kubelet[2999]: E1031 14:12:08.029110 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4deb4ad87873964ef3ecfdff04f989ede9f4a25544fde504fce07ff4c75f7656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.029195 kubelet[2999]: E1031 14:12:08.029142 2999 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4deb4ad87873964ef3ecfdff04f989ede9f4a25544fde504fce07ff4c75f7656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c4674d488-9cl4s" Oct 31 14:12:08.029195 kubelet[2999]: E1031 14:12:08.029158 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4deb4ad87873964ef3ecfdff04f989ede9f4a25544fde504fce07ff4c75f7656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c4674d488-9cl4s" Oct 31 14:12:08.029256 kubelet[2999]: E1031 14:12:08.029192 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c4674d488-9cl4s_calico-system(2e033899-3e9d-4bfb-ab07-0f3a26593557)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c4674d488-9cl4s_calico-system(2e033899-3e9d-4bfb-ab07-0f3a26593557)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4deb4ad87873964ef3ecfdff04f989ede9f4a25544fde504fce07ff4c75f7656\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c4674d488-9cl4s" podUID="2e033899-3e9d-4bfb-ab07-0f3a26593557" Oct 31 14:12:08.029764 containerd[1690]: time="2025-10-31T14:12:08.029646816Z" level=error msg="Failed to destroy network for sandbox \"7524027c028ac480747a31336baff742658f45620727ad9bfcb4ef384467d2e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.031544 containerd[1690]: time="2025-10-31T14:12:08.030775700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79664d6db-sstb7,Uid:e90732a1-22ed-4940-ac91-cf89a2ddbb03,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7524027c028ac480747a31336baff742658f45620727ad9bfcb4ef384467d2e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.032773 kubelet[2999]: E1031 14:12:08.032638 2999 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7524027c028ac480747a31336baff742658f45620727ad9bfcb4ef384467d2e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:12:08.032773 kubelet[2999]: E1031 14:12:08.032749 2999 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7524027c028ac480747a31336baff742658f45620727ad9bfcb4ef384467d2e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79664d6db-sstb7" Oct 31 14:12:08.032773 kubelet[2999]: E1031 14:12:08.032774 2999 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7524027c028ac480747a31336baff742658f45620727ad9bfcb4ef384467d2e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79664d6db-sstb7" Oct 31 14:12:08.033003 kubelet[2999]: E1031 14:12:08.032865 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79664d6db-sstb7_calico-system(e90732a1-22ed-4940-ac91-cf89a2ddbb03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79664d6db-sstb7_calico-system(e90732a1-22ed-4940-ac91-cf89a2ddbb03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7524027c028ac480747a31336baff742658f45620727ad9bfcb4ef384467d2e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79664d6db-sstb7" podUID="e90732a1-22ed-4940-ac91-cf89a2ddbb03" Oct 31 14:12:08.603562 systemd[1]: run-netns-cni\x2d47318df5\x2d155d\x2ddef9\x2d17eb\x2d6a93fcb3c5eb.mount: Deactivated successfully. Oct 31 14:12:08.603733 systemd[1]: run-netns-cni\x2ddb834907\x2d4ea4\x2d3bc0\x2dcb1a\x2da4b71fd17145.mount: Deactivated successfully. Oct 31 14:12:14.952281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3400190724.mount: Deactivated successfully. 
Oct 31 14:12:15.174877 containerd[1690]: time="2025-10-31T14:12:15.174844635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:12:15.189768 containerd[1690]: time="2025-10-31T14:12:15.189737910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 31 14:12:15.191717 containerd[1690]: time="2025-10-31T14:12:15.191683704Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:12:15.193411 containerd[1690]: time="2025-10-31T14:12:15.193338379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:12:15.195337 containerd[1690]: time="2025-10-31T14:12:15.195318203Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.603696073s" Oct 31 14:12:15.195475 containerd[1690]: time="2025-10-31T14:12:15.195415094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 31 14:12:15.291046 containerd[1690]: time="2025-10-31T14:12:15.290975866Z" level=info msg="CreateContainer within sandbox \"cc93fce26eaf4b92c0b4894f4c9d540e5b83117c49097a77c73b962fdec27071\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 14:12:15.415806 containerd[1690]: time="2025-10-31T14:12:15.415499250Z" level=info msg="Container b54aa4fa1c3dfd3f33f578c5fb9912854a15a6a9cc1f0dbe3accb3535fed7296: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:12:15.417712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3212981407.mount: Deactivated successfully. Oct 31 14:12:15.498599 containerd[1690]: time="2025-10-31T14:12:15.498566708Z" level=info msg="CreateContainer within sandbox \"cc93fce26eaf4b92c0b4894f4c9d540e5b83117c49097a77c73b962fdec27071\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b54aa4fa1c3dfd3f33f578c5fb9912854a15a6a9cc1f0dbe3accb3535fed7296\"" Oct 31 14:12:15.501525 containerd[1690]: time="2025-10-31T14:12:15.499183422Z" level=info msg="StartContainer for \"b54aa4fa1c3dfd3f33f578c5fb9912854a15a6a9cc1f0dbe3accb3535fed7296\"" Oct 31 14:12:15.501525 containerd[1690]: time="2025-10-31T14:12:15.501381586Z" level=info msg="connecting to shim b54aa4fa1c3dfd3f33f578c5fb9912854a15a6a9cc1f0dbe3accb3535fed7296" address="unix:///run/containerd/s/91dbfcb6c8e9870b5df861b61b7009f09e4d06c8d86429d20c34f303025e52b0" protocol=ttrpc version=3 Oct 31 14:12:15.613926 systemd[1]: Started cri-containerd-b54aa4fa1c3dfd3f33f578c5fb9912854a15a6a9cc1f0dbe3accb3535fed7296.scope - libcontainer container b54aa4fa1c3dfd3f33f578c5fb9912854a15a6a9cc1f0dbe3accb3535fed7296. Oct 31 14:12:15.666501 containerd[1690]: time="2025-10-31T14:12:15.666086473Z" level=info msg="StartContainer for \"b54aa4fa1c3dfd3f33f578c5fb9912854a15a6a9cc1f0dbe3accb3535fed7296\" returns successfully" Oct 31 14:12:16.050295 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Oct 31 14:12:16.088841 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 31 14:12:16.555665 kubelet[2999]: I1031 14:12:16.555482 2999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e90732a1-22ed-4940-ac91-cf89a2ddbb03-whisker-backend-key-pair\") pod \"e90732a1-22ed-4940-ac91-cf89a2ddbb03\" (UID: \"e90732a1-22ed-4940-ac91-cf89a2ddbb03\") " Oct 31 14:12:16.555665 kubelet[2999]: I1031 14:12:16.555519 2999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt2sm\" (UniqueName: \"kubernetes.io/projected/e90732a1-22ed-4940-ac91-cf89a2ddbb03-kube-api-access-mt2sm\") pod \"e90732a1-22ed-4940-ac91-cf89a2ddbb03\" (UID: \"e90732a1-22ed-4940-ac91-cf89a2ddbb03\") " Oct 31 14:12:16.555665 kubelet[2999]: I1031 14:12:16.555545 2999 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e90732a1-22ed-4940-ac91-cf89a2ddbb03-whisker-ca-bundle\") pod \"e90732a1-22ed-4940-ac91-cf89a2ddbb03\" (UID: \"e90732a1-22ed-4940-ac91-cf89a2ddbb03\") " Oct 31 14:12:16.561276 kubelet[2999]: I1031 14:12:16.561117 2999 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e90732a1-22ed-4940-ac91-cf89a2ddbb03-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e90732a1-22ed-4940-ac91-cf89a2ddbb03" (UID: "e90732a1-22ed-4940-ac91-cf89a2ddbb03"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 14:12:16.570423 systemd[1]: var-lib-kubelet-pods-e90732a1\x2d22ed\x2d4940\x2dac91\x2dcf89a2ddbb03-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmt2sm.mount: Deactivated successfully. Oct 31 14:12:16.571109 kubelet[2999]: I1031 14:12:16.571089 2999 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e90732a1-22ed-4940-ac91-cf89a2ddbb03-kube-api-access-mt2sm" (OuterVolumeSpecName: "kube-api-access-mt2sm") pod "e90732a1-22ed-4940-ac91-cf89a2ddbb03" (UID: "e90732a1-22ed-4940-ac91-cf89a2ddbb03"). InnerVolumeSpecName "kube-api-access-mt2sm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 14:12:16.573030 systemd[1]: var-lib-kubelet-pods-e90732a1\x2d22ed\x2d4940\x2dac91\x2dcf89a2ddbb03-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 31 14:12:16.574049 kubelet[2999]: I1031 14:12:16.573984 2999 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e90732a1-22ed-4940-ac91-cf89a2ddbb03-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e90732a1-22ed-4940-ac91-cf89a2ddbb03" (UID: "e90732a1-22ed-4940-ac91-cf89a2ddbb03"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 14:12:16.615694 systemd[1]: Removed slice kubepods-besteffort-pode90732a1_22ed_4940_ac91_cf89a2ddbb03.slice - libcontainer container kubepods-besteffort-pode90732a1_22ed_4940_ac91_cf89a2ddbb03.slice. 
Oct 31 14:12:16.646470 kubelet[2999]: I1031 14:12:16.644672 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dc9qf" podStartSLOduration=2.191549861 podStartE2EDuration="27.640651118s" podCreationTimestamp="2025-10-31 14:11:49 +0000 UTC" firstStartedPulling="2025-10-31 14:11:49.75549474 +0000 UTC m=+49.736346133" lastFinishedPulling="2025-10-31 14:12:15.204596004 +0000 UTC m=+75.185447390" observedRunningTime="2025-10-31 14:12:16.640384166 +0000 UTC m=+76.621235565" watchObservedRunningTime="2025-10-31 14:12:16.640651118 +0000 UTC m=+76.621502512" Oct 31 14:12:16.656823 kubelet[2999]: I1031 14:12:16.656514 2999 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e90732a1-22ed-4940-ac91-cf89a2ddbb03-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 31 14:12:16.656823 kubelet[2999]: I1031 14:12:16.656536 2999 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mt2sm\" (UniqueName: \"kubernetes.io/projected/e90732a1-22ed-4940-ac91-cf89a2ddbb03-kube-api-access-mt2sm\") on node \"localhost\" DevicePath \"\"" Oct 31 14:12:16.656823 kubelet[2999]: I1031 14:12:16.656543 2999 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e90732a1-22ed-4940-ac91-cf89a2ddbb03-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 31 14:12:16.716680 systemd[1]: Created slice kubepods-besteffort-pod5503919a_74e2_4390_b833_02013e670ac6.slice - libcontainer container kubepods-besteffort-pod5503919a_74e2_4390_b833_02013e670ac6.slice. Oct 31 14:12:16.757370 kubelet[2999]: I1031 14:12:16.757338 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4256t\" (UniqueName: \"kubernetes.io/projected/5503919a-74e2-4390-b833-02013e670ac6-kube-api-access-4256t\") pod \"whisker-5dc458bdfb-x2fbx\" (UID: \"5503919a-74e2-4390-b833-02013e670ac6\") " pod="calico-system/whisker-5dc458bdfb-x2fbx" Oct 31 14:12:16.757528 kubelet[2999]: I1031 14:12:16.757515 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5503919a-74e2-4390-b833-02013e670ac6-whisker-ca-bundle\") pod \"whisker-5dc458bdfb-x2fbx\" (UID: \"5503919a-74e2-4390-b833-02013e670ac6\") " pod="calico-system/whisker-5dc458bdfb-x2fbx" Oct 31 14:12:16.757600 kubelet[2999]: I1031 14:12:16.757589 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5503919a-74e2-4390-b833-02013e670ac6-whisker-backend-key-pair\") pod \"whisker-5dc458bdfb-x2fbx\" (UID: \"5503919a-74e2-4390-b833-02013e670ac6\") " pod="calico-system/whisker-5dc458bdfb-x2fbx" Oct 31 14:12:16.840986 containerd[1690]: time="2025-10-31T14:12:16.840916742Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b54aa4fa1c3dfd3f33f578c5fb9912854a15a6a9cc1f0dbe3accb3535fed7296\" id:\"15bde44126f68327eaaa75c1e025be8a377d5d6076d76ffb13eec207c9d50446\" pid:4136 exit_status:1 exited_at:{seconds:1761919936 nanos:840521252}" Oct 31 14:12:17.020450 containerd[1690]: time="2025-10-31T14:12:17.020275772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dc458bdfb-x2fbx,Uid:5503919a-74e2-4390-b833-02013e670ac6,Namespace:calico-system,Attempt:0,}" Oct 31 14:12:17.723707 containerd[1690]: 
time="2025-10-31T14:12:17.723652315Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b54aa4fa1c3dfd3f33f578c5fb9912854a15a6a9cc1f0dbe3accb3535fed7296\" id:\"cc72e259a15611891c3b19c969a65565980f536424bd3cceca317b432df5b07c\" pid:4176 exit_status:1 exited_at:{seconds:1761919937 nanos:723368913}" Oct 31 14:12:17.997273 systemd-networkd[1576]: calid715d865123: Link UP Oct 31 14:12:17.997466 systemd-networkd[1576]: calid715d865123: Gained carrier Oct 31 14:12:18.015002 containerd[1690]: 2025-10-31 14:12:17.049 [INFO][4148] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 14:12:18.015002 containerd[1690]: 2025-10-31 14:12:17.329 [INFO][4148] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5dc458bdfb--x2fbx-eth0 whisker-5dc458bdfb- calico-system 5503919a-74e2-4390-b833-02013e670ac6 943 0 2025-10-31 14:12:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5dc458bdfb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5dc458bdfb-x2fbx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid715d865123 [] [] }} ContainerID="5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" Namespace="calico-system" Pod="whisker-5dc458bdfb-x2fbx" WorkloadEndpoint="localhost-k8s-whisker--5dc458bdfb--x2fbx-" Oct 31 14:12:18.015002 containerd[1690]: 2025-10-31 14:12:17.329 [INFO][4148] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" Namespace="calico-system" Pod="whisker-5dc458bdfb-x2fbx" WorkloadEndpoint="localhost-k8s-whisker--5dc458bdfb--x2fbx-eth0" Oct 31 14:12:18.015002 containerd[1690]: 2025-10-31 14:12:17.897 [INFO][4160] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" HandleID="k8s-pod-network.5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" Workload="localhost-k8s-whisker--5dc458bdfb--x2fbx-eth0" Oct 31 14:12:18.021150 containerd[1690]: 2025-10-31 14:12:17.900 [INFO][4160] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" HandleID="k8s-pod-network.5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" Workload="localhost-k8s-whisker--5dc458bdfb--x2fbx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037e280), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5dc458bdfb-x2fbx", "timestamp":"2025-10-31 14:12:17.897393372 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 14:12:18.021150 containerd[1690]: 2025-10-31 14:12:17.900 [INFO][4160] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 14:12:18.021150 containerd[1690]: 2025-10-31 14:12:17.901 [INFO][4160] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 14:12:18.021150 containerd[1690]: 2025-10-31 14:12:17.901 [INFO][4160] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 14:12:18.021150 containerd[1690]: 2025-10-31 14:12:17.935 [INFO][4160] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" host="localhost" Oct 31 14:12:18.021150 containerd[1690]: 2025-10-31 14:12:17.949 [INFO][4160] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 14:12:18.021150 containerd[1690]: 2025-10-31 14:12:17.955 [INFO][4160] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 14:12:18.021150 containerd[1690]: 2025-10-31 14:12:17.957 [INFO][4160] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 14:12:18.021150 containerd[1690]: 2025-10-31 14:12:17.959 [INFO][4160] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 14:12:18.021150 containerd[1690]: 2025-10-31 14:12:17.959 [INFO][4160] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" host="localhost" Oct 31 14:12:18.024206 containerd[1690]: 2025-10-31 14:12:17.961 [INFO][4160] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae Oct 31 14:12:18.024206 containerd[1690]: 2025-10-31 14:12:17.965 [INFO][4160] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" host="localhost" Oct 31 14:12:18.024206 containerd[1690]: 2025-10-31 14:12:17.970 [INFO][4160] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" host="localhost" Oct 31 14:12:18.024206 containerd[1690]: 2025-10-31 14:12:17.970 [INFO][4160] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" host="localhost" Oct 31 14:12:18.024206 containerd[1690]: 2025-10-31 14:12:17.970 [INFO][4160] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 14:12:18.024206 containerd[1690]: 2025-10-31 14:12:17.970 [INFO][4160] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" HandleID="k8s-pod-network.5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" Workload="localhost-k8s-whisker--5dc458bdfb--x2fbx-eth0" Oct 31 14:12:18.024312 containerd[1690]: 2025-10-31 14:12:17.972 [INFO][4148] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" Namespace="calico-system" Pod="whisker-5dc458bdfb-x2fbx" WorkloadEndpoint="localhost-k8s-whisker--5dc458bdfb--x2fbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5dc458bdfb--x2fbx-eth0", GenerateName:"whisker-5dc458bdfb-", Namespace:"calico-system", SelfLink:"", UID:"5503919a-74e2-4390-b833-02013e670ac6", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 12, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dc458bdfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5dc458bdfb-x2fbx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid715d865123", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:12:18.024312 containerd[1690]: 2025-10-31 14:12:17.973 [INFO][4148] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" Namespace="calico-system" Pod="whisker-5dc458bdfb-x2fbx" WorkloadEndpoint="localhost-k8s-whisker--5dc458bdfb--x2fbx-eth0" Oct 31 14:12:18.024386 containerd[1690]: 2025-10-31 14:12:17.973 [INFO][4148] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid715d865123 ContainerID="5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" Namespace="calico-system" Pod="whisker-5dc458bdfb-x2fbx" WorkloadEndpoint="localhost-k8s-whisker--5dc458bdfb--x2fbx-eth0" Oct 31 14:12:18.024386 containerd[1690]: 2025-10-31 14:12:17.997 [INFO][4148] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" Namespace="calico-system" Pod="whisker-5dc458bdfb-x2fbx" WorkloadEndpoint="localhost-k8s-whisker--5dc458bdfb--x2fbx-eth0" Oct 31 14:12:18.024424 containerd[1690]: 2025-10-31 14:12:17.997 [INFO][4148] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" Namespace="calico-system" Pod="whisker-5dc458bdfb-x2fbx" WorkloadEndpoint="localhost-k8s-whisker--5dc458bdfb--x2fbx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5dc458bdfb--x2fbx-eth0", GenerateName:"whisker-5dc458bdfb-", Namespace:"calico-system", SelfLink:"", UID:"5503919a-74e2-4390-b833-02013e670ac6", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 12, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dc458bdfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae", Pod:"whisker-5dc458bdfb-x2fbx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid715d865123", MAC:"5a:46:11:70:26:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:12:18.024468 containerd[1690]: 2025-10-31 14:12:18.009 [INFO][4148] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" Namespace="calico-system" Pod="whisker-5dc458bdfb-x2fbx" WorkloadEndpoint="localhost-k8s-whisker--5dc458bdfb--x2fbx-eth0" Oct 31 14:12:18.253431 containerd[1690]: time="2025-10-31T14:12:18.253233068Z" level=info msg="connecting to shim 5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae" address="unix:///run/containerd/s/4465db762f64d591a24e9bef6694381bb9b4d5650dc1d4518e5480d51d49a42d" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:12:18.275882 systemd[1]: Started cri-containerd-5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae.scope - libcontainer container 5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae. 
Oct 31 14:12:18.284292 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 14:12:18.354028 containerd[1690]: time="2025-10-31T14:12:18.353964060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dc458bdfb-x2fbx,Uid:5503919a-74e2-4390-b833-02013e670ac6,Namespace:calico-system,Attempt:0,} returns sandbox id \"5013c1506c392e13691bc43a613d01bea8c778127e72e6528a9b32679f2542ae\"" Oct 31 14:12:18.726359 containerd[1690]: time="2025-10-31T14:12:18.726160323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 14:12:18.732545 kubelet[2999]: I1031 14:12:18.732509 2999 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e90732a1-22ed-4940-ac91-cf89a2ddbb03" path="/var/lib/kubelet/pods/e90732a1-22ed-4940-ac91-cf89a2ddbb03/volumes" Oct 31 14:12:18.857690 systemd-networkd[1576]: vxlan.calico: Link UP Oct 31 14:12:18.857699 systemd-networkd[1576]: vxlan.calico: Gained carrier Oct 31 14:12:19.094199 containerd[1690]: time="2025-10-31T14:12:19.094087344Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:12:19.099352 containerd[1690]: time="2025-10-31T14:12:19.098996513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 14:12:19.099352 containerd[1690]: time="2025-10-31T14:12:19.099043077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 14:12:19.099718 kubelet[2999]: E1031 14:12:19.099582 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 14:12:19.099718 kubelet[2999]: E1031 14:12:19.099618 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 14:12:19.159298 kubelet[2999]: E1031 14:12:19.159251 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:156dac623667454f86a595f0201421ec,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4256t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dc458bdfb-x2fbx_calico-system(5503919a-74e2-4390-b833-02013e670ac6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:19.161691 containerd[1690]: time="2025-10-31T14:12:19.161539656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 14:12:19.278078 containerd[1690]: time="2025-10-31T14:12:19.277982429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c4674d488-9cl4s,Uid:2e033899-3e9d-4bfb-ab07-0f3a26593557,Namespace:calico-system,Attempt:0,}" Oct 31 14:12:19.289816 containerd[1690]: time="2025-10-31T14:12:19.289616238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-88wcn,Uid:f6f90695-af57-4839-916e-e4a5c9c9cd41,Namespace:kube-system,Attempt:0,}" Oct 31 14:12:19.412039 systemd-networkd[1576]: cali21d3036fd73: Link UP Oct 31 14:12:19.412714 systemd-networkd[1576]: cali21d3036fd73: Gained carrier Oct 31 14:12:19.455861 containerd[1690]: 2025-10-31 14:12:19.335 [INFO][4441] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6c4674d488--9cl4s-eth0 calico-kube-controllers-6c4674d488- calico-system 2e033899-3e9d-4bfb-ab07-0f3a26593557 871 0 2025-10-31 14:11:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c4674d488 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6c4674d488-9cl4s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali21d3036fd73 [] [] }} ContainerID="5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" Namespace="calico-system" 
Pod="calico-kube-controllers-6c4674d488-9cl4s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c4674d488--9cl4s-" Oct 31 14:12:19.455861 containerd[1690]: 2025-10-31 14:12:19.335 [INFO][4441] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" Namespace="calico-system" Pod="calico-kube-controllers-6c4674d488-9cl4s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c4674d488--9cl4s-eth0" Oct 31 14:12:19.455861 containerd[1690]: 2025-10-31 14:12:19.366 [INFO][4453] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" HandleID="k8s-pod-network.5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" Workload="localhost-k8s-calico--kube--controllers--6c4674d488--9cl4s-eth0" Oct 31 14:12:19.455996 containerd[1690]: 2025-10-31 14:12:19.366 [INFO][4453] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" HandleID="k8s-pod-network.5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" Workload="localhost-k8s-calico--kube--controllers--6c4674d488--9cl4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6c4674d488-9cl4s", "timestamp":"2025-10-31 14:12:19.366459576 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 14:12:19.455996 containerd[1690]: 2025-10-31 14:12:19.366 [INFO][4453] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 14:12:19.455996 containerd[1690]: 2025-10-31 14:12:19.366 [INFO][4453] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 14:12:19.455996 containerd[1690]: 2025-10-31 14:12:19.366 [INFO][4453] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 14:12:19.455996 containerd[1690]: 2025-10-31 14:12:19.371 [INFO][4453] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" host="localhost" Oct 31 14:12:19.455996 containerd[1690]: 2025-10-31 14:12:19.373 [INFO][4453] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 14:12:19.455996 containerd[1690]: 2025-10-31 14:12:19.375 [INFO][4453] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 14:12:19.455996 containerd[1690]: 2025-10-31 14:12:19.376 [INFO][4453] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 14:12:19.455996 containerd[1690]: 2025-10-31 14:12:19.377 [INFO][4453] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 14:12:19.455996 containerd[1690]: 2025-10-31 14:12:19.377 [INFO][4453] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" host="localhost" Oct 31 14:12:19.456174 containerd[1690]: 2025-10-31 14:12:19.378 [INFO][4453] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1 Oct 31 14:12:19.456174 containerd[1690]: 2025-10-31 14:12:19.383 [INFO][4453] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" host="localhost" Oct 31 14:12:19.456174 containerd[1690]: 2025-10-31 14:12:19.407 [INFO][4453] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" host="localhost" Oct 31 14:12:19.456174 containerd[1690]: 2025-10-31 14:12:19.407 [INFO][4453] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" host="localhost" Oct 31 14:12:19.456174 containerd[1690]: 2025-10-31 14:12:19.407 [INFO][4453] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 14:12:19.456174 containerd[1690]: 2025-10-31 14:12:19.407 [INFO][4453] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" HandleID="k8s-pod-network.5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" Workload="localhost-k8s-calico--kube--controllers--6c4674d488--9cl4s-eth0" Oct 31 14:12:19.472327 containerd[1690]: 2025-10-31 14:12:19.410 [INFO][4441] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" Namespace="calico-system" Pod="calico-kube-controllers-6c4674d488-9cl4s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c4674d488--9cl4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c4674d488--9cl4s-eth0", GenerateName:"calico-kube-controllers-6c4674d488-", Namespace:"calico-system", SelfLink:"", UID:"2e033899-3e9d-4bfb-ab07-0f3a26593557", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 11, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c4674d488", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6c4674d488-9cl4s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali21d3036fd73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:12:19.472394 containerd[1690]: 2025-10-31 14:12:19.410 [INFO][4441] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" Namespace="calico-system" Pod="calico-kube-controllers-6c4674d488-9cl4s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c4674d488--9cl4s-eth0" Oct 31 14:12:19.472394 containerd[1690]: 2025-10-31 14:12:19.410 [INFO][4441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21d3036fd73 ContainerID="5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" Namespace="calico-system" Pod="calico-kube-controllers-6c4674d488-9cl4s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c4674d488--9cl4s-eth0" Oct 31 14:12:19.472394 containerd[1690]: 2025-10-31 14:12:19.413 [INFO][4441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" Namespace="calico-system" Pod="calico-kube-controllers-6c4674d488-9cl4s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c4674d488--9cl4s-eth0" Oct 31 14:12:19.472443 containerd[1690]: 2025-10-31 14:12:19.413 [INFO][4441] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" Namespace="calico-system" Pod="calico-kube-controllers-6c4674d488-9cl4s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c4674d488--9cl4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c4674d488--9cl4s-eth0", GenerateName:"calico-kube-controllers-6c4674d488-", Namespace:"calico-system", SelfLink:"", UID:"2e033899-3e9d-4bfb-ab07-0f3a26593557", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 11, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c4674d488", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1", Pod:"calico-kube-controllers-6c4674d488-9cl4s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali21d3036fd73", MAC:"aa:da:60:22:38:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:12:19.479125 containerd[1690]: 2025-10-31 14:12:19.443 [INFO][4441] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" Namespace="calico-system" Pod="calico-kube-controllers-6c4674d488-9cl4s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c4674d488--9cl4s-eth0" Oct 31 14:12:19.508078 systemd-networkd[1576]: calif44c0401a15: Link UP Oct 31 14:12:19.508701 systemd-networkd[1576]: calif44c0401a15: Gained carrier Oct 31 14:12:19.514415 containerd[1690]: time="2025-10-31T14:12:19.513566803Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:12:19.532699 containerd[1690]: 2025-10-31 14:12:19.418 [INFO][4458] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--88wcn-eth0 coredns-674b8bbfcf- kube-system f6f90695-af57-4839-916e-e4a5c9c9cd41 872 0 2025-10-31 14:11:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-88wcn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif44c0401a15 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" Namespace="kube-system" Pod="coredns-674b8bbfcf-88wcn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--88wcn-" Oct 31 14:12:19.532699 containerd[1690]: 2025-10-31 14:12:19.418 [INFO][4458] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" Namespace="kube-system" Pod="coredns-674b8bbfcf-88wcn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--88wcn-eth0" Oct 31 14:12:19.532699 containerd[1690]: 2025-10-31 14:12:19.461 [INFO][4474] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" HandleID="k8s-pod-network.0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" Workload="localhost-k8s-coredns--674b8bbfcf--88wcn-eth0" Oct 31 14:12:19.533068 containerd[1690]: 2025-10-31 14:12:19.461 [INFO][4474] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" HandleID="k8s-pod-network.0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" Workload="localhost-k8s-coredns--674b8bbfcf--88wcn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-88wcn", "timestamp":"2025-10-31 14:12:19.461723749 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 14:12:19.533068 containerd[1690]: 2025-10-31 14:12:19.461 [INFO][4474] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 14:12:19.533068 containerd[1690]: 2025-10-31 14:12:19.461 [INFO][4474] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 14:12:19.533068 containerd[1690]: 2025-10-31 14:12:19.461 [INFO][4474] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 14:12:19.533068 containerd[1690]: 2025-10-31 14:12:19.471 [INFO][4474] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" host="localhost" Oct 31 14:12:19.533068 containerd[1690]: 2025-10-31 14:12:19.482 [INFO][4474] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 14:12:19.533068 containerd[1690]: 2025-10-31 14:12:19.483 [INFO][4474] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 14:12:19.533068 containerd[1690]: 2025-10-31 14:12:19.485 [INFO][4474] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 14:12:19.533068 containerd[1690]: 2025-10-31 14:12:19.487 [INFO][4474] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 14:12:19.533068 containerd[1690]: 2025-10-31 14:12:19.487 [INFO][4474] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" host="localhost" Oct 31 14:12:19.539459 containerd[1690]: 2025-10-31 14:12:19.487 [INFO][4474] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722 Oct 31 14:12:19.539459 containerd[1690]: 2025-10-31 14:12:19.490 [INFO][4474] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" host="localhost" Oct 31 14:12:19.539459 containerd[1690]: 2025-10-31 14:12:19.503 [INFO][4474] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" host="localhost" Oct 31 14:12:19.539459 containerd[1690]: 2025-10-31 14:12:19.503 [INFO][4474] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" host="localhost" Oct 31 14:12:19.539459 containerd[1690]: 2025-10-31 14:12:19.503 [INFO][4474] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 14:12:19.539459 containerd[1690]: 2025-10-31 14:12:19.503 [INFO][4474] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" HandleID="k8s-pod-network.0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" Workload="localhost-k8s-coredns--674b8bbfcf--88wcn-eth0" Oct 31 14:12:19.539623 containerd[1690]: 2025-10-31 14:12:19.504 [INFO][4458] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" Namespace="kube-system" Pod="coredns-674b8bbfcf-88wcn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--88wcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--88wcn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f6f90695-af57-4839-916e-e4a5c9c9cd41", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 11, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-88wcn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif44c0401a15", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:12:19.539689 containerd[1690]: 2025-10-31 14:12:19.504 [INFO][4458] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" Namespace="kube-system" Pod="coredns-674b8bbfcf-88wcn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--88wcn-eth0" Oct 31 14:12:19.539689 containerd[1690]: 2025-10-31 14:12:19.505 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
calif44c0401a15 ContainerID="0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" Namespace="kube-system" Pod="coredns-674b8bbfcf-88wcn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--88wcn-eth0" Oct 31 14:12:19.539689 containerd[1690]: 2025-10-31 14:12:19.509 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" Namespace="kube-system" Pod="coredns-674b8bbfcf-88wcn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--88wcn-eth0" Oct 31 14:12:19.539747 containerd[1690]: 2025-10-31 14:12:19.514 [INFO][4458] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" Namespace="kube-system" Pod="coredns-674b8bbfcf-88wcn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--88wcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--88wcn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f6f90695-af57-4839-916e-e4a5c9c9cd41", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 11, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722", Pod:"coredns-674b8bbfcf-88wcn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif44c0401a15", MAC:"1a:83:ba:51:67:4d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:12:19.539747 containerd[1690]: 2025-10-31 14:12:19.530 [INFO][4458] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" Namespace="kube-system" Pod="coredns-674b8bbfcf-88wcn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--88wcn-eth0" Oct 31 14:12:19.566367 containerd[1690]: time="2025-10-31T14:12:19.566264262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 14:12:19.566367 
containerd[1690]: time="2025-10-31T14:12:19.566342677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 14:12:19.569119 kubelet[2999]: E1031 14:12:19.569099 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 14:12:19.569860 kubelet[2999]: E1031 14:12:19.569249 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 14:12:19.575500 kubelet[2999]: E1031 14:12:19.569327 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4256t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dc458bdfb-x2fbx_calico-system(5503919a-74e2-4390-b833-02013e670ac6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:19.575500 kubelet[2999]: E1031 14:12:19.571050 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dc458bdfb-x2fbx" podUID="5503919a-74e2-4390-b833-02013e670ac6" Oct 31 14:12:19.745579 systemd-networkd[1576]: calid715d865123: Gained IPv6LL Oct 31 14:12:19.746527 containerd[1690]: time="2025-10-31T14:12:19.745732828Z" level=info msg="connecting to shim 5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1" address="unix:///run/containerd/s/1f9ede553a5697d528151f9711cd8f84e9b286b88613981799af42aed6106914" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:12:19.755475 kubelet[2999]: E1031 14:12:19.755036 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dc458bdfb-x2fbx" podUID="5503919a-74e2-4390-b833-02013e670ac6" Oct 31 14:12:19.756410 containerd[1690]: time="2025-10-31T14:12:19.754574168Z" level=info msg="connecting to shim 0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722" address="unix:///run/containerd/s/ae6b8e1fa4c574fdc9eadd479a8f94d172be8bdf8f3dba9dccf00afebd090867" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:12:19.793992 systemd[1]: Started cri-containerd-5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1.scope - libcontainer container 5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1. Oct 31 14:12:19.799352 systemd[1]: Started cri-containerd-0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722.scope - libcontainer container 0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722. 
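The whisker and whisker-backend pulls fail hard: ghcr.io answers 404 for the v3.30.4 tags, containerd reports NotFound, and kubelet records ErrImagePull and then parks the containers in ImagePullBackOff, retrying on an exponential backoff. A small illustration of that backoff shape in Python; the 10 s start, doubling factor and 5-minute cap follow kubelet's commonly cited defaults, but treat the constants as assumptions rather than values taken from this log:

    # Illustrative ImagePullBackOff schedule: exponential backoff with a cap.
    # Constants are assumptions (defaults are commonly 10s doubling to 300s);
    # actual retry times also depend on when the pod sync loop runs.
    INITIAL_S, FACTOR, CAP_S = 10.0, 2.0, 300.0

    def backoff_delays(attempts: int):
        delay = INITIAL_S
        for _ in range(attempts):
            yield min(delay, CAP_S)
            delay *= FACTOR

    print(list(backoff_delays(7)))
    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]

Until the tags exist in the registry, the pod stays in this loop; the "Error syncing pod, skipping" entries above are that backoff surfacing in the kubelet log.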
Oct 31 14:12:19.810856 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 14:12:19.812547 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 14:12:19.848451 containerd[1690]: time="2025-10-31T14:12:19.848417152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-88wcn,Uid:f6f90695-af57-4839-916e-e4a5c9c9cd41,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722\"" Oct 31 14:12:19.852586 containerd[1690]: time="2025-10-31T14:12:19.852460498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c4674d488-9cl4s,Uid:2e033899-3e9d-4bfb-ab07-0f3a26593557,Namespace:calico-system,Attempt:0,} returns sandbox id \"5aae74c4881fc0bd7304200abd13ee4dcd97817ec073db0c85049c146fe859a1\"" Oct 31 14:12:19.854281 containerd[1690]: time="2025-10-31T14:12:19.854233264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 14:12:19.871835 containerd[1690]: time="2025-10-31T14:12:19.871769112Z" level=info msg="CreateContainer within sandbox \"0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 14:12:19.880502 containerd[1690]: time="2025-10-31T14:12:19.880463171Z" level=info msg="Container bc82a30ce7fb3b355f67a6d93b959de1ed53de23c9cd6ba66704bb22d7acdd44: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:12:19.883486 containerd[1690]: time="2025-10-31T14:12:19.883445729Z" level=info msg="CreateContainer within sandbox \"0f3b69f5001e0c7216e053845b3a25c43663fe5fedcf517d4454fbc59c1ec722\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc82a30ce7fb3b355f67a6d93b959de1ed53de23c9cd6ba66704bb22d7acdd44\"" Oct 31 14:12:19.884653 containerd[1690]: time="2025-10-31T14:12:19.884109640Z" level=info msg="StartContainer for \"bc82a30ce7fb3b355f67a6d93b959de1ed53de23c9cd6ba66704bb22d7acdd44\"" Oct 31 14:12:19.888002 containerd[1690]: time="2025-10-31T14:12:19.887967359Z" level=info msg="connecting to shim bc82a30ce7fb3b355f67a6d93b959de1ed53de23c9cd6ba66704bb22d7acdd44" address="unix:///run/containerd/s/ae6b8e1fa4c574fdc9eadd479a8f94d172be8bdf8f3dba9dccf00afebd090867" protocol=ttrpc version=3 Oct 31 14:12:19.903969 systemd[1]: Started cri-containerd-bc82a30ce7fb3b355f67a6d93b959de1ed53de23c9cd6ba66704bb22d7acdd44.scope - libcontainer container bc82a30ce7fb3b355f67a6d93b959de1ed53de23c9cd6ba66704bb22d7acdd44. 
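In the coredns WorkloadEndpoint dumped above, the ports are printed as Go struct literals, so the numbers appear in hex (Port:0x35, Port:0x23c1). Decoding them confirms the expected CoreDNS ports; a one-line Python check:

    # Hex values copied from the coredns WorkloadEndpointPort entries above.
    print({"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1})
    # {'dns': 53, 'dns-tcp': 53, 'metrics': 9153}

That is DNS on 53/UDP and 53/TCP plus the Prometheus metrics port 9153.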
Oct 31 14:12:19.951317 containerd[1690]: time="2025-10-31T14:12:19.951280976Z" level=info msg="StartContainer for \"bc82a30ce7fb3b355f67a6d93b959de1ed53de23c9cd6ba66704bb22d7acdd44\" returns successfully" Oct 31 14:12:20.227630 containerd[1690]: time="2025-10-31T14:12:20.227589658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:12:20.228617 containerd[1690]: time="2025-10-31T14:12:20.228137765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 14:12:20.228617 containerd[1690]: time="2025-10-31T14:12:20.228199289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 14:12:20.228684 kubelet[2999]: E1031 14:12:20.228299 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 14:12:20.228684 kubelet[2999]: E1031 14:12:20.228349 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 14:12:20.228684 kubelet[2999]: E1031 14:12:20.228454 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2hxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c4674d488-9cl4s_calico-system(2e033899-3e9d-4bfb-ab07-0f3a26593557): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:20.229828 kubelet[2999]: E1031 14:12:20.229801 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c4674d488-9cl4s" podUID="2e033899-3e9d-4bfb-ab07-0f3a26593557" Oct 31 14:12:20.235588 containerd[1690]: time="2025-10-31T14:12:20.235468120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6ffcff69-wstnf,Uid:191ace75-d386-424f-8d71-10eff7da195e,Namespace:calico-apiserver,Attempt:0,}" Oct 31 14:12:20.235588 containerd[1690]: time="2025-10-31T14:12:20.235519365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r5k7t,Uid:1aeae2c5-1e57-4c36-a44b-9f56d90d27b3,Namespace:calico-system,Attempt:0,}" Oct 31 14:12:20.375293 systemd-networkd[1576]: cali850dfa5f737: Link UP Oct 31 14:12:20.376310 systemd-networkd[1576]: cali850dfa5f737: Gained carrier Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.280 [INFO][4621] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f6ffcff69--wstnf-eth0 calico-apiserver-f6ffcff69- calico-apiserver 191ace75-d386-424f-8d71-10eff7da195e 874 0 2025-10-31 14:11:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f6ffcff69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f6ffcff69-wstnf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali850dfa5f737 [] [] }} 
ContainerID="e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" Namespace="calico-apiserver" Pod="calico-apiserver-f6ffcff69-wstnf" WorkloadEndpoint="localhost-k8s-calico--apiserver--f6ffcff69--wstnf-" Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.280 [INFO][4621] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" Namespace="calico-apiserver" Pod="calico-apiserver-f6ffcff69-wstnf" WorkloadEndpoint="localhost-k8s-calico--apiserver--f6ffcff69--wstnf-eth0" Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.329 [INFO][4644] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" HandleID="k8s-pod-network.e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" Workload="localhost-k8s-calico--apiserver--f6ffcff69--wstnf-eth0" Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.329 [INFO][4644] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" HandleID="k8s-pod-network.e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" Workload="localhost-k8s-calico--apiserver--f6ffcff69--wstnf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f6ffcff69-wstnf", "timestamp":"2025-10-31 14:12:20.329687859 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.329 [INFO][4644] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.329 [INFO][4644] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.329 [INFO][4644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.336 [INFO][4644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" host="localhost" Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.342 [INFO][4644] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.345 [INFO][4644] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.349 [INFO][4644] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.353 [INFO][4644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.353 [INFO][4644] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" host="localhost" Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.355 [INFO][4644] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.358 [INFO][4644] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" host="localhost" Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.364 [INFO][4644] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" host="localhost" Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.364 [INFO][4644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" host="localhost" Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.364 [INFO][4644] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 14:12:20.397666 containerd[1690]: 2025-10-31 14:12:20.364 [INFO][4644] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" HandleID="k8s-pod-network.e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" Workload="localhost-k8s-calico--apiserver--f6ffcff69--wstnf-eth0" Oct 31 14:12:20.415398 containerd[1690]: 2025-10-31 14:12:20.368 [INFO][4621] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" Namespace="calico-apiserver" Pod="calico-apiserver-f6ffcff69-wstnf" WorkloadEndpoint="localhost-k8s-calico--apiserver--f6ffcff69--wstnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f6ffcff69--wstnf-eth0", GenerateName:"calico-apiserver-f6ffcff69-", Namespace:"calico-apiserver", SelfLink:"", UID:"191ace75-d386-424f-8d71-10eff7da195e", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 11, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6ffcff69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f6ffcff69-wstnf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali850dfa5f737", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:12:20.415398 containerd[1690]: 2025-10-31 14:12:20.368 [INFO][4621] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" Namespace="calico-apiserver" Pod="calico-apiserver-f6ffcff69-wstnf" WorkloadEndpoint="localhost-k8s-calico--apiserver--f6ffcff69--wstnf-eth0" Oct 31 14:12:20.415398 containerd[1690]: 2025-10-31 14:12:20.369 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali850dfa5f737 ContainerID="e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" Namespace="calico-apiserver" Pod="calico-apiserver-f6ffcff69-wstnf" WorkloadEndpoint="localhost-k8s-calico--apiserver--f6ffcff69--wstnf-eth0" Oct 31 14:12:20.415398 containerd[1690]: 2025-10-31 14:12:20.371 [INFO][4621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" Namespace="calico-apiserver" Pod="calico-apiserver-f6ffcff69-wstnf" WorkloadEndpoint="localhost-k8s-calico--apiserver--f6ffcff69--wstnf-eth0" Oct 31 14:12:20.415398 containerd[1690]: 2025-10-31 14:12:20.372 [INFO][4621] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" Namespace="calico-apiserver" Pod="calico-apiserver-f6ffcff69-wstnf" WorkloadEndpoint="localhost-k8s-calico--apiserver--f6ffcff69--wstnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f6ffcff69--wstnf-eth0", GenerateName:"calico-apiserver-f6ffcff69-", Namespace:"calico-apiserver", SelfLink:"", UID:"191ace75-d386-424f-8d71-10eff7da195e", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 11, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6ffcff69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f", Pod:"calico-apiserver-f6ffcff69-wstnf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali850dfa5f737", MAC:"36:fc:2b:a9:5b:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:12:20.415398 containerd[1690]: 2025-10-31 14:12:20.389 [INFO][4621] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" Namespace="calico-apiserver" Pod="calico-apiserver-f6ffcff69-wstnf" WorkloadEndpoint="localhost-k8s-calico--apiserver--f6ffcff69--wstnf-eth0" Oct 31 14:12:20.435406 containerd[1690]: time="2025-10-31T14:12:20.435377357Z" level=info msg="connecting to shim e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f" address="unix:///run/containerd/s/056090b86e6fc524685457c2ff6890c434d93d17dbe7efabfb903a9e4a9d1e91" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:12:20.463053 systemd[1]: Started cri-containerd-e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f.scope - libcontainer container e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f. 
Oct 31 14:12:20.482181 systemd-networkd[1576]: cali5311fc9a9f0: Link UP Oct 31 14:12:20.485522 systemd-networkd[1576]: cali5311fc9a9f0: Gained carrier Oct 31 14:12:20.487022 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.303 [INFO][4627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--r5k7t-eth0 goldmane-666569f655- calico-system 1aeae2c5-1e57-4c36-a44b-9f56d90d27b3 873 0 2025-10-31 14:11:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-r5k7t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5311fc9a9f0 [] [] }} ContainerID="09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" Namespace="calico-system" Pod="goldmane-666569f655-r5k7t" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5k7t-" Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.304 [INFO][4627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" Namespace="calico-system" Pod="goldmane-666569f655-r5k7t" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5k7t-eth0" Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.354 [INFO][4651] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" HandleID="k8s-pod-network.09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" Workload="localhost-k8s-goldmane--666569f655--r5k7t-eth0" Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.354 [INFO][4651] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" HandleID="k8s-pod-network.09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" Workload="localhost-k8s-goldmane--666569f655--r5k7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034cbf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-r5k7t", "timestamp":"2025-10-31 14:12:20.354272581 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.354 [INFO][4651] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.364 [INFO][4651] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.364 [INFO][4651] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.438 [INFO][4651] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" host="localhost" Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.452 [INFO][4651] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.459 [INFO][4651] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.461 [INFO][4651] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.465 [INFO][4651] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.466 [INFO][4651] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" host="localhost" Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.468 [INFO][4651] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8 Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.471 [INFO][4651] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" host="localhost" Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.475 [INFO][4651] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" host="localhost" Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.475 [INFO][4651] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" host="localhost" Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.475 [INFO][4651] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 14:12:20.499465 containerd[1690]: 2025-10-31 14:12:20.475 [INFO][4651] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" HandleID="k8s-pod-network.09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" Workload="localhost-k8s-goldmane--666569f655--r5k7t-eth0" Oct 31 14:12:20.501407 containerd[1690]: 2025-10-31 14:12:20.478 [INFO][4627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" Namespace="calico-system" Pod="goldmane-666569f655-r5k7t" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5k7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--r5k7t-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1aeae2c5-1e57-4c36-a44b-9f56d90d27b3", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-r5k7t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5311fc9a9f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:12:20.501407 containerd[1690]: 2025-10-31 14:12:20.478 [INFO][4627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" Namespace="calico-system" Pod="goldmane-666569f655-r5k7t" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5k7t-eth0" Oct 31 14:12:20.501407 containerd[1690]: 2025-10-31 14:12:20.478 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5311fc9a9f0 ContainerID="09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" Namespace="calico-system" Pod="goldmane-666569f655-r5k7t" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5k7t-eth0" Oct 31 14:12:20.501407 containerd[1690]: 2025-10-31 14:12:20.486 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" Namespace="calico-system" Pod="goldmane-666569f655-r5k7t" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5k7t-eth0" Oct 31 14:12:20.501407 containerd[1690]: 2025-10-31 14:12:20.487 [INFO][4627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" Namespace="calico-system" Pod="goldmane-666569f655-r5k7t" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5k7t-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--r5k7t-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1aeae2c5-1e57-4c36-a44b-9f56d90d27b3", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 11, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8", Pod:"goldmane-666569f655-r5k7t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5311fc9a9f0", MAC:"2a:20:3d:28:04:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:12:20.501407 containerd[1690]: 2025-10-31 14:12:20.495 [INFO][4627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" Namespace="calico-system" Pod="goldmane-666569f655-r5k7t" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5k7t-eth0" Oct 31 14:12:20.526077 containerd[1690]: time="2025-10-31T14:12:20.526009650Z" level=info msg="connecting to shim 09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8" address="unix:///run/containerd/s/0d56de81794cfde78b0baac4dab5b82620ab9da1e04cd838f4447575c7551f45" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:12:20.547332 containerd[1690]: time="2025-10-31T14:12:20.547236056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6ffcff69-wstnf,Uid:191ace75-d386-424f-8d71-10eff7da195e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e2ca590641f7a99eddfea81c7f4bbe4e0b0b56374321336ad8cc4d58e964b10f\"" Oct 31 14:12:20.548538 containerd[1690]: time="2025-10-31T14:12:20.548458545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 14:12:20.569951 systemd[1]: Started cri-containerd-09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8.scope - libcontainer container 09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8. 
Oct 31 14:12:20.579192 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 14:12:20.615192 containerd[1690]: time="2025-10-31T14:12:20.615136230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r5k7t,Uid:1aeae2c5-1e57-4c36-a44b-9f56d90d27b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"09fb2300f878cfec53675279c13fc8148bfcd8dc689fa2a5f8378d92928ad2f8\"" Oct 31 14:12:20.752929 kubelet[2999]: E1031 14:12:20.752745 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c4674d488-9cl4s" podUID="2e033899-3e9d-4bfb-ab07-0f3a26593557" Oct 31 14:12:20.772469 kubelet[2999]: I1031 14:12:20.771056 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-88wcn" podStartSLOduration=75.76823094 podStartE2EDuration="1m15.76823094s" podCreationTimestamp="2025-10-31 14:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 14:12:20.768148843 +0000 UTC m=+80.749000241" watchObservedRunningTime="2025-10-31 14:12:20.76823094 +0000 UTC m=+80.749082333" Oct 31 14:12:20.896903 systemd-networkd[1576]: vxlan.calico: Gained IPv6LL Oct 31 14:12:20.960930 systemd-networkd[1576]: calif44c0401a15: Gained IPv6LL Oct 31 14:12:21.035024 containerd[1690]: time="2025-10-31T14:12:21.034934621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:12:21.035554 containerd[1690]: time="2025-10-31T14:12:21.035529519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 14:12:21.035603 containerd[1690]: time="2025-10-31T14:12:21.035585342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 14:12:21.035749 kubelet[2999]: E1031 14:12:21.035715 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 14:12:21.035920 kubelet[2999]: E1031 14:12:21.035752 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 14:12:21.036084 containerd[1690]: time="2025-10-31T14:12:21.036069083Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 14:12:21.036866 kubelet[2999]: E1031 14:12:21.036185 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqzv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f6ffcff69-wstnf_calico-apiserver(191ace75-d386-424f-8d71-10eff7da195e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:21.038156 kubelet[2999]: E1031 14:12:21.038070 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-wstnf" podUID="191ace75-d386-424f-8d71-10eff7da195e" Oct 31 14:12:21.216957 systemd-networkd[1576]: cali21d3036fd73: Gained IPv6LL Oct 31 14:12:21.231813 containerd[1690]: time="2025-10-31T14:12:21.231716837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6ffcff69-lv42k,Uid:ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0,Namespace:calico-apiserver,Attempt:0,}" Oct 31 14:12:21.363979 systemd-networkd[1576]: cali8d673e2fd71: Link UP Oct 31 14:12:21.364082 
systemd-networkd[1576]: cali8d673e2fd71: Gained carrier Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.281 [INFO][4774] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f6ffcff69--lv42k-eth0 calico-apiserver-f6ffcff69- calico-apiserver ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0 870 0 2025-10-31 14:11:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f6ffcff69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f6ffcff69-lv42k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8d673e2fd71 [] [] }} ContainerID="29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" Namespace="calico-apiserver" Pod="calico-apiserver-f6ffcff69-lv42k" WorkloadEndpoint="localhost-k8s-calico--apiserver--f6ffcff69--lv42k-" Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.281 [INFO][4774] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" Namespace="calico-apiserver" Pod="calico-apiserver-f6ffcff69-lv42k" WorkloadEndpoint="localhost-k8s-calico--apiserver--f6ffcff69--lv42k-eth0" Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.313 [INFO][4786] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" HandleID="k8s-pod-network.29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" Workload="localhost-k8s-calico--apiserver--f6ffcff69--lv42k-eth0" Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.313 [INFO][4786] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" HandleID="k8s-pod-network.29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" Workload="localhost-k8s-calico--apiserver--f6ffcff69--lv42k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f6ffcff69-lv42k", "timestamp":"2025-10-31 14:12:21.313274424 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.313 [INFO][4786] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.313 [INFO][4786] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.313 [INFO][4786] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.318 [INFO][4786] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" host="localhost" Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.342 [INFO][4786] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.344 [INFO][4786] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.345 [INFO][4786] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.346 [INFO][4786] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.346 [INFO][4786] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" host="localhost" Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.347 [INFO][4786] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.350 [INFO][4786] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" host="localhost" Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.360 [INFO][4786] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" host="localhost" Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.360 [INFO][4786] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" host="localhost" Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.361 [INFO][4786] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 14:12:21.386447 containerd[1690]: 2025-10-31 14:12:21.361 [INFO][4786] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" HandleID="k8s-pod-network.29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" Workload="localhost-k8s-calico--apiserver--f6ffcff69--lv42k-eth0" Oct 31 14:12:21.387470 containerd[1690]: 2025-10-31 14:12:21.362 [INFO][4774] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" Namespace="calico-apiserver" Pod="calico-apiserver-f6ffcff69-lv42k" WorkloadEndpoint="localhost-k8s-calico--apiserver--f6ffcff69--lv42k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f6ffcff69--lv42k-eth0", GenerateName:"calico-apiserver-f6ffcff69-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 11, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6ffcff69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f6ffcff69-lv42k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8d673e2fd71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:12:21.387470 containerd[1690]: 2025-10-31 14:12:21.362 [INFO][4774] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" Namespace="calico-apiserver" Pod="calico-apiserver-f6ffcff69-lv42k" WorkloadEndpoint="localhost-k8s-calico--apiserver--f6ffcff69--lv42k-eth0" Oct 31 14:12:21.387470 containerd[1690]: 2025-10-31 14:12:21.362 [INFO][4774] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d673e2fd71 ContainerID="29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" Namespace="calico-apiserver" Pod="calico-apiserver-f6ffcff69-lv42k" WorkloadEndpoint="localhost-k8s-calico--apiserver--f6ffcff69--lv42k-eth0" Oct 31 14:12:21.387470 containerd[1690]: 2025-10-31 14:12:21.365 [INFO][4774] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" Namespace="calico-apiserver" Pod="calico-apiserver-f6ffcff69-lv42k" WorkloadEndpoint="localhost-k8s-calico--apiserver--f6ffcff69--lv42k-eth0" Oct 31 14:12:21.387470 containerd[1690]: 2025-10-31 14:12:21.366 [INFO][4774] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" Namespace="calico-apiserver" Pod="calico-apiserver-f6ffcff69-lv42k" WorkloadEndpoint="localhost-k8s-calico--apiserver--f6ffcff69--lv42k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f6ffcff69--lv42k-eth0", GenerateName:"calico-apiserver-f6ffcff69-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 11, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6ffcff69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e", Pod:"calico-apiserver-f6ffcff69-lv42k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8d673e2fd71", MAC:"56:36:64:00:a6:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:12:21.387470 containerd[1690]: 2025-10-31 14:12:21.383 [INFO][4774] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" Namespace="calico-apiserver" Pod="calico-apiserver-f6ffcff69-lv42k" WorkloadEndpoint="localhost-k8s-calico--apiserver--f6ffcff69--lv42k-eth0" Oct 31 14:12:21.408973 containerd[1690]: time="2025-10-31T14:12:21.408944682Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:12:21.421953 containerd[1690]: time="2025-10-31T14:12:21.421906629Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 14:12:21.422258 containerd[1690]: time="2025-10-31T14:12:21.422230258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 14:12:21.422417 kubelet[2999]: E1031 14:12:21.422383 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 14:12:21.422532 kubelet[2999]: E1031 14:12:21.422421 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 14:12:21.423084 kubelet[2999]: E1031 14:12:21.422760 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z45x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r5k7t_calico-system(1aeae2c5-1e57-4c36-a44b-9f56d90d27b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:21.424286 kubelet[2999]: E1031 14:12:21.424265 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5k7t" podUID="1aeae2c5-1e57-4c36-a44b-9f56d90d27b3" Oct 31 14:12:21.460022 containerd[1690]: time="2025-10-31T14:12:21.459988696Z" level=info msg="connecting to shim 29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e" address="unix:///run/containerd/s/bc300113ab454813083c8f0e495369b51f4df3754f3867daf3c5b64abf09fc84" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:12:21.484883 systemd[1]: Started cri-containerd-29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e.scope - libcontainer container 29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e. Oct 31 14:12:21.493401 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 14:12:21.524096 containerd[1690]: time="2025-10-31T14:12:21.524063591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6ffcff69-lv42k,Uid:ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"29084896870d0175509927fbf1aa444b126f078af4bf20b594140a1e2c7e393e\"" Oct 31 14:12:21.525112 containerd[1690]: time="2025-10-31T14:12:21.524976398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 14:12:21.728901 systemd-networkd[1576]: cali850dfa5f737: Gained IPv6LL Oct 31 14:12:21.755283 kubelet[2999]: E1031 14:12:21.755146 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c4674d488-9cl4s" podUID="2e033899-3e9d-4bfb-ab07-0f3a26593557" Oct 31 14:12:21.756192 kubelet[2999]: E1031 14:12:21.756152 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5k7t" podUID="1aeae2c5-1e57-4c36-a44b-9f56d90d27b3" Oct 31 14:12:21.763637 kubelet[2999]: E1031 14:12:21.762942 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-wstnf" podUID="191ace75-d386-424f-8d71-10eff7da195e" Oct 31 14:12:21.899480 containerd[1690]: time="2025-10-31T14:12:21.899448020Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Oct 31 14:12:21.900002 containerd[1690]: time="2025-10-31T14:12:21.899981134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 14:12:21.900100 containerd[1690]: time="2025-10-31T14:12:21.900049608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 14:12:21.900250 kubelet[2999]: E1031 14:12:21.900221 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 14:12:21.900526 kubelet[2999]: E1031 14:12:21.900258 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 14:12:21.900526 kubelet[2999]: E1031 14:12:21.900352 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qf9gv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f6ffcff69-lv42k_calico-apiserver(ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:21.902233 kubelet[2999]: E1031 14:12:21.902202 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-lv42k" podUID="ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0" Oct 31 14:12:21.985004 systemd-networkd[1576]: cali5311fc9a9f0: Gained IPv6LL Oct 31 14:12:22.231359 containerd[1690]: time="2025-10-31T14:12:22.231322890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tsnkd,Uid:a7be1d56-2672-493d-a7ed-388bbed8d7a1,Namespace:kube-system,Attempt:0,}" Oct 31 14:12:22.296217 systemd-networkd[1576]: cali424f3a02e6d: Link UP Oct 31 14:12:22.296619 systemd-networkd[1576]: cali424f3a02e6d: Gained carrier Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.256 [INFO][4853] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--tsnkd-eth0 coredns-674b8bbfcf- kube-system a7be1d56-2672-493d-a7ed-388bbed8d7a1 869 0 2025-10-31 14:11:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-tsnkd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali424f3a02e6d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" Namespace="kube-system" Pod="coredns-674b8bbfcf-tsnkd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tsnkd-" Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.257 [INFO][4853] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" Namespace="kube-system" Pod="coredns-674b8bbfcf-tsnkd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tsnkd-eth0" Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.275 
[INFO][4866] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" HandleID="k8s-pod-network.d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" Workload="localhost-k8s-coredns--674b8bbfcf--tsnkd-eth0" Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.275 [INFO][4866] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" HandleID="k8s-pod-network.d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" Workload="localhost-k8s-coredns--674b8bbfcf--tsnkd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-tsnkd", "timestamp":"2025-10-31 14:12:22.275691922 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.275 [INFO][4866] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.275 [INFO][4866] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.275 [INFO][4866] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.279 [INFO][4866] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" host="localhost" Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.282 [INFO][4866] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.284 [INFO][4866] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.285 [INFO][4866] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.286 [INFO][4866] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.286 [INFO][4866] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" host="localhost" Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.286 [INFO][4866] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3 Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.288 [INFO][4866] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" host="localhost" Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.291 [INFO][4866] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" host="localhost" Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.291 [INFO][4866] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" host="localhost" Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.291 [INFO][4866] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 14:12:22.306704 containerd[1690]: 2025-10-31 14:12:22.291 [INFO][4866] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" HandleID="k8s-pod-network.d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" Workload="localhost-k8s-coredns--674b8bbfcf--tsnkd-eth0" Oct 31 14:12:22.308049 containerd[1690]: 2025-10-31 14:12:22.293 [INFO][4853] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" Namespace="kube-system" Pod="coredns-674b8bbfcf-tsnkd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tsnkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--tsnkd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a7be1d56-2672-493d-a7ed-388bbed8d7a1", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 11, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-tsnkd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali424f3a02e6d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:12:22.308049 containerd[1690]: 2025-10-31 14:12:22.293 [INFO][4853] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" Namespace="kube-system" Pod="coredns-674b8bbfcf-tsnkd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tsnkd-eth0" Oct 31 14:12:22.308049 containerd[1690]: 2025-10-31 14:12:22.293 [INFO][4853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali424f3a02e6d ContainerID="d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" Namespace="kube-system" Pod="coredns-674b8bbfcf-tsnkd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tsnkd-eth0" Oct 31 14:12:22.308049 containerd[1690]: 2025-10-31 14:12:22.297 
[INFO][4853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" Namespace="kube-system" Pod="coredns-674b8bbfcf-tsnkd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tsnkd-eth0" Oct 31 14:12:22.308049 containerd[1690]: 2025-10-31 14:12:22.297 [INFO][4853] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" Namespace="kube-system" Pod="coredns-674b8bbfcf-tsnkd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tsnkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--tsnkd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a7be1d56-2672-493d-a7ed-388bbed8d7a1", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 11, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3", Pod:"coredns-674b8bbfcf-tsnkd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali424f3a02e6d", MAC:"b2:1a:ce:63:8f:5d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:12:22.308049 containerd[1690]: 2025-10-31 14:12:22.304 [INFO][4853] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" Namespace="kube-system" Pod="coredns-674b8bbfcf-tsnkd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--tsnkd-eth0" Oct 31 14:12:22.355057 containerd[1690]: time="2025-10-31T14:12:22.355028094Z" level=info msg="connecting to shim d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3" address="unix:///run/containerd/s/cea98e81fc11b6a734be9d8230918ec62b4d8e2d96c3219a1b0658762b6c9f00" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:12:22.372871 systemd[1]: Started cri-containerd-d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3.scope - libcontainer container d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3. 
Oct 31 14:12:22.381277 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 14:12:22.407065 containerd[1690]: time="2025-10-31T14:12:22.407006717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tsnkd,Uid:a7be1d56-2672-493d-a7ed-388bbed8d7a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3\"" Oct 31 14:12:22.410201 containerd[1690]: time="2025-10-31T14:12:22.410168830Z" level=info msg="CreateContainer within sandbox \"d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 14:12:22.419331 containerd[1690]: time="2025-10-31T14:12:22.419309000Z" level=info msg="Container 471ae951aa261cf4438dbc576d3a779e7cd734fb349d7c6fca95baecbc47ed50: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:12:22.423174 containerd[1690]: time="2025-10-31T14:12:22.423105943Z" level=info msg="CreateContainer within sandbox \"d4a2c61d8f4c8f61319f0024b1c532e248205bca967c12df1e24e34d2fc93fe3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"471ae951aa261cf4438dbc576d3a779e7cd734fb349d7c6fca95baecbc47ed50\"" Oct 31 14:12:22.423727 containerd[1690]: time="2025-10-31T14:12:22.423705057Z" level=info msg="StartContainer for \"471ae951aa261cf4438dbc576d3a779e7cd734fb349d7c6fca95baecbc47ed50\"" Oct 31 14:12:22.424192 containerd[1690]: time="2025-10-31T14:12:22.424175616Z" level=info msg="connecting to shim 471ae951aa261cf4438dbc576d3a779e7cd734fb349d7c6fca95baecbc47ed50" address="unix:///run/containerd/s/cea98e81fc11b6a734be9d8230918ec62b4d8e2d96c3219a1b0658762b6c9f00" protocol=ttrpc version=3 Oct 31 14:12:22.441877 systemd[1]: Started cri-containerd-471ae951aa261cf4438dbc576d3a779e7cd734fb349d7c6fca95baecbc47ed50.scope - libcontainer container 471ae951aa261cf4438dbc576d3a779e7cd734fb349d7c6fca95baecbc47ed50. 
Oct 31 14:12:22.460364 containerd[1690]: time="2025-10-31T14:12:22.460334729Z" level=info msg="StartContainer for \"471ae951aa261cf4438dbc576d3a779e7cd734fb349d7c6fca95baecbc47ed50\" returns successfully" Oct 31 14:12:22.760488 kubelet[2999]: E1031 14:12:22.758747 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-lv42k" podUID="ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0" Oct 31 14:12:22.785528 kubelet[2999]: I1031 14:12:22.784913 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tsnkd" podStartSLOduration=77.784901442 podStartE2EDuration="1m17.784901442s" podCreationTimestamp="2025-10-31 14:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 14:12:22.773933454 +0000 UTC m=+82.754784857" watchObservedRunningTime="2025-10-31 14:12:22.784901442 +0000 UTC m=+82.765752835" Oct 31 14:12:23.231706 containerd[1690]: time="2025-10-31T14:12:23.231607288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxdrt,Uid:1f06339a-fad3-4388-83e5-d004196ee955,Namespace:calico-system,Attempt:0,}" Oct 31 14:12:23.299108 systemd-networkd[1576]: cali59ccb6a8c39: Link UP Oct 31 14:12:23.300107 systemd-networkd[1576]: cali59ccb6a8c39: Gained carrier Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.257 [INFO][4964] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wxdrt-eth0 csi-node-driver- calico-system 1f06339a-fad3-4388-83e5-d004196ee955 744 0 2025-10-31 14:11:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wxdrt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali59ccb6a8c39 [] [] }} ContainerID="bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" Namespace="calico-system" Pod="csi-node-driver-wxdrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxdrt-" Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.257 [INFO][4964] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" Namespace="calico-system" Pod="csi-node-driver-wxdrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxdrt-eth0" Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.274 [INFO][4977] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" HandleID="k8s-pod-network.bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" Workload="localhost-k8s-csi--node--driver--wxdrt-eth0" Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.274 [INFO][4977] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" HandleID="k8s-pod-network.bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" Workload="localhost-k8s-csi--node--driver--wxdrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f060), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wxdrt", "timestamp":"2025-10-31 14:12:23.274543405 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.274 [INFO][4977] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.274 [INFO][4977] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.274 [INFO][4977] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.279 [INFO][4977] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" host="localhost" Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.281 [INFO][4977] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.283 [INFO][4977] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.284 [INFO][4977] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.286 [INFO][4977] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.286 [INFO][4977] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" host="localhost" Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.287 [INFO][4977] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.290 [INFO][4977] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" host="localhost" Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.293 [INFO][4977] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" host="localhost" Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.293 [INFO][4977] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" host="localhost" Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.293 [INFO][4977] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
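The IPAM trace above walks the same /26 block affinity (192.168.88.128/26) that served the coredns pod earlier, and this pod is handed 192.168.88.136 from it. A minimal Python sketch, using only the block CIDR and the two addresses reported in these traces (nothing else is assumed), confirms that arithmetic:

```python
import ipaddress

# Block affinity the IPAM plugin reports for host "localhost".
block = ipaddress.ip_network("192.168.88.128/26")

# Addresses the traces show being claimed from that block:
# 192.168.88.135 for coredns-674b8bbfcf-tsnkd,
# 192.168.88.136 for csi-node-driver-wxdrt.
assigned = [ipaddress.ip_address("192.168.88.135"),
            ipaddress.ip_address("192.168.88.136")]

print(f"{block} holds {block.num_addresses} addresses "
      f"({block.network_address}-{block.broadcast_address})")
for ip in assigned:
    print(f"{ip} in {block}: {ip in block}")
```

Running it shows the /26 covers 64 addresses (192.168.88.128-192.168.88.191) and that both assigned IPs fall inside the affine block, matching the "Affinity is confirmed and block has been loaded" lines.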
Oct 31 14:12:23.311207 containerd[1690]: 2025-10-31 14:12:23.293 [INFO][4977] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" HandleID="k8s-pod-network.bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" Workload="localhost-k8s-csi--node--driver--wxdrt-eth0" Oct 31 14:12:23.312208 containerd[1690]: 2025-10-31 14:12:23.294 [INFO][4964] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" Namespace="calico-system" Pod="csi-node-driver-wxdrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxdrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wxdrt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1f06339a-fad3-4388-83e5-d004196ee955", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 11, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wxdrt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59ccb6a8c39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:12:23.312208 containerd[1690]: 2025-10-31 14:12:23.294 [INFO][4964] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" Namespace="calico-system" Pod="csi-node-driver-wxdrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxdrt-eth0" Oct 31 14:12:23.312208 containerd[1690]: 2025-10-31 14:12:23.294 [INFO][4964] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59ccb6a8c39 ContainerID="bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" Namespace="calico-system" Pod="csi-node-driver-wxdrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxdrt-eth0" Oct 31 14:12:23.312208 containerd[1690]: 2025-10-31 14:12:23.300 [INFO][4964] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" Namespace="calico-system" Pod="csi-node-driver-wxdrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxdrt-eth0" Oct 31 14:12:23.312208 containerd[1690]: 2025-10-31 14:12:23.300 [INFO][4964] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" Namespace="calico-system" Pod="csi-node-driver-wxdrt" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--wxdrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wxdrt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1f06339a-fad3-4388-83e5-d004196ee955", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 11, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc", Pod:"csi-node-driver-wxdrt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59ccb6a8c39", MAC:"c6:9f:60:82:d9:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:12:23.312208 containerd[1690]: 2025-10-31 14:12:23.307 [INFO][4964] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" Namespace="calico-system" Pod="csi-node-driver-wxdrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxdrt-eth0" Oct 31 14:12:23.333446 containerd[1690]: time="2025-10-31T14:12:23.333385501Z" level=info msg="connecting to shim bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc" address="unix:///run/containerd/s/6c8b78bcf19c27c3aa441d228691c647085814c4b6a63fd52bcbce2d6bd81bc2" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:12:23.350875 systemd[1]: Started cri-containerd-bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc.scope - libcontainer container bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc. 
Oct 31 14:12:23.363576 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 14:12:23.372649 containerd[1690]: time="2025-10-31T14:12:23.372625277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxdrt,Uid:1f06339a-fad3-4388-83e5-d004196ee955,Namespace:calico-system,Attempt:0,} returns sandbox id \"bebd892bf1174f45ad42614beab4625f63b170580a28f5bbdae9404bb3899edc\"" Oct 31 14:12:23.373764 containerd[1690]: time="2025-10-31T14:12:23.373726336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 14:12:23.392877 systemd-networkd[1576]: cali8d673e2fd71: Gained IPv6LL Oct 31 14:12:23.697945 containerd[1690]: time="2025-10-31T14:12:23.697889923Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:12:23.698489 containerd[1690]: time="2025-10-31T14:12:23.698416902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 14:12:23.698550 containerd[1690]: time="2025-10-31T14:12:23.698477177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 14:12:23.698594 kubelet[2999]: E1031 14:12:23.698569 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 14:12:23.698875 kubelet[2999]: E1031 14:12:23.698603 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 14:12:23.703570 kubelet[2999]: E1031 14:12:23.703530 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqqw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wxdrt_calico-system(1f06339a-fad3-4388-83e5-d004196ee955): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:23.705320 containerd[1690]: time="2025-10-31T14:12:23.705284330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 14:12:23.713877 systemd-networkd[1576]: cali424f3a02e6d: Gained IPv6LL Oct 31 14:12:24.038099 containerd[1690]: time="2025-10-31T14:12:24.038016793Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:12:24.044756 containerd[1690]: time="2025-10-31T14:12:24.044726422Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 14:12:24.044882 containerd[1690]: time="2025-10-31T14:12:24.044785881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 14:12:24.044941 kubelet[2999]: E1031 14:12:24.044913 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" 
Oct 31 14:12:24.044988 kubelet[2999]: E1031 14:12:24.044947 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 14:12:24.045081 kubelet[2999]: E1031 14:12:24.045049 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqqw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wxdrt_calico-system(1f06339a-fad3-4388-83e5-d004196ee955): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:24.046801 kubelet[2999]: E1031 14:12:24.046772 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:12:24.771808 kubelet[2999]: E1031 14:12:24.771752 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:12:25.248991 systemd-networkd[1576]: cali59ccb6a8c39: Gained IPv6LL Oct 31 14:12:31.232209 containerd[1690]: time="2025-10-31T14:12:31.232094261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 14:12:31.597710 containerd[1690]: time="2025-10-31T14:12:31.597678644Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:12:31.598180 containerd[1690]: time="2025-10-31T14:12:31.598113591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 14:12:31.598180 containerd[1690]: time="2025-10-31T14:12:31.598162055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 14:12:31.598280 kubelet[2999]: E1031 14:12:31.598251 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 14:12:31.598564 kubelet[2999]: E1031 14:12:31.598285 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 14:12:31.598564 kubelet[2999]: E1031 14:12:31.598389 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:156dac623667454f86a595f0201421ec,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4256t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dc458bdfb-x2fbx_calico-system(5503919a-74e2-4390-b833-02013e670ac6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:31.600613 containerd[1690]: time="2025-10-31T14:12:31.600457972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 14:12:31.927121 containerd[1690]: time="2025-10-31T14:12:31.926785333Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:12:31.935300 containerd[1690]: time="2025-10-31T14:12:31.935265399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 14:12:31.935420 containerd[1690]: time="2025-10-31T14:12:31.935322715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 14:12:31.935481 kubelet[2999]: E1031 14:12:31.935402 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 14:12:31.935481 kubelet[2999]: E1031 14:12:31.935437 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 14:12:31.935549 kubelet[2999]: E1031 14:12:31.935520 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4256t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dc458bdfb-x2fbx_calico-system(5503919a-74e2-4390-b833-02013e670ac6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:31.936826 kubelet[2999]: E1031 14:12:31.936738 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dc458bdfb-x2fbx" podUID="5503919a-74e2-4390-b833-02013e670ac6" Oct 31 14:12:34.233265 containerd[1690]: time="2025-10-31T14:12:34.233158126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 14:12:34.555392 containerd[1690]: time="2025-10-31T14:12:34.555330918Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 
14:12:34.557833 containerd[1690]: time="2025-10-31T14:12:34.557783779Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 14:12:34.558127 containerd[1690]: time="2025-10-31T14:12:34.557849906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 14:12:34.558177 kubelet[2999]: E1031 14:12:34.557979 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 14:12:34.558177 kubelet[2999]: E1031 14:12:34.558016 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 14:12:34.558427 containerd[1690]: time="2025-10-31T14:12:34.558265114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 14:12:34.558743 kubelet[2999]: E1031 14:12:34.558538 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z45x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r5k7t_calico-system(1aeae2c5-1e57-4c36-a44b-9f56d90d27b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:34.559871 kubelet[2999]: E1031 14:12:34.559844 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5k7t" podUID="1aeae2c5-1e57-4c36-a44b-9f56d90d27b3" Oct 31 14:12:35.054584 containerd[1690]: time="2025-10-31T14:12:35.054463418Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:12:35.054855 containerd[1690]: time="2025-10-31T14:12:35.054821896Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 14:12:35.056554 kubelet[2999]: E1031 14:12:35.055049 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 14:12:35.056554 kubelet[2999]: E1031 14:12:35.055089 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 14:12:35.056554 kubelet[2999]: E1031 14:12:35.055172 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqzv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f6ffcff69-wstnf_calico-apiserver(191ace75-d386-424f-8d71-10eff7da195e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:35.058121 kubelet[2999]: E1031 14:12:35.056700 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-wstnf" podUID="191ace75-d386-424f-8d71-10eff7da195e" Oct 31 14:12:35.059406 containerd[1690]: time="2025-10-31T14:12:35.054911637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 14:12:35.233231 containerd[1690]: time="2025-10-31T14:12:35.233157154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 14:12:35.565979 containerd[1690]: time="2025-10-31T14:12:35.565944020Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:12:35.566317 containerd[1690]: time="2025-10-31T14:12:35.566278159Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 14:12:35.566363 containerd[1690]: time="2025-10-31T14:12:35.566348825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 14:12:35.566511 kubelet[2999]: E1031 14:12:35.566471 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 14:12:35.566933 kubelet[2999]: E1031 14:12:35.566516 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 14:12:35.566933 kubelet[2999]: E1031 14:12:35.566628 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qf9gv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f6ffcff69-lv42k_calico-apiserver(ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:35.568167 kubelet[2999]: E1031 14:12:35.568135 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-lv42k" podUID="ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0" Oct 31 14:12:37.232223 containerd[1690]: time="2025-10-31T14:12:37.232185583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 14:12:37.569938 containerd[1690]: time="2025-10-31T14:12:37.569827692Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:12:37.570244 containerd[1690]: time="2025-10-31T14:12:37.570209796Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 14:12:37.570326 containerd[1690]: time="2025-10-31T14:12:37.570279647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 14:12:37.570457 kubelet[2999]: E1031 14:12:37.570414 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 14:12:37.570908 kubelet[2999]: E1031 14:12:37.570463 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 14:12:37.570908 kubelet[2999]: E1031 14:12:37.570600 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2hxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c4674d488-9cl4s_calico-system(2e033899-3e9d-4bfb-ab07-0f3a26593557): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:37.572126 kubelet[2999]: E1031 14:12:37.572092 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c4674d488-9cl4s" podUID="2e033899-3e9d-4bfb-ab07-0f3a26593557" Oct 31 14:12:38.232533 containerd[1690]: time="2025-10-31T14:12:38.232506295Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 14:12:39.116229 containerd[1690]: time="2025-10-31T14:12:39.116161278Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:12:39.116567 containerd[1690]: time="2025-10-31T14:12:39.116518992Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 14:12:39.116874 containerd[1690]: time="2025-10-31T14:12:39.116578833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 14:12:39.116914 kubelet[2999]: E1031 14:12:39.116675 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 14:12:39.116914 kubelet[2999]: E1031 14:12:39.116719 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 14:12:39.116914 kubelet[2999]: E1031 14:12:39.116835 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqqw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wxdrt_calico-system(1f06339a-fad3-4388-83e5-d004196ee955): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:39.119729 containerd[1690]: time="2025-10-31T14:12:39.119570000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 14:12:39.485956 containerd[1690]: time="2025-10-31T14:12:39.485874935Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:12:39.486881 containerd[1690]: time="2025-10-31T14:12:39.486784501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 14:12:39.486881 containerd[1690]: time="2025-10-31T14:12:39.486857796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 14:12:39.487049 kubelet[2999]: E1031 14:12:39.487013 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 14:12:39.487104 kubelet[2999]: E1031 14:12:39.487071 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 14:12:39.487217 kubelet[2999]: E1031 14:12:39.487173 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqqw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wxdrt_calico-system(1f06339a-fad3-4388-83e5-d004196ee955): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:39.488643 kubelet[2999]: E1031 14:12:39.488612 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:12:45.233527 kubelet[2999]: E1031 14:12:45.233473 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dc458bdfb-x2fbx" podUID="5503919a-74e2-4390-b833-02013e670ac6" Oct 31 14:12:46.232332 kubelet[2999]: E1031 14:12:46.232106 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5k7t" podUID="1aeae2c5-1e57-4c36-a44b-9f56d90d27b3" Oct 31 14:12:47.232688 kubelet[2999]: E1031 14:12:47.232625 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-wstnf" podUID="191ace75-d386-424f-8d71-10eff7da195e" Oct 31 14:12:47.723700 containerd[1690]: time="2025-10-31T14:12:47.723673803Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b54aa4fa1c3dfd3f33f578c5fb9912854a15a6a9cc1f0dbe3accb3535fed7296\" id:\"47a7fae8a086d09cf1bf3d721c9c4a12662af67095609a99a41401cc4e163ad4\" pid:5081 exited_at:{seconds:1761919967 nanos:716717442}" Oct 31 14:12:49.232308 kubelet[2999]: E1031 14:12:49.232270 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-lv42k" podUID="ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0" Oct 31 14:12:50.234004 kubelet[2999]: E1031 14:12:50.233908 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c4674d488-9cl4s" podUID="2e033899-3e9d-4bfb-ab07-0f3a26593557" Oct 31 14:12:50.703740 systemd[1]: Started sshd@7-139.178.70.103:22-139.178.68.195:47654.service - OpenSSH per-connection server daemon 
(139.178.68.195:47654). Oct 31 14:12:51.074386 sshd[5100]: Accepted publickey for core from 139.178.68.195 port 47654 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:12:51.080123 sshd-session[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:12:51.084584 systemd-logind[1656]: New session 10 of user core. Oct 31 14:12:51.091204 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 31 14:12:52.135471 sshd[5103]: Connection closed by 139.178.68.195 port 47654 Oct 31 14:12:52.136130 sshd-session[5100]: pam_unix(sshd:session): session closed for user core Oct 31 14:12:52.148430 systemd-logind[1656]: Session 10 logged out. Waiting for processes to exit. Oct 31 14:12:52.156943 systemd[1]: sshd@7-139.178.70.103:22-139.178.68.195:47654.service: Deactivated successfully. Oct 31 14:12:52.158976 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 14:12:52.160979 systemd-logind[1656]: Removed session 10. Oct 31 14:12:52.235372 kubelet[2999]: E1031 14:12:52.234920 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:12:57.147382 systemd[1]: Started sshd@8-139.178.70.103:22-139.178.68.195:57966.service - OpenSSH per-connection server daemon (139.178.68.195:57966). Oct 31 14:12:57.195592 sshd[5118]: Accepted publickey for core from 139.178.68.195 port 57966 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:12:57.196334 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:12:57.199055 systemd-logind[1656]: New session 11 of user core. Oct 31 14:12:57.204875 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 31 14:12:57.290593 sshd[5121]: Connection closed by 139.178.68.195 port 57966 Oct 31 14:12:57.290960 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Oct 31 14:12:57.293660 systemd[1]: sshd@8-139.178.70.103:22-139.178.68.195:57966.service: Deactivated successfully. Oct 31 14:12:57.294999 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 14:12:57.295603 systemd-logind[1656]: Session 11 logged out. Waiting for processes to exit. Oct 31 14:12:57.296285 systemd-logind[1656]: Removed session 11. 
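Every pull failure in the entries above follows the same chain: containerd's resolver receives "404 Not Found" from ghcr.io, surfaces it as a gRPC NotFound error, the kubelet records it as ErrImagePull (log.go, kuberuntime_image.go, kuberuntime_manager.go), and subsequent pod syncs report ImagePullBackOff until the back-off window expires. Whether a tag actually resolves can be checked against the registry's OCI Distribution API directly; the sketch below is illustrative only, assumes anonymous pull access through ghcr.io's token endpoint, and the exact status for a missing or private repository may be 404 or 403 depending on the registry.

    import json
    import urllib.error
    import urllib.request

    REGISTRY = "ghcr.io"
    REPO = "flatcar/calico/apiserver"   # repository from the failing reference above
    TAG = "v3.30.4"                     # tag containerd could not resolve

    # Standard anonymous token-auth flow for a pull scope.
    token_url = (f"https://{REGISTRY}/token?service={REGISTRY}"
                 f"&scope=repository:{REPO}:pull")
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    # GET the manifest for the tag: 200 means the reference resolves, an HTTP
    # error means it does not, which is what containerd reported above.
    manifest_url = f"https://{REGISTRY}/v2/{REPO}/manifests/{TAG}"
    req = urllib.request.Request(manifest_url, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.index.v1+json, "
                  "application/vnd.docker.distribution.manifest.list.v2+json",
    })
    try:
        with urllib.request.urlopen(req) as resp:
            print("manifest found:", resp.status)
    except urllib.error.HTTPError as err:
        print("manifest not resolvable:", err.code)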
Oct 31 14:12:58.234817 containerd[1690]: time="2025-10-31T14:12:58.234035895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 14:12:58.593618 containerd[1690]: time="2025-10-31T14:12:58.593579383Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:12:58.599592 containerd[1690]: time="2025-10-31T14:12:58.599568609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 14:12:58.605019 containerd[1690]: time="2025-10-31T14:12:58.599619164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 14:12:58.605019 containerd[1690]: time="2025-10-31T14:12:58.601527395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 14:12:58.605093 kubelet[2999]: E1031 14:12:58.599718 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 14:12:58.605093 kubelet[2999]: E1031 14:12:58.599756 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 14:12:58.605093 kubelet[2999]: E1031 14:12:58.599856 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:156dac623667454f86a595f0201421ec,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4256t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dc458bdfb-x2fbx_calico-system(5503919a-74e2-4390-b833-02013e670ac6): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:58.937653 containerd[1690]: time="2025-10-31T14:12:58.937394684Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:12:58.940584 containerd[1690]: time="2025-10-31T14:12:58.940561307Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 14:12:58.940634 containerd[1690]: time="2025-10-31T14:12:58.940573272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 14:12:58.940892 kubelet[2999]: E1031 14:12:58.940708 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 14:12:58.940892 kubelet[2999]: E1031 14:12:58.940741 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 14:12:58.940892 kubelet[2999]: E1031 14:12:58.940843 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4256t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dc458bdfb-x2fbx_calico-system(5503919a-74e2-4390-b833-02013e670ac6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 14:12:58.942440 kubelet[2999]: E1031 14:12:58.942015 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dc458bdfb-x2fbx" podUID="5503919a-74e2-4390-b833-02013e670ac6" Oct 31 14:13:00.452035 containerd[1690]: time="2025-10-31T14:13:00.451937541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 14:13:00.827675 containerd[1690]: time="2025-10-31T14:13:00.827629474Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:13:00.831781 containerd[1690]: time="2025-10-31T14:13:00.831758895Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 14:13:00.831890 containerd[1690]: time="2025-10-31T14:13:00.831805505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 14:13:00.831941 kubelet[2999]: E1031 14:13:00.831913 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 14:13:00.832157 kubelet[2999]: E1031 14:13:00.831949 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 14:13:00.832157 kubelet[2999]: E1031 14:13:00.832104 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z45x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r5k7t_calico-system(1aeae2c5-1e57-4c36-a44b-9f56d90d27b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 14:13:00.832588 containerd[1690]: time="2025-10-31T14:13:00.832516592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 14:13:00.834054 kubelet[2999]: E1031 14:13:00.834033 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5k7t" podUID="1aeae2c5-1e57-4c36-a44b-9f56d90d27b3" Oct 31 14:13:01.182198 containerd[1690]: time="2025-10-31T14:13:01.182105981Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:13:01.226145 containerd[1690]: time="2025-10-31T14:13:01.226038058Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 14:13:01.226145 containerd[1690]: time="2025-10-31T14:13:01.226116182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 14:13:01.226444 kubelet[2999]: E1031 14:13:01.226210 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 14:13:01.226444 kubelet[2999]: E1031 14:13:01.226243 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 14:13:01.226444 kubelet[2999]: E1031 14:13:01.226333 2999 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qf9gv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f6ffcff69-lv42k_calico-apiserver(ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 14:13:01.227845 kubelet[2999]: E1031 14:13:01.227758 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-lv42k" podUID="ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0" Oct 31 14:13:01.233207 containerd[1690]: time="2025-10-31T14:13:01.233176380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 14:13:01.595269 containerd[1690]: time="2025-10-31T14:13:01.595235343Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:13:01.595864 containerd[1690]: time="2025-10-31T14:13:01.595614432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 14:13:01.595864 containerd[1690]: time="2025-10-31T14:13:01.595669882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 14:13:01.595911 kubelet[2999]: E1031 14:13:01.595758 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 14:13:01.596090 kubelet[2999]: E1031 14:13:01.595962 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 14:13:01.596090 kubelet[2999]: E1031 14:13:01.596056 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2hxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c4674d488-9cl4s_calico-system(2e033899-3e9d-4bfb-ab07-0f3a26593557): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 14:13:01.597807 kubelet[2999]: E1031 14:13:01.597320 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c4674d488-9cl4s" podUID="2e033899-3e9d-4bfb-ab07-0f3a26593557" Oct 31 14:13:02.233368 containerd[1690]: time="2025-10-31T14:13:02.233293577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 14:13:02.301970 systemd[1]: Started sshd@9-139.178.70.103:22-139.178.68.195:57976.service - OpenSSH per-connection server daemon (139.178.68.195:57976). Oct 31 14:13:02.374876 sshd[5141]: Accepted publickey for core from 139.178.68.195 port 57976 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:13:02.375552 sshd-session[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:13:02.378359 systemd-logind[1656]: New session 12 of user core. Oct 31 14:13:02.387874 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 31 14:13:02.504583 sshd[5144]: Connection closed by 139.178.68.195 port 57976 Oct 31 14:13:02.504960 sshd-session[5141]: pam_unix(sshd:session): session closed for user core Oct 31 14:13:02.512167 systemd[1]: sshd@9-139.178.70.103:22-139.178.68.195:57976.service: Deactivated successfully. Oct 31 14:13:02.513482 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 14:13:02.514152 systemd-logind[1656]: Session 12 logged out. Waiting for processes to exit. Oct 31 14:13:02.516016 systemd[1]: Started sshd@10-139.178.70.103:22-139.178.68.195:57992.service - OpenSSH per-connection server daemon (139.178.68.195:57992). Oct 31 14:13:02.516785 systemd-logind[1656]: Removed session 12. 
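The timing in this journal (first pulls failing around 14:12:35 to 14:12:39, "Back-off pulling image" reports through 14:12:45 to 14:12:52, fresh pull attempts again at 14:13:00 to 14:13:02) is consistent with the kubelet's per-image pull back-off. The sketch below only prints that schedule using the commonly documented defaults of a 10 second initial delay doubling up to a 300 second cap; the constants are assumptions about this kubelet's configuration, not values read from the log.

    # Commonly documented kubelet image-pull back-off: 10s initial delay,
    # doubling per consecutive failure, capped at 300s. Treat the constants
    # as assumptions; effective values depend on the kubelet build/config.
    INITIAL_S = 10
    CAP_S = 300

    def backoff_schedule(failures: int) -> list[int]:
        """Delay in seconds applied before each retry, per consecutive failure."""
        delays, delay = [], INITIAL_S
        for _ in range(failures):
            delays.append(delay)
            delay = min(delay * 2, CAP_S)
        return delays

    print(backoff_schedule(8))   # [10, 20, 40, 80, 160, 300, 300, 300]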
Oct 31 14:13:02.583617 sshd[5159]: Accepted publickey for core from 139.178.68.195 port 57992 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:13:02.584904 sshd-session[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:13:02.588332 systemd-logind[1656]: New session 13 of user core. Oct 31 14:13:02.589291 containerd[1690]: time="2025-10-31T14:13:02.588757584Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:13:02.589291 containerd[1690]: time="2025-10-31T14:13:02.589142231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 14:13:02.589291 containerd[1690]: time="2025-10-31T14:13:02.589199475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 14:13:02.590543 kubelet[2999]: E1031 14:13:02.589284 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 14:13:02.590543 kubelet[2999]: E1031 14:13:02.589320 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 14:13:02.590543 kubelet[2999]: E1031 14:13:02.589434 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqzv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f6ffcff69-wstnf_calico-apiserver(191ace75-d386-424f-8d71-10eff7da195e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 14:13:02.591618 kubelet[2999]: E1031 14:13:02.591590 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-wstnf" podUID="191ace75-d386-424f-8d71-10eff7da195e" Oct 31 14:13:02.592918 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 31 14:13:02.703296 sshd[5162]: Connection closed by 139.178.68.195 port 57992 Oct 31 14:13:02.703532 sshd-session[5159]: pam_unix(sshd:session): session closed for user core Oct 31 14:13:02.712743 systemd[1]: sshd@10-139.178.70.103:22-139.178.68.195:57992.service: Deactivated successfully. Oct 31 14:13:02.716438 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 14:13:02.717837 systemd-logind[1656]: Session 13 logged out. Waiting for processes to exit. Oct 31 14:13:02.723005 systemd[1]: Started sshd@11-139.178.70.103:22-139.178.68.195:58000.service - OpenSSH per-connection server daemon (139.178.68.195:58000). Oct 31 14:13:02.725127 systemd-logind[1656]: Removed session 13. Oct 31 14:13:02.767826 sshd[5172]: Accepted publickey for core from 139.178.68.195 port 58000 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:13:02.768636 sshd-session[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:13:02.771981 systemd-logind[1656]: New session 14 of user core. Oct 31 14:13:02.775924 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 31 14:13:02.834614 sshd[5175]: Connection closed by 139.178.68.195 port 58000 Oct 31 14:13:02.834995 sshd-session[5172]: pam_unix(sshd:session): session closed for user core Oct 31 14:13:02.837387 systemd[1]: sshd@11-139.178.70.103:22-139.178.68.195:58000.service: Deactivated successfully. Oct 31 14:13:02.838732 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 14:13:02.839342 systemd-logind[1656]: Session 14 logged out. Waiting for processes to exit. Oct 31 14:13:02.840175 systemd-logind[1656]: Removed session 14. 
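Each failure in these entries is reported twice in machine-readable form: a kubelet "Failed to pull image" line carrying image="...", and a pod_workers "Error syncing pod, skipping" line carrying pod="namespace/name" and podUID="...". A short parser over an exported journal (one entry per line, as journalctl prints them) can reduce the repetition to a summary of missing references and blocked pods; the field names come from the lines above, while the input path and output format are only illustrative.

    import re
    from collections import defaultdict

    # Field patterns as they appear in the kubelet entries above.
    IMAGE_RE = re.compile(r'"Failed to pull image".*?image="([^"]+)"')
    POD_RE = re.compile(r'"Error syncing pod, skipping".*?pod="([^"]+)".*?podUID="([^"]+)"')

    def summarize(journal_text: str) -> None:
        pull_failures = defaultdict(int)   # image reference -> failure count
        blocked_pods = {}                  # namespace/name -> podUID
        for line in journal_text.splitlines():
            m = IMAGE_RE.search(line)
            if m:
                pull_failures[m.group(1)] += 1
            m = POD_RE.search(line)
            if m:
                blocked_pods[m.group(1)] = m.group(2)
        for image, count in sorted(pull_failures.items()):
            print(f"{count:3d}x  {image}")
        for pod, uid in sorted(blocked_pods.items()):
            print(f"blocked: {pod} ({uid})")

    # Illustrative usage; the path is hypothetical:
    # summarize(open("/tmp/journal.txt", encoding="utf-8").read())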
Oct 31 14:13:04.234056 containerd[1690]: time="2025-10-31T14:13:04.233860489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 14:13:04.604407 containerd[1690]: time="2025-10-31T14:13:04.604302172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:13:04.607226 containerd[1690]: time="2025-10-31T14:13:04.607197345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 14:13:04.607291 containerd[1690]: time="2025-10-31T14:13:04.607257516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 14:13:04.607605 kubelet[2999]: E1031 14:13:04.607428 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 14:13:04.607605 kubelet[2999]: E1031 14:13:04.607465 2999 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 14:13:04.607605 kubelet[2999]: E1031 14:13:04.607564 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqqw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wxdrt_calico-system(1f06339a-fad3-4388-83e5-d004196ee955): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 14:13:04.610307 containerd[1690]: time="2025-10-31T14:13:04.610286109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 14:13:05.089405 containerd[1690]: time="2025-10-31T14:13:05.089326856Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:13:05.096695 containerd[1690]: time="2025-10-31T14:13:05.096661545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 14:13:05.096797 containerd[1690]: time="2025-10-31T14:13:05.096722269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 14:13:05.096849 kubelet[2999]: E1031 14:13:05.096821 2999 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 14:13:05.096895 kubelet[2999]: E1031 14:13:05.096856 2999 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 14:13:05.096961 kubelet[2999]: E1031 14:13:05.096934 2999 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqqw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wxdrt_calico-system(1f06339a-fad3-4388-83e5-d004196ee955): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 14:13:05.098811 kubelet[2999]: E1031 14:13:05.098257 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:13:07.844941 systemd[1]: Started sshd@12-139.178.70.103:22-139.178.68.195:34214.service - OpenSSH per-connection server daemon (139.178.68.195:34214). Oct 31 14:13:07.886825 sshd[5193]: Accepted publickey for core from 139.178.68.195 port 34214 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:13:07.887746 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:13:07.890969 systemd-logind[1656]: New session 15 of user core. Oct 31 14:13:07.896923 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 31 14:13:07.954516 sshd[5196]: Connection closed by 139.178.68.195 port 34214 Oct 31 14:13:07.955010 sshd-session[5193]: pam_unix(sshd:session): session closed for user core Oct 31 14:13:07.958210 systemd[1]: sshd@12-139.178.70.103:22-139.178.68.195:34214.service: Deactivated successfully. Oct 31 14:13:07.959762 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 14:13:07.961040 systemd-logind[1656]: Session 15 logged out. Waiting for processes to exit. Oct 31 14:13:07.961778 systemd-logind[1656]: Removed session 15. Oct 31 14:13:10.233323 kubelet[2999]: E1031 14:13:10.233259 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dc458bdfb-x2fbx" podUID="5503919a-74e2-4390-b833-02013e670ac6" Oct 31 14:13:12.971358 systemd[1]: Started sshd@13-139.178.70.103:22-139.178.68.195:59392.service - OpenSSH per-connection server daemon (139.178.68.195:59392). Oct 31 14:13:13.029625 sshd[5209]: Accepted publickey for core from 139.178.68.195 port 59392 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:13:13.036735 sshd-session[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:13:13.039497 systemd-logind[1656]: New session 16 of user core. Oct 31 14:13:13.042934 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 31 14:13:13.153149 sshd[5212]: Connection closed by 139.178.68.195 port 59392 Oct 31 14:13:13.153077 sshd-session[5209]: pam_unix(sshd:session): session closed for user core Oct 31 14:13:13.156634 systemd[1]: sshd@13-139.178.70.103:22-139.178.68.195:59392.service: Deactivated successfully. Oct 31 14:13:13.158159 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 14:13:13.158949 systemd-logind[1656]: Session 16 logged out. Waiting for processes to exit. Oct 31 14:13:13.159747 systemd-logind[1656]: Removed session 16. 
Oct 31 14:13:14.232047 kubelet[2999]: E1031 14:13:14.232001 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5k7t" podUID="1aeae2c5-1e57-4c36-a44b-9f56d90d27b3" Oct 31 14:13:15.232435 kubelet[2999]: E1031 14:13:15.232409 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-lv42k" podUID="ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0" Oct 31 14:13:16.233383 kubelet[2999]: E1031 14:13:16.232826 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c4674d488-9cl4s" podUID="2e033899-3e9d-4bfb-ab07-0f3a26593557" Oct 31 14:13:16.234090 kubelet[2999]: E1031 14:13:16.233581 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:13:17.735704 containerd[1690]: time="2025-10-31T14:13:17.735667218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b54aa4fa1c3dfd3f33f578c5fb9912854a15a6a9cc1f0dbe3accb3535fed7296\" id:\"fec7a49cbe8e6d0c6a0d9683a27d2b5f80f7afb122bb25c288a74d8e252d1344\" pid:5236 exited_at:{seconds:1761919997 nanos:735382826}" Oct 31 14:13:18.165780 systemd[1]: Started sshd@14-139.178.70.103:22-139.178.68.195:59402.service - OpenSSH per-connection server daemon (139.178.68.195:59402). 
Oct 31 14:13:18.233821 kubelet[2999]: E1031 14:13:18.233183 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-wstnf" podUID="191ace75-d386-424f-8d71-10eff7da195e" Oct 31 14:13:19.092718 sshd[5248]: Accepted publickey for core from 139.178.68.195 port 59402 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:13:19.094463 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:13:19.097739 systemd-logind[1656]: New session 17 of user core. Oct 31 14:13:19.104068 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 31 14:13:19.277871 sshd[5252]: Connection closed by 139.178.68.195 port 59402 Oct 31 14:13:19.278996 sshd-session[5248]: pam_unix(sshd:session): session closed for user core Oct 31 14:13:19.284852 systemd-logind[1656]: Session 17 logged out. Waiting for processes to exit. Oct 31 14:13:19.285310 systemd[1]: sshd@14-139.178.70.103:22-139.178.68.195:59402.service: Deactivated successfully. Oct 31 14:13:19.286657 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 14:13:19.288010 systemd-logind[1656]: Removed session 17. Oct 31 14:13:22.235397 kubelet[2999]: E1031 14:13:22.235367 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dc458bdfb-x2fbx" podUID="5503919a-74e2-4390-b833-02013e670ac6" Oct 31 14:13:24.289066 systemd[1]: Started sshd@15-139.178.70.103:22-139.178.68.195:37708.service - OpenSSH per-connection server daemon (139.178.68.195:37708). Oct 31 14:13:24.347707 sshd[5265]: Accepted publickey for core from 139.178.68.195 port 37708 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:13:24.348415 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:13:24.353586 systemd-logind[1656]: New session 18 of user core. Oct 31 14:13:24.359207 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 31 14:13:24.412352 sshd[5268]: Connection closed by 139.178.68.195 port 37708 Oct 31 14:13:24.413384 sshd-session[5265]: pam_unix(sshd:session): session closed for user core Oct 31 14:13:24.420033 systemd[1]: sshd@15-139.178.70.103:22-139.178.68.195:37708.service: Deactivated successfully. 
Oct 31 14:13:24.421263 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 14:13:24.421840 systemd-logind[1656]: Session 18 logged out. Waiting for processes to exit. Oct 31 14:13:24.424235 systemd[1]: Started sshd@16-139.178.70.103:22-139.178.68.195:37714.service - OpenSSH per-connection server daemon (139.178.68.195:37714). Oct 31 14:13:24.424837 systemd-logind[1656]: Removed session 18. Oct 31 14:13:24.460357 sshd[5279]: Accepted publickey for core from 139.178.68.195 port 37714 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:13:24.461298 sshd-session[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:13:24.464829 systemd-logind[1656]: New session 19 of user core. Oct 31 14:13:24.471892 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 31 14:13:24.930160 sshd[5282]: Connection closed by 139.178.68.195 port 37714 Oct 31 14:13:24.930667 sshd-session[5279]: pam_unix(sshd:session): session closed for user core Oct 31 14:13:24.938252 systemd[1]: sshd@16-139.178.70.103:22-139.178.68.195:37714.service: Deactivated successfully. Oct 31 14:13:24.939387 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 14:13:24.939977 systemd-logind[1656]: Session 19 logged out. Waiting for processes to exit. Oct 31 14:13:24.941527 systemd[1]: Started sshd@17-139.178.70.103:22-139.178.68.195:37730.service - OpenSSH per-connection server daemon (139.178.68.195:37730). Oct 31 14:13:24.942436 systemd-logind[1656]: Removed session 19. Oct 31 14:13:25.047459 sshd[5293]: Accepted publickey for core from 139.178.68.195 port 37730 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:13:25.048029 sshd-session[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:13:25.052308 systemd-logind[1656]: New session 20 of user core. Oct 31 14:13:25.058919 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 31 14:13:25.750111 sshd[5296]: Connection closed by 139.178.68.195 port 37730 Oct 31 14:13:25.749837 sshd-session[5293]: pam_unix(sshd:session): session closed for user core Oct 31 14:13:25.762088 systemd[1]: Started sshd@18-139.178.70.103:22-139.178.68.195:37744.service - OpenSSH per-connection server daemon (139.178.68.195:37744). Oct 31 14:13:25.762374 systemd[1]: sshd@17-139.178.70.103:22-139.178.68.195:37730.service: Deactivated successfully. Oct 31 14:13:25.764654 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 14:13:25.766654 systemd-logind[1656]: Session 20 logged out. Waiting for processes to exit. Oct 31 14:13:25.768238 systemd-logind[1656]: Removed session 20. Oct 31 14:13:25.842519 sshd[5308]: Accepted publickey for core from 139.178.68.195 port 37744 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:13:25.844100 sshd-session[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:13:25.848753 systemd-logind[1656]: New session 21 of user core. Oct 31 14:13:25.852953 systemd[1]: Started session-21.scope - Session 21 of User core. 
Oct 31 14:13:26.235968 kubelet[2999]: E1031 14:13:26.234850 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5k7t" podUID="1aeae2c5-1e57-4c36-a44b-9f56d90d27b3" Oct 31 14:13:26.325909 sshd[5318]: Connection closed by 139.178.68.195 port 37744 Oct 31 14:13:26.326733 sshd-session[5308]: pam_unix(sshd:session): session closed for user core Oct 31 14:13:26.334771 systemd[1]: sshd@18-139.178.70.103:22-139.178.68.195:37744.service: Deactivated successfully. Oct 31 14:13:26.337309 systemd[1]: session-21.scope: Deactivated successfully. Oct 31 14:13:26.341544 systemd-logind[1656]: Session 21 logged out. Waiting for processes to exit. Oct 31 14:13:26.346933 systemd[1]: Started sshd@19-139.178.70.103:22-139.178.68.195:37748.service - OpenSSH per-connection server daemon (139.178.68.195:37748). Oct 31 14:13:26.348038 systemd-logind[1656]: Removed session 21. Oct 31 14:13:26.395064 sshd[5328]: Accepted publickey for core from 139.178.68.195 port 37748 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:13:26.396190 sshd-session[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:13:26.402525 systemd-logind[1656]: New session 22 of user core. Oct 31 14:13:26.405962 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 31 14:13:26.505124 sshd[5331]: Connection closed by 139.178.68.195 port 37748 Oct 31 14:13:26.505665 sshd-session[5328]: pam_unix(sshd:session): session closed for user core Oct 31 14:13:26.510341 systemd[1]: sshd@19-139.178.70.103:22-139.178.68.195:37748.service: Deactivated successfully. Oct 31 14:13:26.513734 systemd[1]: session-22.scope: Deactivated successfully. Oct 31 14:13:26.516196 systemd-logind[1656]: Session 22 logged out. Waiting for processes to exit. Oct 31 14:13:26.517184 systemd-logind[1656]: Removed session 22. 
Oct 31 14:13:27.232850 kubelet[2999]: E1031 14:13:27.232816 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-lv42k" podUID="ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0" Oct 31 14:13:28.233884 kubelet[2999]: E1031 14:13:28.233737 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c4674d488-9cl4s" podUID="2e033899-3e9d-4bfb-ab07-0f3a26593557" Oct 31 14:13:30.241970 kubelet[2999]: E1031 14:13:30.241938 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-wstnf" podUID="191ace75-d386-424f-8d71-10eff7da195e" Oct 31 14:13:31.233023 kubelet[2999]: E1031 14:13:31.232986 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955" Oct 31 14:13:31.513709 systemd[1]: Started sshd@20-139.178.70.103:22-139.178.68.195:37764.service - OpenSSH per-connection server daemon (139.178.68.195:37764). Oct 31 14:13:31.556581 sshd[5344]: Accepted publickey for core from 139.178.68.195 port 37764 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:13:31.557610 sshd-session[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:13:31.560636 systemd-logind[1656]: New session 23 of user core. 
Oct 31 14:13:31.563871 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 31 14:13:31.641811 sshd[5347]: Connection closed by 139.178.68.195 port 37764 Oct 31 14:13:31.641441 sshd-session[5344]: pam_unix(sshd:session): session closed for user core Oct 31 14:13:31.645138 systemd-logind[1656]: Session 23 logged out. Waiting for processes to exit. Oct 31 14:13:31.645295 systemd[1]: sshd@20-139.178.70.103:22-139.178.68.195:37764.service: Deactivated successfully. Oct 31 14:13:31.646447 systemd[1]: session-23.scope: Deactivated successfully. Oct 31 14:13:31.647654 systemd-logind[1656]: Removed session 23. Oct 31 14:13:35.232929 kubelet[2999]: E1031 14:13:35.232898 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dc458bdfb-x2fbx" podUID="5503919a-74e2-4390-b833-02013e670ac6" Oct 31 14:13:36.649762 systemd[1]: Started sshd@21-139.178.70.103:22-139.178.68.195:41316.service - OpenSSH per-connection server daemon (139.178.68.195:41316). Oct 31 14:13:36.732107 sshd[5361]: Accepted publickey for core from 139.178.68.195 port 41316 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:13:36.732915 sshd-session[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:13:36.737940 systemd-logind[1656]: New session 24 of user core. Oct 31 14:13:36.743880 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 31 14:13:36.817599 sshd[5364]: Connection closed by 139.178.68.195 port 41316 Oct 31 14:13:36.818002 sshd-session[5361]: pam_unix(sshd:session): session closed for user core Oct 31 14:13:36.821614 systemd-logind[1656]: Session 24 logged out. Waiting for processes to exit. Oct 31 14:13:36.821764 systemd[1]: sshd@21-139.178.70.103:22-139.178.68.195:41316.service: Deactivated successfully. Oct 31 14:13:36.823972 systemd[1]: session-24.scope: Deactivated successfully. Oct 31 14:13:36.825863 systemd-logind[1656]: Removed session 24. 
Oct 31 14:13:37.232356 kubelet[2999]: E1031 14:13:37.232315 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5k7t" podUID="1aeae2c5-1e57-4c36-a44b-9f56d90d27b3" Oct 31 14:13:40.236870 kubelet[2999]: E1031 14:13:40.236436 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-lv42k" podUID="ea6f82f9-8ca6-4f27-9ac4-6759edba2cc0" Oct 31 14:13:40.237628 kubelet[2999]: E1031 14:13:40.236584 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c4674d488-9cl4s" podUID="2e033899-3e9d-4bfb-ab07-0f3a26593557" Oct 31 14:13:41.828740 systemd[1]: Started sshd@22-139.178.70.103:22-139.178.68.195:41318.service - OpenSSH per-connection server daemon (139.178.68.195:41318). Oct 31 14:13:41.871742 sshd[5383]: Accepted publickey for core from 139.178.68.195 port 41318 ssh2: RSA SHA256:zmvueKxp0qYlaAir+MfCpW4n8cnAsn6tp39Aw3BU0Cs Oct 31 14:13:41.872728 sshd-session[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:13:41.876050 systemd-logind[1656]: New session 25 of user core. Oct 31 14:13:41.881907 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 31 14:13:41.969681 sshd[5386]: Connection closed by 139.178.68.195 port 41318 Oct 31 14:13:41.970964 sshd-session[5383]: pam_unix(sshd:session): session closed for user core Oct 31 14:13:41.977861 systemd[1]: sshd@22-139.178.70.103:22-139.178.68.195:41318.service: Deactivated successfully. Oct 31 14:13:41.978928 systemd[1]: session-25.scope: Deactivated successfully. Oct 31 14:13:41.979423 systemd-logind[1656]: Session 25 logged out. Waiting for processes to exit. Oct 31 14:13:41.980435 systemd-logind[1656]: Removed session 25. 
Oct 31 14:13:42.233523 kubelet[2999]: E1031 14:13:42.233107 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f6ffcff69-wstnf" podUID="191ace75-d386-424f-8d71-10eff7da195e" Oct 31 14:13:44.276240 kubelet[2999]: E1031 14:13:44.276209 2999 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wxdrt" podUID="1f06339a-fad3-4388-83e5-d004196ee955"
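After the initial ErrImagePull, the later entries for each pod report ImagePullBackOff, which is kubelet deferring the next pull attempt rather than retrying immediately. A minimal sketch of that doubling delay follows; the 10-second initial delay and 300-second cap are the commonly documented kubelet defaults and are assumed here for illustration, since the actual values cannot be read out of this log.

// Minimal sketch of an exponential image-pull back-off with assumed defaults
// (10s initial delay, doubling, capped at 300s). Values are assumptions.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second
	const maxDelay = 300 * time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: wait %s before the next pull\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}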