Nov 5 15:41:06.627611 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025
Nov 5 15:41:06.627637 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:41:06.627644 kernel: Disabled fast string operations
Nov 5 15:41:06.627649 kernel: BIOS-provided physical RAM map:
Nov 5 15:41:06.627653 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Nov 5 15:41:06.627658 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Nov 5 15:41:06.627667 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Nov 5 15:41:06.627675 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Nov 5 15:41:06.627683 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Nov 5 15:41:06.627691 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Nov 5 15:41:06.627696 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Nov 5 15:41:06.627700 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Nov 5 15:41:06.627705 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Nov 5 15:41:06.627710 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Nov 5 15:41:06.627718 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Nov 5 15:41:06.627723 kernel: NX (Execute Disable) protection: active
Nov 5 15:41:06.627728 kernel: APIC: Static calls initialized
Nov 5 15:41:06.627734 kernel: SMBIOS 2.7 present.
Nov 5 15:41:06.627739 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Nov 5 15:41:06.627744 kernel: DMI: Memory slots populated: 1/128
Nov 5 15:41:06.627754 kernel: vmware: hypercall mode: 0x00
Nov 5 15:41:06.627761 kernel: Hypervisor detected: VMware
Nov 5 15:41:06.627766 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Nov 5 15:41:06.627771 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Nov 5 15:41:06.627776 kernel: vmware: using clock offset of 3739529440 ns
Nov 5 15:41:06.627782 kernel: tsc: Detected 3408.000 MHz processor
Nov 5 15:41:06.627788 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 15:41:06.627794 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 15:41:06.627800 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Nov 5 15:41:06.627806 kernel: total RAM covered: 3072M
Nov 5 15:41:06.627812 kernel: Found optimal setting for mtrr clean up
Nov 5 15:41:06.627818 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Nov 5 15:41:06.627824 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Nov 5 15:41:06.627829 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 15:41:06.627835 kernel: Using GB pages for direct mapping
Nov 5 15:41:06.627841 kernel: ACPI: Early table checksum verification disabled
Nov 5 15:41:06.627846 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Nov 5 15:41:06.627853 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Nov 5 15:41:06.627859 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Nov 5 15:41:06.627865 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Nov 5 15:41:06.627872 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Nov 5 15:41:06.627878 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Nov 5 15:41:06.627885 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Nov 5 15:41:06.627891 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Nov 5 15:41:06.627896 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Nov 5 15:41:06.627902 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Nov 5 15:41:06.627908 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Nov 5 15:41:06.627914 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Nov 5 15:41:06.627921 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Nov 5 15:41:06.627927 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Nov 5 15:41:06.627933 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Nov 5 15:41:06.627938 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Nov 5 15:41:06.627944 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Nov 5 15:41:06.627949 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Nov 5 15:41:06.627955 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Nov 5 15:41:06.627961 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Nov 5 15:41:06.627970 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Nov 5 15:41:06.627976 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Nov 5 15:41:06.627982 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 5 15:41:06.627987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 5 15:41:06.627993 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Nov 5 15:41:06.627999 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff]
Nov 5 15:41:06.628007 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff]
Nov 5 15:41:06.628017 kernel: Zone ranges:
Nov 5 15:41:06.628023 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 15:41:06.628029 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Nov 5 15:41:06.628034 kernel: Normal empty
Nov 5 15:41:06.628040 kernel: Device empty
Nov 5 15:41:06.628046 kernel: Movable zone start for each node
Nov 5 15:41:06.628052 kernel: Early memory node ranges
Nov 5 15:41:06.628057 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Nov 5 15:41:06.628064 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Nov 5 15:41:06.628070 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Nov 5 15:41:06.628075 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Nov 5 15:41:06.628083 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 15:41:06.628090 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Nov 5 15:41:06.628096 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Nov 5 15:41:06.628102 kernel: ACPI: PM-Timer IO Port: 0x1008
Nov 5 15:41:06.628109 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Nov 5 15:41:06.628115 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Nov 5 15:41:06.628120 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Nov 5 15:41:06.628126 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Nov 5 15:41:06.628132 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Nov 5 15:41:06.628137 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Nov 5 15:41:06.628145 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Nov 5 15:41:06.628151 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Nov 5 15:41:06.628158 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Nov 5 15:41:06.628163 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Nov 5 15:41:06.628169 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Nov 5 15:41:06.628174 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Nov 5 15:41:06.628180 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Nov 5 15:41:06.628185 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Nov 5 15:41:06.628191 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Nov 5 15:41:06.628196 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Nov 5 15:41:06.628204 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Nov 5 15:41:06.628214 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Nov 5 15:41:06.628221 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Nov 5 15:41:06.628227 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Nov 5 15:41:06.628232 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Nov 5 15:41:06.628238 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Nov 5 15:41:06.628243 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Nov 5 15:41:06.628249 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Nov 5 15:41:06.628255 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Nov 5 15:41:06.628261 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Nov 5 15:41:06.628267 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Nov 5 15:41:06.628272 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Nov 5 15:41:06.628278 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Nov 5 15:41:06.628284 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Nov 5 15:41:06.628292 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Nov 5 15:41:06.628300 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Nov 5 15:41:06.628674 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Nov 5 15:41:06.628684 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Nov 5 15:41:06.628690 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Nov 5 15:41:06.628696 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Nov 5 15:41:06.628701 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Nov 5 15:41:06.628707 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Nov 5 15:41:06.628712 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Nov 5 15:41:06.628719 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Nov 5 15:41:06.628729 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Nov 5 15:41:06.628735 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Nov 5 15:41:06.628741 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Nov 5 15:41:06.628748 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Nov 5 15:41:06.628754 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Nov 5 15:41:06.628759 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Nov 5 15:41:06.628766 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Nov 5 15:41:06.628771 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Nov 5 15:41:06.628779 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Nov 5 15:41:06.628785 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Nov 5 15:41:06.628790 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Nov 5 15:41:06.628796 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Nov 5 15:41:06.628802 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Nov 5 15:41:06.628810 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Nov 5 15:41:06.628818 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Nov 5 15:41:06.628827 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Nov 5 15:41:06.628839 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Nov 5 15:41:06.628848 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Nov 5 15:41:06.628854 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Nov 5 15:41:06.628859 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Nov 5 15:41:06.628865 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Nov 5 15:41:06.628871 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Nov 5 15:41:06.628878 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Nov 5 15:41:06.628884 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Nov 5 15:41:06.628894 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Nov 5 15:41:06.628899 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Nov 5 15:41:06.628905 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Nov 5 15:41:06.628912 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Nov 5 15:41:06.628921 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Nov 5 15:41:06.628927 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Nov 5 15:41:06.628933 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Nov 5 15:41:06.628939 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Nov 5 15:41:06.628946 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Nov 5 15:41:06.628955 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Nov 5 15:41:06.628963 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Nov 5 15:41:06.628969 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Nov 5 15:41:06.628975 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Nov 5 15:41:06.628981 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Nov 5 15:41:06.628987 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Nov 5 15:41:06.628996 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Nov 5 15:41:06.629003 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Nov 5 15:41:06.629010 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Nov 5 15:41:06.629016 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Nov 5 15:41:06.629023 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Nov 5 15:41:06.629032 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Nov 5 15:41:06.629043 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Nov 5 15:41:06.629051 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Nov 5 15:41:06.629057 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Nov 5 15:41:06.629063 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Nov 5 15:41:06.629070 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Nov 5 15:41:06.629077 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Nov 5 15:41:06.629083 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Nov 5 15:41:06.629089 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Nov 5 15:41:06.629095 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Nov 5 15:41:06.629101 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Nov 5 15:41:06.629106 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Nov 5 15:41:06.629112 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Nov 5 15:41:06.629119 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Nov 5 15:41:06.629125 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Nov 5 15:41:06.629131 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Nov 5 15:41:06.629137 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Nov 5 15:41:06.629142 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Nov 5 15:41:06.629148 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Nov 5 15:41:06.629154 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Nov 5 15:41:06.629160 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Nov 5 15:41:06.629166 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Nov 5 15:41:06.629173 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Nov 5 15:41:06.629178 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Nov 5 15:41:06.629185 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Nov 5 15:41:06.629190 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Nov 5 15:41:06.629196 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Nov 5 15:41:06.629202 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Nov 5 15:41:06.629208 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Nov 5 15:41:06.629215 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Nov 5 15:41:06.629221 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Nov 5 15:41:06.629227 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Nov 5 15:41:06.629235 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Nov 5 15:41:06.629242 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Nov 5 15:41:06.629247 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Nov 5 15:41:06.629253 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Nov 5 15:41:06.629259 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Nov 5 15:41:06.629265 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Nov 5 15:41:06.629275 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Nov 5 15:41:06.629282 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Nov 5 15:41:06.629288 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Nov 5 15:41:06.629294 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Nov 5 15:41:06.629299 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Nov 5 15:41:06.629320 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Nov 5 15:41:06.629326 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Nov 5 15:41:06.629332 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Nov 5 15:41:06.629342 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 15:41:06.629350 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Nov 5 15:41:06.629356 kernel: TSC deadline timer available
Nov 5 15:41:06.629362 kernel: CPU topo: Max. logical packages: 128
Nov 5 15:41:06.629368 kernel: CPU topo: Max. logical dies: 128
Nov 5 15:41:06.629375 kernel: CPU topo: Max. dies per package: 1
Nov 5 15:41:06.629382 kernel: CPU topo: Max. threads per core: 1
Nov 5 15:41:06.629393 kernel: CPU topo: Num. cores per package: 1
Nov 5 15:41:06.629399 kernel: CPU topo: Num. threads per package: 1
Nov 5 15:41:06.629405 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs
Nov 5 15:41:06.629411 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Nov 5 15:41:06.629417 kernel: Booting paravirtualized kernel on VMware hypervisor
Nov 5 15:41:06.629424 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 15:41:06.629430 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Nov 5 15:41:06.629437 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Nov 5 15:41:06.629444 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Nov 5 15:41:06.629451 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Nov 5 15:41:06.629456 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Nov 5 15:41:06.629462 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Nov 5 15:41:06.629468 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Nov 5 15:41:06.629474 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Nov 5 15:41:06.629481 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Nov 5 15:41:06.629488 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Nov 5 15:41:06.629494 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Nov 5 15:41:06.629500 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Nov 5 15:41:06.629506 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Nov 5 15:41:06.629512 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Nov 5 15:41:06.629518 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Nov 5 15:41:06.629524 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Nov 5 15:41:06.629531 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Nov 5 15:41:06.629536 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Nov 5 15:41:06.629542 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Nov 5 15:41:06.629549 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:41:06.629556 kernel: random: crng init done
Nov 5 15:41:06.629564 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Nov 5 15:41:06.629572 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Nov 5 15:41:06.629578 kernel: printk: log_buf_len min size: 262144 bytes
Nov 5 15:41:06.629584 kernel: printk: log_buf_len: 1048576 bytes
Nov 5 15:41:06.629590 kernel: printk: early log buf free: 245688(93%)
Nov 5 15:41:06.629596 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 15:41:06.629603 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 5 15:41:06.629609 kernel: Fallback order for Node 0: 0
Nov 5 15:41:06.629615 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524157
Nov 5 15:41:06.629622 kernel: Policy zone: DMA32
Nov 5 15:41:06.629628 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 15:41:06.629637 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Nov 5 15:41:06.629643 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 15:41:06.629649 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 15:41:06.629655 kernel: Dynamic Preempt: voluntary
Nov 5 15:41:06.629661 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 15:41:06.629671 kernel: rcu: RCU event tracing is enabled.
Nov 5 15:41:06.629679 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Nov 5 15:41:06.629686 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 15:41:06.629692 kernel: Rude variant of Tasks RCU enabled.
Nov 5 15:41:06.629698 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 15:41:06.629704 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 15:41:06.629710 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Nov 5 15:41:06.629717 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 5 15:41:06.629726 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 5 15:41:06.629733 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 5 15:41:06.629739 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Nov 5 15:41:06.629746 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Nov 5 15:41:06.629756 kernel: Console: colour VGA+ 80x25
Nov 5 15:41:06.629763 kernel: printk: legacy console [tty0] enabled
Nov 5 15:41:06.629769 kernel: printk: legacy console [ttyS0] enabled
Nov 5 15:41:06.629777 kernel: ACPI: Core revision 20240827
Nov 5 15:41:06.629783 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Nov 5 15:41:06.629789 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 15:41:06.629795 kernel: x2apic enabled
Nov 5 15:41:06.629802 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 15:41:06.629812 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 5 15:41:06.629818 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Nov 5 15:41:06.629826 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Nov 5 15:41:06.629832 kernel: Disabled fast string operations
Nov 5 15:41:06.629839 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 5 15:41:06.629845 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 5 15:41:06.629851 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 15:41:06.629857 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Nov 5 15:41:06.629863 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 5 15:41:06.629870 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 5 15:41:06.629877 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 5 15:41:06.629883 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 5 15:41:06.629889 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 5 15:41:06.629895 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 5 15:41:06.629902 kernel: SRBDS: Unknown: Dependent on hypervisor status
Nov 5 15:41:06.629910 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 5 15:41:06.629917 kernel: active return thunk: its_return_thunk
Nov 5 15:41:06.629924 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 5 15:41:06.629930 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 15:41:06.629936 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 15:41:06.629942 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 15:41:06.629949 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 15:41:06.629955 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 5 15:41:06.629962 kernel: Freeing SMP alternatives memory: 32K
Nov 5 15:41:06.629968 kernel: pid_max: default: 131072 minimum: 1024
Nov 5 15:41:06.629974 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 15:41:06.629981 kernel: landlock: Up and running.
Nov 5 15:41:06.629987 kernel: SELinux: Initializing.
Nov 5 15:41:06.629993 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 5 15:41:06.630000 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 5 15:41:06.630007 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Nov 5 15:41:06.630013 kernel: Performance Events: Skylake events, core PMU driver.
Nov 5 15:41:06.630019 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Nov 5 15:41:06.630026 kernel: core: CPUID marked event: 'instructions' unavailable
Nov 5 15:41:06.630032 kernel: core: CPUID marked event: 'bus cycles' unavailable
Nov 5 15:41:06.630037 kernel: core: CPUID marked event: 'cache references' unavailable
Nov 5 15:41:06.630045 kernel: core: CPUID marked event: 'cache misses' unavailable
Nov 5 15:41:06.630052 kernel: core: CPUID marked event: 'branch instructions' unavailable
Nov 5 15:41:06.630058 kernel: core: CPUID marked event: 'branch misses' unavailable
Nov 5 15:41:06.630064 kernel: ... version: 1
Nov 5 15:41:06.630070 kernel: ... bit width: 48
Nov 5 15:41:06.630076 kernel: ... generic registers: 4
Nov 5 15:41:06.630083 kernel: ... value mask: 0000ffffffffffff
Nov 5 15:41:06.630092 kernel: ... max period: 000000007fffffff
Nov 5 15:41:06.630099 kernel: ... fixed-purpose events: 0
Nov 5 15:41:06.630105 kernel: ... event mask: 000000000000000f
Nov 5 15:41:06.630111 kernel: signal: max sigframe size: 1776
Nov 5 15:41:06.630118 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 15:41:06.630124 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 15:41:06.630130 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level
Nov 5 15:41:06.630136 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 5 15:41:06.630142 kernel: smp: Bringing up secondary CPUs ...
Nov 5 15:41:06.630149 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 15:41:06.630156 kernel: .... node #0, CPUs: #1
Nov 5 15:41:06.630163 kernel: Disabled fast string operations
Nov 5 15:41:06.630171 kernel: smp: Brought up 1 node, 2 CPUs
Nov 5 15:41:06.630178 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Nov 5 15:41:06.630184 kernel: Memory: 1946772K/2096628K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 138472K reserved, 0K cma-reserved)
Nov 5 15:41:06.630190 kernel: devtmpfs: initialized
Nov 5 15:41:06.630198 kernel: x86/mm: Memory block size: 128MB
Nov 5 15:41:06.630204 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Nov 5 15:41:06.630210 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 15:41:06.630216 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Nov 5 15:41:06.630223 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 15:41:06.630229 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 15:41:06.630235 kernel: audit: initializing netlink subsys (disabled)
Nov 5 15:41:06.630242 kernel: audit: type=2000 audit(1762357264.313:1): state=initialized audit_enabled=0 res=1
Nov 5 15:41:06.630248 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 15:41:06.630254 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 15:41:06.630260 kernel: cpuidle: using governor menu
Nov 5 15:41:06.630266 kernel: Simple Boot Flag at 0x36 set to 0x80
Nov 5 15:41:06.630273 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 15:41:06.630279 kernel: dca service started, version 1.12.1
Nov 5 15:41:06.630286 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f]
Nov 5 15:41:06.630299 kernel: PCI: Using configuration type 1 for base access
Nov 5 15:41:06.630315 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 15:41:06.630323 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 15:41:06.630329 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 15:41:06.630336 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 15:41:06.630344 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 15:41:06.630353 kernel: ACPI: Added _OSI(Module Device)
Nov 5 15:41:06.630360 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 15:41:06.630367 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 15:41:06.630373 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 15:41:06.630384 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Nov 5 15:41:06.630390 kernel: ACPI: Interpreter enabled
Nov 5 15:41:06.630397 kernel: ACPI: PM: (supports S0 S1 S5)
Nov 5 15:41:06.630405 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 5 15:41:06.630411 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 5 15:41:06.630418 kernel: PCI: Using E820 reservations for host bridge windows
Nov 5 15:41:06.630424 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Nov 5 15:41:06.630431 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Nov 5 15:41:06.630546 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 15:41:06.630626 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Nov 5 15:41:06.630705 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Nov 5 15:41:06.630715 kernel: PCI host bridge to bus 0000:00
Nov 5 15:41:06.630784 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 5 15:41:06.630845 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Nov 5 15:41:06.630914 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 5 15:41:06.630981 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 5 15:41:06.631040 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Nov 5 15:41:06.633619 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Nov 5 15:41:06.633729 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint
Nov 5 15:41:06.633807 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge
Nov 5 15:41:06.633880 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Nov 5 15:41:06.633957 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Nov 5 15:41:06.634030 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint
Nov 5 15:41:06.634101 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f]
Nov 5 15:41:06.634986 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Nov 5 15:41:06.635065 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Nov 5 15:41:06.635135 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Nov 5 15:41:06.635205 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk
Nov 5 15:41:06.635281 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 5 15:41:06.636416 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Nov 5 15:41:06.636497 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Nov 5 15:41:06.636573 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint
Nov 5 15:41:06.636641 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf]
Nov 5 15:41:06.636708 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit]
Nov 5 15:41:06.636779 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint
Nov 5 15:41:06.636851 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f]
Nov 5 15:41:06.636917 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref]
Nov 5 15:41:06.636982 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff]
Nov 5 15:41:06.637047 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref]
Nov 5 15:41:06.637113 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 5 15:41:06.637186 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge
Nov 5 15:41:06.637251 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Nov 5 15:41:06.637935 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Nov 5 15:41:06.638014 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Nov 5 15:41:06.638084 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Nov 5 15:41:06.638158 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Nov 5 15:41:06.638229 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Nov 5 15:41:06.638298 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Nov 5 15:41:06.638382 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Nov 5 15:41:06.638450 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Nov 5 15:41:06.638521 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Nov 5 15:41:06.638589 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Nov 5 15:41:06.638659 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Nov 5 15:41:06.638725 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Nov 5 15:41:06.638792 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Nov 5 15:41:06.638857 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Nov 5 15:41:06.638928 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Nov 5 15:41:06.638995 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Nov 5 15:41:06.639064 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Nov 5 15:41:06.639130 kernel: pci 0000:00:15.2: bridge window [mem 
0xfcd00000-0xfcdfffff] Nov 5 15:41:06.639196 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 5 15:41:06.639262 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.639348 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.639421 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 5 15:41:06.639488 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 5 15:41:06.639555 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 5 15:41:06.639621 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.639692 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.639757 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 5 15:41:06.639825 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 5 15:41:06.639891 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 5 15:41:06.639957 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.640029 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.640096 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 5 15:41:06.640162 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 5 15:41:06.640231 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 5 15:41:06.640297 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.640377 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.640459 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 5 15:41:06.640525 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 5 15:41:06.640594 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 5 15:41:06.640663 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Nov 5 
15:41:06.640732 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.640799 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 5 15:41:06.640864 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 5 15:41:06.640930 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 5 15:41:06.640995 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.641070 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.641138 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 5 15:41:06.641203 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 5 15:41:06.641268 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 5 15:41:06.641343 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.641416 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.641485 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 5 15:41:06.641551 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 5 15:41:06.641616 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 5 15:41:06.641682 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 5 15:41:06.641747 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.641819 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.641886 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 5 15:41:06.641951 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 5 15:41:06.642017 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 5 15:41:06.642083 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 5 15:41:06.642148 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.642220 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe 
Root Port Nov 5 15:41:06.642286 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 5 15:41:06.642395 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 5 15:41:06.642462 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 5 15:41:06.642528 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.642598 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.642668 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 5 15:41:06.642733 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 5 15:41:06.642798 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 5 15:41:06.642863 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.642936 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.643002 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 5 15:41:06.644419 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 5 15:41:06.644491 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 5 15:41:06.644559 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.644632 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.644700 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 5 15:41:06.644767 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 5 15:41:06.644835 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 5 15:41:06.644901 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.644972 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.645040 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 5 15:41:06.645105 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 5 15:41:06.645170 kernel: pci 0000:00:16.7: 
bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 5 15:41:06.645239 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.645317 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.645385 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 5 15:41:06.645451 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 5 15:41:06.645516 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 5 15:41:06.645582 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 5 15:41:06.645655 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.645730 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.645797 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 5 15:41:06.645866 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 5 15:41:06.645930 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 5 15:41:06.645995 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 5 15:41:06.646059 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.646129 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.646197 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 5 15:41:06.646262 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 5 15:41:06.646340 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 5 15:41:06.646422 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 5 15:41:06.646490 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.646564 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.646632 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 5 15:41:06.646698 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 5 15:41:06.646763 
kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 5 15:41:06.646829 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.646899 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.646965 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 5 15:41:06.647033 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 5 15:41:06.647098 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 5 15:41:06.647163 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.647231 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.647297 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 5 15:41:06.647379 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 5 15:41:06.647447 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 5 15:41:06.647512 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.647583 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.647649 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 5 15:41:06.647715 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 5 15:41:06.647780 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 5 15:41:06.647847 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.647915 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.647980 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 5 15:41:06.648045 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 5 15:41:06.648109 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 5 15:41:06.648173 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.648246 kernel: pci 0000:00:18.0: [15ad:07a0] 
type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.649027 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 5 15:41:06.649096 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 5 15:41:06.649164 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 5 15:41:06.649230 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 5 15:41:06.649295 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.649389 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.649457 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 5 15:41:06.649523 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 5 15:41:06.649588 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 5 15:41:06.649652 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 5 15:41:06.649717 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.649794 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.649861 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 5 15:41:06.649925 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 5 15:41:06.649990 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 5 15:41:06.650056 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.650124 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.650193 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 5 15:41:06.650259 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 5 15:41:06.650333 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 5 15:41:06.650400 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.650470 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.650538 
kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 5 15:41:06.650604 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 5 15:41:06.650669 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 5 15:41:06.650733 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.650803 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.650868 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 5 15:41:06.650936 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 5 15:41:06.651001 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 5 15:41:06.651066 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.651137 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.651203 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 5 15:41:06.651268 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 5 15:41:06.651342 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 5 15:41:06.651413 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.651483 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Nov 5 15:41:06.651549 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 5 15:41:06.651613 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 5 15:41:06.651683 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 5 15:41:06.651751 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.651819 kernel: pci_bus 0000:01: extended config space not accessible Nov 5 15:41:06.651886 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 5 15:41:06.651953 kernel: pci_bus 0000:02: extended config space not accessible Nov 5 15:41:06.651963 kernel: acpiphp: Slot [32] registered Nov 5 15:41:06.651970 kernel: acpiphp: Slot 
[33] registered Nov 5 15:41:06.651977 kernel: acpiphp: Slot [34] registered Nov 5 15:41:06.651985 kernel: acpiphp: Slot [35] registered Nov 5 15:41:06.651992 kernel: acpiphp: Slot [36] registered Nov 5 15:41:06.651998 kernel: acpiphp: Slot [37] registered Nov 5 15:41:06.652005 kernel: acpiphp: Slot [38] registered Nov 5 15:41:06.652011 kernel: acpiphp: Slot [39] registered Nov 5 15:41:06.652018 kernel: acpiphp: Slot [40] registered Nov 5 15:41:06.652025 kernel: acpiphp: Slot [41] registered Nov 5 15:41:06.652032 kernel: acpiphp: Slot [42] registered Nov 5 15:41:06.652039 kernel: acpiphp: Slot [43] registered Nov 5 15:41:06.652045 kernel: acpiphp: Slot [44] registered Nov 5 15:41:06.652051 kernel: acpiphp: Slot [45] registered Nov 5 15:41:06.652058 kernel: acpiphp: Slot [46] registered Nov 5 15:41:06.652064 kernel: acpiphp: Slot [47] registered Nov 5 15:41:06.652070 kernel: acpiphp: Slot [48] registered Nov 5 15:41:06.652077 kernel: acpiphp: Slot [49] registered Nov 5 15:41:06.652084 kernel: acpiphp: Slot [50] registered Nov 5 15:41:06.652091 kernel: acpiphp: Slot [51] registered Nov 5 15:41:06.652097 kernel: acpiphp: Slot [52] registered Nov 5 15:41:06.652103 kernel: acpiphp: Slot [53] registered Nov 5 15:41:06.652110 kernel: acpiphp: Slot [54] registered Nov 5 15:41:06.652116 kernel: acpiphp: Slot [55] registered Nov 5 15:41:06.652122 kernel: acpiphp: Slot [56] registered Nov 5 15:41:06.652130 kernel: acpiphp: Slot [57] registered Nov 5 15:41:06.652136 kernel: acpiphp: Slot [58] registered Nov 5 15:41:06.652142 kernel: acpiphp: Slot [59] registered Nov 5 15:41:06.652149 kernel: acpiphp: Slot [60] registered Nov 5 15:41:06.652155 kernel: acpiphp: Slot [61] registered Nov 5 15:41:06.652161 kernel: acpiphp: Slot [62] registered Nov 5 15:41:06.652168 kernel: acpiphp: Slot [63] registered Nov 5 15:41:06.652235 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Nov 5 15:41:06.652301 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff 
window] (subtractive decode) Nov 5 15:41:06.652374 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Nov 5 15:41:06.652438 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Nov 5 15:41:06.652503 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Nov 5 15:41:06.652567 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Nov 5 15:41:06.652639 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Nov 5 15:41:06.652710 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Nov 5 15:41:06.652777 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Nov 5 15:41:06.652844 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Nov 5 15:41:06.652911 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 5 15:41:06.652978 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Nov 5 15:41:06.653047 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 5 15:41:06.653115 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 5 15:41:06.653185 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 5 15:41:06.653251 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 5 15:41:06.653330 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 5 15:41:06.653401 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 5 15:41:06.653471 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 5 15:41:06.653540 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 5 15:41:06.653613 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Nov 5 15:41:06.653681 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Nov 5 15:41:06.653748 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Nov 5 15:41:06.653813 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Nov 5 15:41:06.653881 kernel: pci 0000:0b:00.0: 
BAR 3 [io 0x5000-0x500f] Nov 5 15:41:06.656319 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Nov 5 15:41:06.656414 kernel: pci 0000:0b:00.0: supports D1 D2 Nov 5 15:41:06.656736 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 5 15:41:06.656809 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Nov 5 15:41:06.656879 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 5 15:41:06.656954 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 5 15:41:06.657025 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 5 15:41:06.657094 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 5 15:41:06.657163 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 5 15:41:06.657230 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 5 15:41:06.657298 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 5 15:41:06.658404 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 5 15:41:06.658476 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 5 15:41:06.658546 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 5 15:41:06.658614 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 5 15:41:06.658683 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 5 15:41:06.658752 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 5 15:41:06.658819 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 5 15:41:06.658890 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 5 15:41:06.658959 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 5 15:41:06.659026 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 5 15:41:06.659092 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 5 15:41:06.659162 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 5 15:41:06.659230 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 5 15:41:06.659301 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 5 15:41:06.659378 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 5 15:41:06.659445 kernel: pci 
0000:00:18.6: PCI bridge to [bus 21] Nov 5 15:41:06.659512 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 5 15:41:06.659521 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Nov 5 15:41:06.659528 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Nov 5 15:41:06.659538 kernel: ACPI: PCI: Interrupt link LNKB disabled Nov 5 15:41:06.659544 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 5 15:41:06.659551 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Nov 5 15:41:06.659558 kernel: iommu: Default domain type: Translated Nov 5 15:41:06.659564 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 5 15:41:06.659571 kernel: PCI: Using ACPI for IRQ routing Nov 5 15:41:06.659577 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 5 15:41:06.659585 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Nov 5 15:41:06.659591 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Nov 5 15:41:06.659657 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Nov 5 15:41:06.659723 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Nov 5 15:41:06.659788 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 5 15:41:06.659797 kernel: vgaarb: loaded Nov 5 15:41:06.659804 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Nov 5 15:41:06.659813 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Nov 5 15:41:06.659820 kernel: clocksource: Switched to clocksource tsc-early Nov 5 15:41:06.659827 kernel: VFS: Disk quotas dquot_6.6.0 Nov 5 15:41:06.659834 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 5 15:41:06.659840 kernel: pnp: PnP ACPI init Nov 5 15:41:06.659913 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Nov 5 15:41:06.659978 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Nov 5 15:41:06.660039 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Nov 5 
15:41:06.660107 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Nov 5 15:41:06.660171 kernel: pnp 00:06: [dma 2] Nov 5 15:41:06.660236 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Nov 5 15:41:06.660298 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Nov 5 15:41:06.663397 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Nov 5 15:41:06.663408 kernel: pnp: PnP ACPI: found 8 devices Nov 5 15:41:06.663415 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 5 15:41:06.663422 kernel: NET: Registered PF_INET protocol family Nov 5 15:41:06.663429 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 5 15:41:06.663436 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 5 15:41:06.663445 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 5 15:41:06.663451 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 5 15:41:06.663458 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 5 15:41:06.663464 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 5 15:41:06.663471 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 5 15:41:06.663477 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 5 15:41:06.663484 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 5 15:41:06.663491 kernel: NET: Registered PF_XDP protocol family Nov 5 15:41:06.663578 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Nov 5 15:41:06.663652 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 5 15:41:06.663727 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 5 15:41:06.663795 kernel: pci 0000:00:15.5: bridge 
window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 5 15:41:06.663863 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 5 15:41:06.663932 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Nov 5 15:41:06.664001 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Nov 5 15:41:06.664069 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Nov 5 15:41:06.664138 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Nov 5 15:41:06.664205 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Nov 5 15:41:06.664272 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Nov 5 15:41:06.664357 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Nov 5 15:41:06.664431 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Nov 5 15:41:06.664499 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Nov 5 15:41:06.664566 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Nov 5 15:41:06.664632 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Nov 5 15:41:06.664698 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Nov 5 15:41:06.664766 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Nov 5 15:41:06.664835 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Nov 5 15:41:06.664903 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Nov 5 15:41:06.664969 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Nov 5 15:41:06.665037 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 
21] add_size 1000 Nov 5 15:41:06.665105 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Nov 5 15:41:06.665172 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Nov 5 15:41:06.665238 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Nov 5 15:41:06.668172 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.668280 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.668362 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.668432 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.668502 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.668569 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.668642 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.668709 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.668778 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.668845 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.668912 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.668978 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.669048 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.669113 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.669179 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.669245 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.669319 kernel: pci 0000:00:16.6: 
bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.669390 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.669459 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.669528 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.669596 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.669662 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.669730 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.669796 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.669863 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.669931 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.669998 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.670063 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.670131 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.670210 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.670304 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.670397 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.670484 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.670567 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.670649 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.670734 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.670823 kernel: pci 0000:00:18.5: 
bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.670895 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.670966 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.671032 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.671100 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.671165 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.671231 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.671296 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.671381 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.671447 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.671512 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.671577 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.671642 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.671708 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.671792 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.671861 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.671952 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.672020 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.672086 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.672152 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.672218 kernel: pci 0000:00:17.6: 
bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.672286 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.672360 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.672435 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.672502 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.672569 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.672636 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.672703 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.672769 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.672835 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.672901 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.672969 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.673037 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.673103 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.673170 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.673236 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.673302 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.673411 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.673482 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.673548 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.673614 kernel: pci 0000:00:15.6: 
bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.673680 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.673748 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.673817 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.673884 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.673949 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.674017 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Nov 5 15:41:06.674083 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Nov 5 15:41:06.674155 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 5 15:41:06.674507 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Nov 5 15:41:06.674582 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 5 15:41:06.676322 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 5 15:41:06.676408 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 5 15:41:06.676485 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Nov 5 15:41:06.676556 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 5 15:41:06.676623 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 5 15:41:06.676689 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 5 15:41:06.676759 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Nov 5 15:41:06.676827 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 5 15:41:06.676893 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 5 15:41:06.676959 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 5 15:41:06.677025 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 5 15:41:06.677093 kernel: pci 0000:00:15.2: PCI 
bridge to [bus 05] Nov 5 15:41:06.677159 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 5 15:41:06.677228 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 5 15:41:06.677292 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 5 15:41:06.677376 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 5 15:41:06.677444 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 5 15:41:06.677511 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 5 15:41:06.677578 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 5 15:41:06.677644 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 5 15:41:06.677713 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 5 15:41:06.677781 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 5 15:41:06.677846 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 5 15:41:06.677912 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 5 15:41:06.678000 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 5 15:41:06.678068 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 5 15:41:06.678136 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 5 15:41:06.678213 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 5 15:41:06.678282 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 5 15:41:06.678357 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 5 15:41:06.678434 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Nov 5 15:41:06.678503 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 5 15:41:06.678572 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 5 15:41:06.678636 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 5 15:41:06.678702 kernel: pci 0000:00:16.0: bridge window 
[mem 0xc0200000-0xc03fffff 64bit pref] Nov 5 15:41:06.678768 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 5 15:41:06.678833 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 5 15:41:06.678900 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 5 15:41:06.678966 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 5 15:41:06.679036 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 5 15:41:06.679111 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 5 15:41:06.679178 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 5 15:41:06.679243 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 5 15:41:06.679321 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 5 15:41:06.679393 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 5 15:41:06.679466 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 5 15:41:06.679548 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 5 15:41:06.679616 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 5 15:41:06.679692 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 5 15:41:06.679770 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 5 15:41:06.679837 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 5 15:41:06.679904 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 5 15:41:06.679975 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 5 15:41:06.680042 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 5 15:41:06.680108 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 5 15:41:06.680176 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 5 15:41:06.680241 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 5 15:41:06.680813 kernel: pci 0000:00:16.7: bridge window 
[mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 5 15:41:06.680896 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 5 15:41:06.680966 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 5 15:41:06.681034 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 5 15:41:06.681102 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 5 15:41:06.681171 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 5 15:41:06.681246 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 5 15:41:06.681333 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 5 15:41:06.681411 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 5 15:41:06.683413 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 5 15:41:06.683494 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 5 15:41:06.683565 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 5 15:41:06.683633 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 5 15:41:06.683701 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 5 15:41:06.683767 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 5 15:41:06.683833 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 5 15:41:06.683913 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 5 15:41:06.683980 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 5 15:41:06.684056 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 5 15:41:06.684126 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 5 15:41:06.684197 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 5 15:41:06.684268 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 5 15:41:06.686544 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 5 15:41:06.686641 kernel: pci 0000:00:17.6: bridge window [mem 
0xfbb00000-0xfbbfffff] Nov 5 15:41:06.686712 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 5 15:41:06.686784 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 5 15:41:06.686857 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 5 15:41:06.686927 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 5 15:41:06.687010 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 5 15:41:06.687080 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 5 15:41:06.687147 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 5 15:41:06.687219 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 5 15:41:06.687290 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 5 15:41:06.687371 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 5 15:41:06.687440 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 5 15:41:06.687509 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 5 15:41:06.687577 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 5 15:41:06.687661 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 5 15:41:06.687730 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 5 15:41:06.687803 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 5 15:41:06.687873 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 5 15:41:06.687942 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 5 15:41:06.688010 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 5 15:41:06.688083 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 5 15:41:06.688153 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 5 15:41:06.688225 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 5 15:41:06.688292 kernel: pci 0000:00:18.5: bridge window [mem 
0xfbe00000-0xfbefffff] Nov 5 15:41:06.689928 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 5 15:41:06.690020 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 5 15:41:06.690096 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 5 15:41:06.690167 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 5 15:41:06.690245 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 5 15:41:06.690336 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 5 15:41:06.690411 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 5 15:41:06.690485 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Nov 5 15:41:06.690550 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 5 15:41:06.690612 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 5 15:41:06.690671 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Nov 5 15:41:06.690731 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Nov 5 15:41:06.690800 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Nov 5 15:41:06.690872 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Nov 5 15:41:06.690941 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 5 15:41:06.691013 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Nov 5 15:41:06.691077 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 5 15:41:06.691152 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 5 15:41:06.691228 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Nov 5 15:41:06.691290 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Nov 5 15:41:06.691375 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Nov 5 15:41:06.691453 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Nov 5 15:41:06.691520 kernel: 
pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Nov 5 15:41:06.691592 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Nov 5 15:41:06.691657 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Nov 5 15:41:06.691721 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Nov 5 15:41:06.691795 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Nov 5 15:41:06.691861 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Nov 5 15:41:06.691927 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Nov 5 15:41:06.692006 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Nov 5 15:41:06.692075 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Nov 5 15:41:06.692147 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Nov 5 15:41:06.692209 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 5 15:41:06.692281 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Nov 5 15:41:06.692362 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Nov 5 15:41:06.692440 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Nov 5 15:41:06.692504 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Nov 5 15:41:06.692570 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Nov 5 15:41:06.692635 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Nov 5 15:41:06.692716 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Nov 5 15:41:06.692782 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Nov 5 15:41:06.692848 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Nov 5 15:41:06.692919 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Nov 5 15:41:06.692981 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Nov 5 15:41:06.693042 kernel: pci_bus 0000:0c: resource 2 [mem 
0xe7700000-0xe77fffff 64bit pref] Nov 5 15:41:06.693114 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Nov 5 15:41:06.693183 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Nov 5 15:41:06.693257 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Nov 5 15:41:06.693522 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Nov 5 15:41:06.693589 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 5 15:41:06.693664 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Nov 5 15:41:06.693739 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 5 15:41:06.693810 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Nov 5 15:41:06.693871 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Nov 5 15:41:06.693940 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Nov 5 15:41:06.694001 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Nov 5 15:41:06.694082 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Nov 5 15:41:06.694160 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 5 15:41:06.694226 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Nov 5 15:41:06.694290 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Nov 5 15:41:06.694372 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 5 15:41:06.694454 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Nov 5 15:41:06.694523 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Nov 5 15:41:06.694587 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Nov 5 15:41:06.694654 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Nov 5 15:41:06.694714 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Nov 5 15:41:06.694775 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Nov 5 
15:41:06.694843 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Nov 5 15:41:06.694923 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 5 15:41:06.694996 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Nov 5 15:41:06.695057 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 5 15:41:06.695126 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Nov 5 15:41:06.695199 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Nov 5 15:41:06.695278 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Nov 5 15:41:06.695365 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Nov 5 15:41:06.695433 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Nov 5 15:41:06.695494 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 5 15:41:06.695563 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Nov 5 15:41:06.695624 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Nov 5 15:41:06.695685 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Nov 5 15:41:06.695757 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Nov 5 15:41:06.695819 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Nov 5 15:41:06.695879 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Nov 5 15:41:06.695946 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Nov 5 15:41:06.696008 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Nov 5 15:41:06.696081 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Nov 5 15:41:06.696142 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 5 15:41:06.696209 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Nov 5 15:41:06.696273 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Nov 5 15:41:06.696355 
kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Nov 5 15:41:06.696429 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Nov 5 15:41:06.696509 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Nov 5 15:41:06.696589 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Nov 5 15:41:06.696670 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Nov 5 15:41:06.696746 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 5 15:41:06.696825 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 5 15:41:06.696836 kernel: PCI: CLS 32 bytes, default 64 Nov 5 15:41:06.696843 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 5 15:41:06.696850 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 5 15:41:06.696859 kernel: clocksource: Switched to clocksource tsc Nov 5 15:41:06.696866 kernel: Initialise system trusted keyrings Nov 5 15:41:06.696873 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 5 15:41:06.696880 kernel: Key type asymmetric registered Nov 5 15:41:06.696886 kernel: Asymmetric key parser 'x509' registered Nov 5 15:41:06.696893 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 5 15:41:06.696900 kernel: io scheduler mq-deadline registered Nov 5 15:41:06.696906 kernel: io scheduler kyber registered Nov 5 15:41:06.696914 kernel: io scheduler bfq registered Nov 5 15:41:06.697022 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Nov 5 15:41:06.697093 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.697165 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Nov 5 15:41:06.697233 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ 
Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.697320 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Nov 5 15:41:06.697400 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.697477 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Nov 5 15:41:06.697546 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.697615 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Nov 5 15:41:06.697684 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.697760 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Nov 5 15:41:06.697831 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.697898 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Nov 5 15:41:06.697964 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.698032 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Nov 5 15:41:06.698103 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.698172 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Nov 5 15:41:06.698241 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.698338 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Nov 5 15:41:06.698408 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.698495 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Nov 5 15:41:06.698586 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.698675 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Nov 5 15:41:06.698770 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.698865 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Nov 5 15:41:06.698935 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.699010 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Nov 5 15:41:06.699082 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.699169 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Nov 5 15:41:06.699241 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.699411 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Nov 5 15:41:06.699490 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.699561 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Nov 5 15:41:06.699629 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.699697 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Nov 5 15:41:06.699769 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- 
PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.699843 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Nov 5 15:41:06.699914 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.699995 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Nov 5 15:41:06.700063 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.700138 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Nov 5 15:41:06.700247 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.700350 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Nov 5 15:41:06.700422 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.700502 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Nov 5 15:41:06.700590 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.700668 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Nov 5 15:41:06.700754 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.700837 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Nov 5 15:41:06.702585 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.702678 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Nov 5 15:41:06.702757 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- 
AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.702830 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Nov 5 15:41:06.702903 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.702973 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Nov 5 15:41:06.703046 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.703697 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Nov 5 15:41:06.703788 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.703880 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Nov 5 15:41:06.703962 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.704033 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Nov 5 15:41:06.704101 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.704173 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Nov 5 15:41:06.704243 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 5 15:41:06.704256 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 5 15:41:06.704264 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 5 15:41:06.704271 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 5 15:41:06.704279 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Nov 5 15:41:06.704286 kernel: serio: i8042 
KBD port at 0x60,0x64 irq 1 Nov 5 15:41:06.704293 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 5 15:41:06.705924 kernel: rtc_cmos 00:01: registered as rtc0 Nov 5 15:41:06.706018 kernel: rtc_cmos 00:01: setting system clock to 2025-11-05T15:41:05 UTC (1762357265) Nov 5 15:41:06.706113 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Nov 5 15:41:06.706128 kernel: intel_pstate: CPU model not supported Nov 5 15:41:06.706140 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 5 15:41:06.706153 kernel: NET: Registered PF_INET6 protocol family Nov 5 15:41:06.706165 kernel: Segment Routing with IPv6 Nov 5 15:41:06.706177 kernel: In-situ OAM (IOAM) with IPv6 Nov 5 15:41:06.706189 kernel: NET: Registered PF_PACKET protocol family Nov 5 15:41:06.706199 kernel: Key type dns_resolver registered Nov 5 15:41:06.706209 kernel: IPI shorthand broadcast: enabled Nov 5 15:41:06.706219 kernel: sched_clock: Marking stable (1518003429, 171707511)->(1705245096, -15534156) Nov 5 15:41:06.706226 kernel: registered taskstats version 1 Nov 5 15:41:06.706237 kernel: Loading compiled-in X.509 certificates Nov 5 15:41:06.706248 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382' Nov 5 15:41:06.706263 kernel: Demotion targets for Node 0: null Nov 5 15:41:06.706274 kernel: Key type .fscrypt registered Nov 5 15:41:06.706286 kernel: Key type fscrypt-provisioning registered Nov 5 15:41:06.706293 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 5 15:41:06.706303 kernel: ima: Allocated hash algorithm: sha1
Nov 5 15:41:06.708775 kernel: ima: No architecture policies found
Nov 5 15:41:06.708786 kernel: clk: Disabling unused clocks
Nov 5 15:41:06.708799 kernel: Freeing unused kernel image (initmem) memory: 15964K
Nov 5 15:41:06.708808 kernel: Write protecting the kernel read-only data: 40960k
Nov 5 15:41:06.708816 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 5 15:41:06.708823 kernel: Run /init as init process
Nov 5 15:41:06.708832 kernel: with arguments:
Nov 5 15:41:06.708840 kernel: /init
Nov 5 15:41:06.708849 kernel: with environment:
Nov 5 15:41:06.708855 kernel: HOME=/
Nov 5 15:41:06.708865 kernel: TERM=linux
Nov 5 15:41:06.708872 kernel: SCSI subsystem initialized
Nov 5 15:41:06.708879 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Nov 5 15:41:06.708888 kernel: vmw_pvscsi: using 64bit dma
Nov 5 15:41:06.708895 kernel: vmw_pvscsi: max_id: 16
Nov 5 15:41:06.708902 kernel: vmw_pvscsi: setting ring_pages to 8
Nov 5 15:41:06.708911 kernel: vmw_pvscsi: enabling reqCallThreshold
Nov 5 15:41:06.708920 kernel: vmw_pvscsi: driver-based request coalescing enabled
Nov 5 15:41:06.708929 kernel: vmw_pvscsi: using MSI-X
Nov 5 15:41:06.709045 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Nov 5 15:41:06.709153 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Nov 5 15:41:06.709264 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Nov 5 15:41:06.709372 kernel: sd 0:0:0:0: [sda] 25804800 512-byte logical blocks: (13.2 GB/12.3 GiB)
Nov 5 15:41:06.709465 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 5 15:41:06.709548 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Nov 5 15:41:06.709627 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Nov 5 15:41:06.709718 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Nov 5 15:41:06.709731 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 5 15:41:06.709819 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 5 15:41:06.709833 kernel: libata version 3.00 loaded.
Nov 5 15:41:06.709929 kernel: ata_piix 0000:00:07.1: version 2.13
Nov 5 15:41:06.710030 kernel: scsi host1: ata_piix
Nov 5 15:41:06.710122 kernel: scsi host2: ata_piix
Nov 5 15:41:06.710134 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0
Nov 5 15:41:06.710143 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0
Nov 5 15:41:06.710150 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Nov 5 15:41:06.715046 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Nov 5 15:41:06.715174 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Nov 5 15:41:06.715187 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 5 15:41:06.715195 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 15:41:06.715202 kernel: device-mapper: uevent: version 1.0.3
Nov 5 15:41:06.715210 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 15:41:06.715290 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 5 15:41:06.715301 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 5 15:41:06.715545 kernel: raid6: avx2x4 gen() 46880 MB/s
Nov 5 15:41:06.715555 kernel: raid6: avx2x2 gen() 53129 MB/s
Nov 5 15:41:06.715562 kernel: raid6: avx2x1 gen() 43813 MB/s
Nov 5 15:41:06.715569 kernel: raid6: using algorithm avx2x2 gen() 53129 MB/s
Nov 5 15:41:06.715576 kernel: raid6: .... xor() 29589 MB/s, rmw enabled
Nov 5 15:41:06.715585 kernel: raid6: using avx2x2 recovery algorithm
Nov 5 15:41:06.715593 kernel: xor: automatically using best checksumming function avx
Nov 5 15:41:06.715599 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 15:41:06.715607 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (196)
Nov 5 15:41:06.715617 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01
Nov 5 15:41:06.715625 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 5 15:41:06.715632 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 5 15:41:06.715641 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 15:41:06.715650 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 15:41:06.715659 kernel: loop: module loaded
Nov 5 15:41:06.715667 kernel: loop0: detected capacity change from 0 to 100120
Nov 5 15:41:06.715674 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 5 15:41:06.715683 systemd[1]: Successfully made /usr/ read-only.
Nov 5 15:41:06.715696 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 15:41:06.715705 systemd[1]: Detected virtualization vmware.
Nov 5 15:41:06.715715 systemd[1]: Detected architecture x86-64.
Nov 5 15:41:06.715724 systemd[1]: Running in initrd.
Nov 5 15:41:06.715732 systemd[1]: No hostname configured, using default hostname.
Nov 5 15:41:06.715739 systemd[1]: Hostname set to .
Nov 5 15:41:06.715750 systemd[1]: Initializing machine ID from random generator.
Nov 5 15:41:06.715757 systemd[1]: Queued start job for default target initrd.target.
Nov 5 15:41:06.715768 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 15:41:06.715780 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:41:06.715788 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:41:06.715799 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 5 15:41:06.715806 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 15:41:06.715818 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 5 15:41:06.715826 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 5 15:41:06.715833 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:41:06.715841 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:41:06.715852 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 15:41:06.715860 systemd[1]: Reached target paths.target - Path Units.
Nov 5 15:41:06.715870 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 15:41:06.715878 systemd[1]: Reached target swap.target - Swaps.
Nov 5 15:41:06.715885 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 15:41:06.715892 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 15:41:06.715899 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 15:41:06.715906 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 5 15:41:06.715913 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 5 15:41:06.715922 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:41:06.715932 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:41:06.715940 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:41:06.715951 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 15:41:06.715962 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Nov 5 15:41:06.715970 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 5 15:41:06.715979 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 15:41:06.715989 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 5 15:41:06.715997 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 5 15:41:06.716007 systemd[1]: Starting systemd-fsck-usr.service...
Nov 5 15:41:06.716014 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 15:41:06.716023 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 15:41:06.716031 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:41:06.716042 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 5 15:41:06.716053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:41:06.716081 systemd-journald[332]: Collecting audit messages is disabled.
Nov 5 15:41:06.716105 systemd[1]: Finished systemd-fsck-usr.service.
Nov 5 15:41:06.716113 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 15:41:06.716120 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 5 15:41:06.716130 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 15:41:06.716139 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 15:41:06.716149 kernel: Bridge firewalling registered
Nov 5 15:41:06.716161 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:41:06.716170 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 15:41:06.716179 systemd-journald[332]: Journal started
Nov 5 15:41:06.716196 systemd-journald[332]: Runtime Journal (/run/log/journal/36ea82e7ef394b4b985069627d54b59a) is 4.8M, max 38.5M, 33.7M free.
Nov 5 15:41:06.689497 systemd-modules-load[334]: Inserted module 'br_netfilter'
Nov 5 15:41:06.720765 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 15:41:06.720622 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:41:06.722696 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 15:41:06.730977 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Nov 5 15:41:06.731275 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:41:06.734526 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:41:06.735803 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 5 15:41:06.737369 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 15:41:06.740496 systemd-tmpfiles[357]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 5 15:41:06.743697 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
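The bridge warning above notes that arp/ip/ip6tables filtering of bridged traffic now requires the br_netfilter module (here systemd-modules-load inserted it a moment later). A minimal sketch of doing the same by hand and persisting it across boots; this assumes root privileges and a distro that honors modules-load.d(5):

```shell
# Load the module now (requires root); systemd-modules-load did the
# equivalent during this boot ("Inserted module 'br_netfilter'").
modprobe br_netfilter

# The bridge sysctls only exist once the module is loaded.
sysctl net.bridge.bridge-nf-call-iptables

# Persist across reboots via systemd-modules-load(8).
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
```

Without the module, iptables rules silently stop seeing bridged frames, which is why the kernel logs this warning and why Kubernetes setups commonly load it.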
Nov 5 15:41:06.753177 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:41:06.756465 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 5 15:41:06.778449 dracut-cmdline[380]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 ip=139.178.70.108::139.178.70.97:28::ens192:off:1.1.1.1:1.0.0.1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 15:41:06.779897 systemd-resolved[364]: Positive Trust Anchors: Nov 5 15:41:06.779904 systemd-resolved[364]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:41:06.779906 systemd-resolved[364]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:41:06.779928 systemd-resolved[364]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:41:06.807381 systemd-resolved[364]: Defaulting to hostname 'linux'. Nov 5 15:41:06.808087 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:41:06.808226 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
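The dracut-cmdline line above carries a dracut-style ip= argument. Per dracut.cmdline(7) the colon-separated fields are (to the best of my reading, so treat the field names as an assumption) client-IP, peer, gateway, netmask/prefix, hostname, interface, autoconf, then optional DNS servers. A quick way to split the argument from this log into its fields:

```shell
# The ip= value copied verbatim from the dracut-cmdline entry above.
ip_arg='139.178.70.108::139.178.70.97:28::ens192:off:1.1.1.1:1.0.0.1'

# One field per line; empty lines correspond to empty fields (peer, hostname).
printf '%s\n' "$ip_arg" | tr ':' '\n'
```

Reading it that way: client 139.178.70.108, gateway 139.178.70.97, /28 prefix, static config ("off") on ens192, DNS 1.1.1.1 and 1.0.0.1, which matches the addresses systemd-resolved and systemd-networkd report later in this log.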
Nov 5 15:41:06.864333 kernel: Loading iSCSI transport class v2.0-870.
Nov 5 15:41:06.897330 kernel: iscsi: registered transport (tcp)
Nov 5 15:41:06.926592 kernel: iscsi: registered transport (qla4xxx)
Nov 5 15:41:06.926654 kernel: QLogic iSCSI HBA Driver
Nov 5 15:41:06.954483 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 15:41:06.969405 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:41:06.970670 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 15:41:06.996594 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 5 15:41:06.997490 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 5 15:41:06.998377 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 5 15:41:07.021885 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 15:41:07.023409 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:41:07.042992 systemd-udevd[617]: Using default interface naming scheme 'v257'.
Nov 5 15:41:07.050042 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:41:07.051393 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 5 15:41:07.067061 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 15:41:07.069394 dracut-pre-trigger[695]: rd.md=0: removing MD RAID activation
Nov 5 15:41:07.069544 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 15:41:07.088745 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 15:41:07.090387 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 15:41:07.102128 systemd-networkd[736]: lo: Link UP
Nov 5 15:41:07.102135 systemd-networkd[736]: lo: Gained carrier
Nov 5 15:41:07.102487 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 15:41:07.102648 systemd[1]: Reached target network.target - Network.
Nov 5 15:41:07.173822 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:41:07.174944 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 5 15:41:07.282424 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Nov 5 15:41:07.300323 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI
Nov 5 15:41:07.307498 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Nov 5 15:41:07.315340 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Nov 5 15:41:07.316279 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Nov 5 15:41:07.317357 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 5 15:41:07.327319 kernel: cryptd: max_cpu_qlen set to 1000
Nov 5 15:41:07.330323 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Nov 5 15:41:07.336169 systemd-networkd[736]: eth0: Interface name change detected, renamed to ens192.
Nov 5 15:41:07.336431 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Nov 5 15:41:07.366051 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Nov 5 15:41:07.371820 systemd-networkd[736]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Nov 5 15:41:07.374002 (udev-worker)[766]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Nov 5 15:41:07.376066 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Nov 5 15:41:07.376215 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Nov 5 15:41:07.376793 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:41:07.376882 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:41:07.379171 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Nov 5 15:41:07.377018 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:41:07.378150 systemd-networkd[736]: ens192: Link UP
Nov 5 15:41:07.378153 systemd-networkd[736]: ens192: Gained carrier
Nov 5 15:41:07.379558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:41:07.391501 kernel: AES CTR mode by8 optimization enabled
Nov 5 15:41:07.440556 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:41:07.501794 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 5 15:41:07.502210 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 15:41:07.502373 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:41:07.502584 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 15:41:07.503444 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 5 15:41:07.520682 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 15:41:07.668627 systemd-resolved[364]: Detected conflict on linux IN A 139.178.70.108
Nov 5 15:41:07.668636 systemd-resolved[364]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Nov 5 15:41:08.441508 disk-uuid[802]: Warning: The kernel is still using the old partition table.
Nov 5 15:41:08.441508 disk-uuid[802]: The new table will be used at the next reboot or after you
Nov 5 15:41:08.441508 disk-uuid[802]: run partprobe(8) or kpartx(8)
Nov 5 15:41:08.441508 disk-uuid[802]: The operation has completed successfully.
Nov 5 15:41:08.448689 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 5 15:41:08.448787 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 5 15:41:08.449932 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 5 15:41:08.473324 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (890)
Nov 5 15:41:08.475714 kernel: BTRFS info (device sda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:41:08.475741 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 15:41:08.479399 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 5 15:41:08.479463 kernel: BTRFS info (device sda6): enabling free space tree
Nov 5 15:41:08.484452 kernel: BTRFS info (device sda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:41:08.484701 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 5 15:41:08.485970 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
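The disk-uuid warning above says the kernel is still holding the old partition table and suggests partprobe(8) or kpartx(8) as an alternative to rebooting. A minimal sketch of re-reading it and checking the result; this assumes root privileges and that /dev/sda is the PVSCSI disk seen earlier in this log:

```shell
# Ask the kernel to re-read the partition table disk-uuid just rewrote
# (requires root; /dev/sda is the disk shown earlier in this log).
partprobe /dev/sda

# Verify the kernel's view now matches the new table.
cat /proc/partitions
lsblk /dev/sda
```

In this boot the warning is benign: only the GPT disk GUID was regenerated, so the stale in-kernel copy differs only in metadata and the next reboot picks it up anyway.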
Nov 5 15:41:08.680964 ignition[909]: Ignition 2.22.0
Nov 5 15:41:08.680975 ignition[909]: Stage: fetch-offline
Nov 5 15:41:08.681002 ignition[909]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:41:08.681009 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 5 15:41:08.681064 ignition[909]: parsed url from cmdline: ""
Nov 5 15:41:08.681065 ignition[909]: no config URL provided
Nov 5 15:41:08.681069 ignition[909]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 15:41:08.681074 ignition[909]: no config at "/usr/lib/ignition/user.ign"
Nov 5 15:41:08.681487 ignition[909]: config successfully fetched
Nov 5 15:41:08.681506 ignition[909]: parsing config with SHA512: f16d6b8aacd8d17271612ec5adc164035a0bb0d38758f70a53c72ef42bad1f5df39e3651f7c1bee922f140f0b98955f0a3845bc53c17c31e0725cd12ec5cf721
Nov 5 15:41:08.685032 unknown[909]: fetched base config from "system"
Nov 5 15:41:08.685250 ignition[909]: fetch-offline: fetch-offline passed
Nov 5 15:41:08.685039 unknown[909]: fetched user config from "vmware"
Nov 5 15:41:08.685283 ignition[909]: Ignition finished successfully
Nov 5 15:41:08.686832 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 15:41:08.687211 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 5 15:41:08.687854 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 5 15:41:08.706365 ignition[915]: Ignition 2.22.0
Nov 5 15:41:08.706374 ignition[915]: Stage: kargs
Nov 5 15:41:08.706489 ignition[915]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:41:08.706496 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 5 15:41:08.706998 ignition[915]: kargs: kargs passed
Nov 5 15:41:08.707031 ignition[915]: Ignition finished successfully
Nov 5 15:41:08.708778 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 5 15:41:08.709619 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 5 15:41:08.726714 ignition[921]: Ignition 2.22.0
Nov 5 15:41:08.726726 ignition[921]: Stage: disks
Nov 5 15:41:08.726804 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:41:08.726809 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 5 15:41:08.727958 ignition[921]: disks: disks passed
Nov 5 15:41:08.728070 ignition[921]: Ignition finished successfully
Nov 5 15:41:08.729118 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 5 15:41:08.729469 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 5 15:41:08.729728 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 5 15:41:08.729984 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 15:41:08.730210 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 15:41:08.730425 systemd[1]: Reached target basic.target - Basic System.
Nov 5 15:41:08.731130 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 5 15:41:08.877950 systemd-fsck[929]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Nov 5 15:41:08.881817 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 5 15:41:08.882714 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 5 15:41:09.007326 kernel: EXT4-fs (sda9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none.
Nov 5 15:41:09.007456 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 5 15:41:09.007829 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 5 15:41:09.008998 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 15:41:09.009728 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 5 15:41:09.011559 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 5 15:41:09.011586 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 5 15:41:09.011603 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 15:41:09.019460 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 5 15:41:09.020292 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 5 15:41:09.025338 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (938)
Nov 5 15:41:09.028943 kernel: BTRFS info (device sda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:41:09.029007 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 15:41:09.034347 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 5 15:41:09.034374 kernel: BTRFS info (device sda6): enabling free space tree
Nov 5 15:41:09.035242 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 15:41:09.128000 initrd-setup-root[962]: cut: /sysroot/etc/passwd: No such file or directory
Nov 5 15:41:09.131314 initrd-setup-root[969]: cut: /sysroot/etc/group: No such file or directory
Nov 5 15:41:09.133726 initrd-setup-root[976]: cut: /sysroot/etc/shadow: No such file or directory
Nov 5 15:41:09.136283 initrd-setup-root[983]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 5 15:41:09.240437 systemd-networkd[736]: ens192: Gained IPv6LL
Nov 5 15:41:09.253400 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 5 15:41:09.254242 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 5 15:41:09.255387 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 5 15:41:09.272633 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 5 15:41:09.274344 kernel: BTRFS info (device sda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:41:09.293331 ignition[1050]: INFO : Ignition 2.22.0
Nov 5 15:41:09.293331 ignition[1050]: INFO : Stage: mount
Nov 5 15:41:09.293699 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:41:09.293699 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 5 15:41:09.294044 ignition[1050]: INFO : mount: mount passed
Nov 5 15:41:09.294153 ignition[1050]: INFO : Ignition finished successfully
Nov 5 15:41:09.294930 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 5 15:41:09.295607 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 5 15:41:09.308466 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 15:41:09.328655 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 5 15:41:09.373007 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1060)
Nov 5 15:41:09.373042 kernel: BTRFS info (device sda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:41:09.373052 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 15:41:09.376523 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 5 15:41:09.376550 kernel: BTRFS info (device sda6): enabling free space tree
Nov 5 15:41:09.378185 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 15:41:09.400733 ignition[1080]: INFO : Ignition 2.22.0
Nov 5 15:41:09.400733 ignition[1080]: INFO : Stage: files
Nov 5 15:41:09.401170 ignition[1080]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:41:09.401170 ignition[1080]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 5 15:41:09.401429 ignition[1080]: DEBUG : files: compiled without relabeling support, skipping
Nov 5 15:41:09.401832 ignition[1080]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 5 15:41:09.401832 ignition[1080]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 5 15:41:09.404773 ignition[1080]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 5 15:41:09.404961 ignition[1080]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 5 15:41:09.405114 ignition[1080]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 5 15:41:09.405015 unknown[1080]: wrote ssh authorized keys file for user: core
Nov 5 15:41:09.406924 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 5 15:41:09.407182 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 5 15:41:09.453742 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 5 15:41:09.547352 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 5 15:41:09.547352 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 15:41:09.547352 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 15:41:09.547352 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 15:41:09.547352 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 15:41:09.547352 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 15:41:09.547352 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 15:41:09.547352 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 15:41:09.548729 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 15:41:09.548729 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 15:41:09.548729 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 15:41:09.548729 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 15:41:09.550740 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 15:41:09.550961 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 15:41:09.550961 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 5 15:41:11.212082 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 5 15:41:12.603137 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 15:41:12.603539 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Nov 5 15:41:12.604392 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Nov 5 15:41:12.604392 ignition[1080]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 5 15:41:12.606009 ignition[1080]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 15:41:12.606392 ignition[1080]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 15:41:12.606392 ignition[1080]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 5 15:41:12.606392 ignition[1080]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 5 15:41:12.606823 ignition[1080]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 15:41:12.606823 ignition[1080]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 15:41:12.606823 ignition[1080]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 5 15:41:12.606823 ignition[1080]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Nov 5 15:41:12.674545 ignition[1080]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 15:41:12.676775 ignition[1080]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 15:41:12.676935 ignition[1080]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 5 15:41:12.676935 ignition[1080]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 5 15:41:12.676935 ignition[1080]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 5 15:41:12.677366 ignition[1080]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 15:41:12.677366 ignition[1080]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 15:41:12.677366 ignition[1080]: INFO : files: files passed
Nov 5 15:41:12.677366 ignition[1080]: INFO : Ignition finished successfully
Nov 5 15:41:12.678311 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 5 15:41:12.679090 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 5 15:41:12.680375 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 5 15:41:12.690653 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 5 15:41:12.690714 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 15:41:12.694835 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:41:12.694835 initrd-setup-root-after-ignition[1114]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:41:12.695761 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:41:12.696479 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 15:41:12.696856 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 5 15:41:12.697446 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 15:41:12.723606 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 15:41:12.723685 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 15:41:12.723966 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 15:41:12.724094 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 15:41:12.724426 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 15:41:12.724931 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 15:41:12.740823 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 15:41:12.741871 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 15:41:12.756338 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 15:41:12.756481 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:41:12.756953 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:41:12.757283 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 15:41:12.757566 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 15:41:12.757750 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 15:41:12.758165 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 15:41:12.758459 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 15:41:12.758699 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 15:41:12.759006 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 15:41:12.759301 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 15:41:12.759643 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 15:41:12.759951 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 15:41:12.760232 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 15:41:12.760528 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 15:41:12.760827 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 15:41:12.761100 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 15:41:12.761345 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 15:41:12.761523 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 15:41:12.761904 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:41:12.762196 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:41:12.762490 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 15:41:12.762648 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:41:12.762940 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 15:41:12.763005 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 15:41:12.763468 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 15:41:12.763542 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 15:41:12.763993 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 15:41:12.764230 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 15:41:12.764421 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:41:12.764741 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 15:41:12.764881 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 15:41:12.765247 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 15:41:12.765302 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 15:41:12.765585 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 15:41:12.765634 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 15:41:12.766035 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 15:41:12.766104 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 15:41:12.766540 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 15:41:12.766606 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 15:41:12.767608 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 15:41:12.767844 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 15:41:12.768014 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:41:12.770395 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 15:41:12.770632 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 15:41:12.770819 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:41:12.771130 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 15:41:12.771202 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:41:12.771635 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 15:41:12.771700 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 15:41:12.774236 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 15:41:12.774569 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 15:41:12.785424 ignition[1139]: INFO : Ignition 2.22.0
Nov 5 15:41:12.785424 ignition[1139]: INFO : Stage: umount
Nov 5 15:41:12.785818 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:41:12.785818 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 5 15:41:12.786290 ignition[1139]: INFO : umount: umount passed
Nov 5 15:41:12.786290 ignition[1139]: INFO : Ignition finished successfully
Nov 5 15:41:12.786925 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 15:41:12.788079 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 15:41:12.788150 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 15:41:12.788405 systemd[1]: Stopped target network.target - Network.
Nov 5 15:41:12.788510 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 15:41:12.788537 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 15:41:12.788680 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 15:41:12.788704 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 15:41:12.788831 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 15:41:12.788856 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 15:41:12.789047 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 15:41:12.789068 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 15:41:12.789254 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 15:41:12.789542 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 15:41:12.798409 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 15:41:12.798484 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 15:41:12.800104 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 15:41:12.800165 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 15:41:12.801063 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 15:41:12.801195 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 15:41:12.801214 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:41:12.801817 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 15:41:12.801905 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 15:41:12.801932 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 15:41:12.802053 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Nov 5 15:41:12.802075 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Nov 5 15:41:12.802188 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 15:41:12.802209 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:41:12.802318 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 15:41:12.802343 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:41:12.802454 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:41:12.814063 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 15:41:12.814159 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:41:12.814423 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 15:41:12.814444 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:41:12.814653 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 15:41:12.814670 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:41:12.814831 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 5 15:41:12.814858 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 15:41:12.815126 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 5 15:41:12.815151 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 5 15:41:12.815925 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 5 15:41:12.815948 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 15:41:12.816709 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 5 15:41:12.816806 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 5 15:41:12.816834 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:41:12.816950 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 5 15:41:12.816972 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:41:12.817081 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:41:12.817103 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:41:12.826599 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 5 15:41:12.826668 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 5 15:41:12.849900 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 5 15:41:12.850146 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 5 15:41:13.140659 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 15:41:13.140727 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 15:41:13.141157 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 5 15:41:13.141272 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 15:41:13.141319 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 15:41:13.141923 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 5 15:41:13.159274 systemd[1]: Switching root.
Nov 5 15:41:13.189883 systemd-journald[332]: Journal stopped
Nov 5 15:41:14.330209 systemd-journald[332]: Received SIGTERM from PID 1 (systemd).
Nov 5 15:41:14.330244 kernel: SELinux: policy capability network_peer_controls=1
Nov 5 15:41:14.330253 kernel: SELinux: policy capability open_perms=1
Nov 5 15:41:14.330260 kernel: SELinux: policy capability extended_socket_class=1
Nov 5 15:41:14.330266 kernel: SELinux: policy capability always_check_network=0
Nov 5 15:41:14.330272 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 5 15:41:14.330280 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 5 15:41:14.330287 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 5 15:41:14.330293 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 5 15:41:14.330299 kernel: SELinux: policy capability userspace_initial_context=0
Nov 5 15:41:14.331387 kernel: audit: type=1403 audit(1762357273.671:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 5 15:41:14.331409 systemd[1]: Successfully loaded SELinux policy in 52.628ms.
Nov 5 15:41:14.331421 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.727ms.
Nov 5 15:41:14.331430 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 15:41:14.331438 systemd[1]: Detected virtualization vmware.
Nov 5 15:41:14.331447 systemd[1]: Detected architecture x86-64.
Nov 5 15:41:14.331455 systemd[1]: Detected first boot.
Nov 5 15:41:14.331462 systemd[1]: Initializing machine ID from random generator.
Nov 5 15:41:14.331469 zram_generator::config[1182]: No configuration found.
Nov 5 15:41:14.331577 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Nov 5 15:41:14.331591 kernel: Guest personality initialized and is active
Nov 5 15:41:14.331598 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 5 15:41:14.331605 kernel: Initialized host personality
Nov 5 15:41:14.331612 kernel: NET: Registered PF_VSOCK protocol family
Nov 5 15:41:14.331619 systemd[1]: Populated /etc with preset unit settings.
Nov 5 15:41:14.331627 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Nov 5 15:41:14.331636 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Nov 5 15:41:14.331644 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 5 15:41:14.331651 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 5 15:41:14.331658 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 5 15:41:14.331665 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 5 15:41:14.331673 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 5 15:41:14.331681 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 5 15:41:14.331689 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 5 15:41:14.331696 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 5 15:41:14.331704 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 5 15:41:14.331711 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 5 15:41:14.331718 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 5 15:41:14.331727 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:41:14.331735 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:41:14.331744 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 5 15:41:14.331752 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 5 15:41:14.331760 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 5 15:41:14.331768 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 15:41:14.331776 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 5 15:41:14.331784 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:41:14.331792 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:41:14.331799 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 5 15:41:14.331807 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 5 15:41:14.331815 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 5 15:41:14.331823 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 5 15:41:14.331831 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:41:14.331840 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 15:41:14.331847 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 15:41:14.331854 systemd[1]: Reached target swap.target - Swaps.
Nov 5 15:41:14.331862 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 5 15:41:14.331869 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 5 15:41:14.331878 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 5 15:41:14.331886 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:41:14.331894 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:41:14.331901 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:41:14.331910 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 5 15:41:14.331918 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 5 15:41:14.331925 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 5 15:41:14.331933 systemd[1]: Mounting media.mount - External Media Directory...
Nov 5 15:41:14.331941 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:41:14.331949 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 5 15:41:14.331957 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 5 15:41:14.331966 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 5 15:41:14.331974 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 5 15:41:14.331981 systemd[1]: Reached target machines.target - Containers.
Nov 5 15:41:14.331989 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 5 15:41:14.331996 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Nov 5 15:41:14.332004 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 15:41:14.332011 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 5 15:41:14.332020 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:41:14.332035 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 15:41:14.332043 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:41:14.332051 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 5 15:41:14.332058 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:41:14.332066 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 5 15:41:14.332075 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 5 15:41:14.332083 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 5 15:41:14.332091 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 5 15:41:14.332098 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 5 15:41:14.332106 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:41:14.332114 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 15:41:14.332122 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 15:41:14.332131 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 15:41:14.332139 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 5 15:41:14.332147 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 5 15:41:14.332169 systemd-journald[1265]: Collecting audit messages is disabled.
Nov 5 15:41:14.332188 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 15:41:14.332196 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:41:14.332204 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 5 15:41:14.332212 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 5 15:41:14.332219 systemd[1]: Mounted media.mount - External Media Directory.
Nov 5 15:41:14.332227 systemd-journald[1265]: Journal started
Nov 5 15:41:14.332243 systemd-journald[1265]: Runtime Journal (/run/log/journal/d01bccf975db4ee08b25ad6235fb0c77) is 4.8M, max 38.5M, 33.7M free.
Nov 5 15:41:14.185246 systemd[1]: Queued start job for default target multi-user.target.
Nov 5 15:41:14.332996 kernel: fuse: init (API version 7.41)
Nov 5 15:41:14.196105 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 5 15:41:14.196359 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 5 15:41:14.333347 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 15:41:14.333420 jq[1252]: true
Nov 5 15:41:14.334224 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 5 15:41:14.335441 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 5 15:41:14.335610 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 5 15:41:14.336596 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:41:14.336841 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 5 15:41:14.336951 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 5 15:41:14.337183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:41:14.337279 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:41:14.338466 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:41:14.338569 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:41:14.338978 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 5 15:41:14.339078 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 5 15:41:14.339299 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:41:14.339425 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:41:14.339679 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:41:14.339927 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 5 15:41:14.346702 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:41:14.348049 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 15:41:14.357320 jq[1274]: true
Nov 5 15:41:14.354767 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 5 15:41:14.355894 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 5 15:41:14.358435 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 5 15:41:14.358571 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 5 15:41:14.358591 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 15:41:14.359270 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 5 15:41:14.364427 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:41:14.369429 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 5 15:41:14.373455 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 5 15:41:14.373618 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 15:41:14.375447 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 5 15:41:14.375600 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 15:41:14.381477 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 15:41:14.385837 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 5 15:41:14.391395 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 5 15:41:14.395491 kernel: ACPI: bus type drm_connector registered
Nov 5 15:41:14.396923 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 5 15:41:14.397407 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 5 15:41:14.401731 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 15:41:14.401877 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 15:41:14.405405 systemd-journald[1265]: Time spent on flushing to /var/log/journal/d01bccf975db4ee08b25ad6235fb0c77 is 44.185ms for 1745 entries.
Nov 5 15:41:14.405405 systemd-journald[1265]: System Journal (/var/log/journal/d01bccf975db4ee08b25ad6235fb0c77) is 8M, max 588.1M, 580.1M free.
Nov 5 15:41:14.484557 systemd-journald[1265]: Received client request to flush runtime journal.
Nov 5 15:41:14.484608 kernel: loop1: detected capacity change from 0 to 110984
Nov 5 15:41:14.475781 ignition[1316]: Ignition 2.22.0
Nov 5 15:41:14.408020 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 5 15:41:14.475977 ignition[1316]: deleting config from guestinfo properties
Nov 5 15:41:14.408459 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 5 15:41:14.411489 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 5 15:41:14.431039 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 5 15:41:14.433022 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 5 15:41:14.474514 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:41:14.480736 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 5 15:41:14.487358 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 5 15:41:14.494593 ignition[1316]: Successfully deleted config
Nov 5 15:41:14.497480 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 5 15:41:14.500043 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 15:41:14.507321 kernel: loop2: detected capacity change from 0 to 2960
Nov 5 15:41:14.505477 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 15:41:14.505825 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Nov 5 15:41:14.517070 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:41:14.525708 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 15:41:14.532338 systemd-tmpfiles[1350]: ACLs are not supported, ignoring.
Nov 5 15:41:14.532350 systemd-tmpfiles[1350]: ACLs are not supported, ignoring.
Nov 5 15:41:14.535842 kernel: loop3: detected capacity change from 0 to 229808
Nov 5 15:41:14.535604 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:41:14.559576 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 5 15:41:14.566319 kernel: loop4: detected capacity change from 0 to 128048
Nov 5 15:41:14.592320 kernel: loop5: detected capacity change from 0 to 110984
Nov 5 15:41:14.605324 kernel: loop6: detected capacity change from 0 to 2960
Nov 5 15:41:14.607063 systemd-resolved[1349]: Positive Trust Anchors:
Nov 5 15:41:14.607073 systemd-resolved[1349]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 15:41:14.607076 systemd-resolved[1349]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 15:41:14.607098 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 15:41:14.609746 systemd-resolved[1349]: Defaulting to hostname 'linux'.
Nov 5 15:41:14.610499 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 15:41:14.610652 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:41:14.617323 kernel: loop7: detected capacity change from 0 to 229808 Nov 5 15:41:14.710352 kernel: loop1: detected capacity change from 0 to 128048 Nov 5 15:41:14.718009 (sd-merge)[1364]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-vmware.raw'. Nov 5 15:41:14.720226 (sd-merge)[1364]: Merged extensions into '/usr'. Nov 5 15:41:14.722807 systemd[1]: Reload requested from client PID 1312 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 15:41:14.722816 systemd[1]: Reloading... Nov 5 15:41:14.747333 zram_generator::config[1388]: No configuration found. Nov 5 15:41:14.854598 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 5 15:41:14.902075 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 15:41:14.902376 systemd[1]: Reloading finished in 179 ms. Nov 5 15:41:14.929240 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 15:41:14.938407 systemd[1]: Starting ensure-sysext.service... Nov 5 15:41:14.939513 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:41:14.950418 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 15:41:14.952403 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:41:14.955681 systemd[1]: Reload requested from client PID 1447 ('systemctl') (unit ensure-sysext.service)... Nov 5 15:41:14.955695 systemd[1]: Reloading... Nov 5 15:41:14.971091 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Nov 5 15:41:14.971122 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 5 15:41:14.971295 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 5 15:41:14.971467 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 15:41:14.971942 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 15:41:14.972100 systemd-tmpfiles[1448]: ACLs are not supported, ignoring. Nov 5 15:41:14.972138 systemd-tmpfiles[1448]: ACLs are not supported, ignoring. Nov 5 15:41:14.977965 systemd-tmpfiles[1448]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 15:41:14.977971 systemd-tmpfiles[1448]: Skipping /boot Nov 5 15:41:14.983495 systemd-tmpfiles[1448]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 15:41:14.983502 systemd-tmpfiles[1448]: Skipping /boot Nov 5 15:41:14.989079 systemd-udevd[1451]: Using default interface naming scheme 'v257'. Nov 5 15:41:15.012333 zram_generator::config[1479]: No configuration found. Nov 5 15:41:15.098799 kernel: mousedev: PS/2 mouse device common for all mice Nov 5 15:41:15.108410 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 5 15:41:15.113327 kernel: ACPI: button: Power Button [PWRF] Nov 5 15:41:15.137675 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 5 15:41:15.191408 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Nov 5 15:41:15.202653 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 5 15:41:15.202962 systemd[1]: Reloading finished in 247 ms. 
Nov 5 15:41:15.210522 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:41:15.225423 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:41:15.239350 systemd[1]: Finished ensure-sysext.service. Nov 5 15:41:15.247020 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:41:15.249767 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:41:15.251108 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 15:41:15.255156 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 15:41:15.259497 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:41:15.262032 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 15:41:15.266597 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:41:15.269091 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:41:15.269467 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:41:15.269496 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:41:15.276235 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 15:41:15.278392 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:41:15.287957 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 5 15:41:15.292497 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Nov 5 15:41:15.292623 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:41:15.293176 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:41:15.294366 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:41:15.294607 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:41:15.294723 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:41:15.294925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:41:15.295023 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:41:15.295227 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:41:15.296547 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:41:15.297627 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:41:15.297672 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:41:15.324505 (udev-worker)[1493]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Nov 5 15:41:15.334814 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 15:41:15.342075 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 15:41:15.348724 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 5 15:41:15.350488 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 15:41:15.359533 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 5 15:41:15.370580 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 15:41:15.394894 augenrules[1625]: No rules Nov 5 15:41:15.397340 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:41:15.397492 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:41:15.438878 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 15:41:15.439351 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 15:41:15.493665 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 5 15:41:15.496660 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 15:41:15.497408 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:41:15.502400 systemd-networkd[1585]: lo: Link UP Nov 5 15:41:15.502550 systemd-networkd[1585]: lo: Gained carrier Nov 5 15:41:15.503669 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:41:15.503826 systemd[1]: Reached target network.target - Network. Nov 5 15:41:15.507100 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 5 15:41:15.507257 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 5 15:41:15.504712 systemd-networkd[1585]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Nov 5 15:41:15.507701 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 15:41:15.510384 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Nov 5 15:41:15.510928 systemd-networkd[1585]: ens192: Link UP Nov 5 15:41:15.511103 systemd-networkd[1585]: ens192: Gained carrier Nov 5 15:41:15.514154 systemd-timesyncd[1589]: Network configuration changed, trying to establish connection. Nov 5 15:42:54.696693 systemd-timesyncd[1589]: Contacted time server 45.79.82.45:123 (1.flatcar.pool.ntp.org). Nov 5 15:42:54.696828 systemd-timesyncd[1589]: Initial clock synchronization to Wed 2025-11-05 15:42:54.696609 UTC. Nov 5 15:42:54.697013 systemd-resolved[1349]: Clock change detected. Flushing caches. Nov 5 15:42:54.715819 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 15:42:55.075148 ldconfig[1573]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 15:42:55.111220 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 15:42:55.112419 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 15:42:55.131720 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 15:42:55.131977 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:42:55.132144 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 15:42:55.132264 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 15:42:55.132387 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 5 15:42:55.132574 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 15:42:55.132730 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 15:42:55.132840 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Nov 5 15:42:55.132956 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 15:42:55.132986 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:42:55.133074 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:42:55.135355 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 15:42:55.136592 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 15:42:55.138219 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 15:42:55.138437 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 15:42:55.138550 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 15:42:55.140023 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 15:42:55.140330 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 15:42:55.140825 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 15:42:55.141385 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:42:55.141480 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:42:55.141610 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:42:55.141628 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:42:55.142436 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 15:42:55.145060 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 15:42:55.148420 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 15:42:55.152013 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Nov 5 15:42:55.152939 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 15:42:55.153046 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 15:42:55.155102 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 5 15:42:55.164667 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 15:42:55.166018 jq[1650]: false Nov 5 15:42:55.166234 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 15:42:55.168701 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 15:42:55.174691 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 15:42:55.181988 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 15:42:55.182118 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 15:42:55.182580 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 15:42:55.184512 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 15:42:55.185554 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 15:42:55.185985 oslogin_cache_refresh[1652]: Refreshing passwd entry cache Nov 5 15:42:55.188269 google_oslogin_nss_cache[1652]: oslogin_cache_refresh[1652]: Refreshing passwd entry cache Nov 5 15:42:55.189084 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... 
Nov 5 15:42:55.193342 google_oslogin_nss_cache[1652]: oslogin_cache_refresh[1652]: Failure getting users, quitting Nov 5 15:42:55.193342 google_oslogin_nss_cache[1652]: oslogin_cache_refresh[1652]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 15:42:55.193342 google_oslogin_nss_cache[1652]: oslogin_cache_refresh[1652]: Refreshing group entry cache Nov 5 15:42:55.193237 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 15:42:55.193062 oslogin_cache_refresh[1652]: Failure getting users, quitting Nov 5 15:42:55.193073 oslogin_cache_refresh[1652]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 15:42:55.193485 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 15:42:55.193098 oslogin_cache_refresh[1652]: Refreshing group entry cache Nov 5 15:42:55.193606 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 15:42:55.195568 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 15:42:55.196696 extend-filesystems[1651]: Found /dev/sda6 Nov 5 15:42:55.197615 google_oslogin_nss_cache[1652]: oslogin_cache_refresh[1652]: Failure getting groups, quitting Nov 5 15:42:55.197615 google_oslogin_nss_cache[1652]: oslogin_cache_refresh[1652]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:42:55.197214 oslogin_cache_refresh[1652]: Failure getting groups, quitting Nov 5 15:42:55.197219 oslogin_cache_refresh[1652]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:42:55.199235 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 15:42:55.199605 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 5 15:42:55.199732 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Nov 5 15:42:55.201208 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 15:42:55.201340 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 15:42:55.211833 extend-filesystems[1651]: Found /dev/sda9 Nov 5 15:42:55.214936 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Nov 5 15:42:55.217786 jq[1665]: true Nov 5 15:42:55.219066 update_engine[1663]: I20251105 15:42:55.216938 1663 main.cc:92] Flatcar Update Engine starting Nov 5 15:42:55.222350 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Nov 5 15:42:55.222708 tar[1673]: linux-amd64/LICENSE Nov 5 15:42:55.223581 extend-filesystems[1651]: Checking size of /dev/sda9 Nov 5 15:42:55.223803 tar[1673]: linux-amd64/helm Nov 5 15:42:55.231938 (ntainerd)[1687]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 15:42:55.234985 jq[1689]: true Nov 5 15:42:55.248619 extend-filesystems[1651]: Resized partition /dev/sda9 Nov 5 15:42:55.257438 unknown[1685]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Nov 5 15:42:55.259573 unknown[1685]: Core dump limit set to -1 Nov 5 15:42:55.261203 extend-filesystems[1704]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 15:42:55.262785 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Nov 5 15:42:55.279163 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 1635323 blocks Nov 5 15:42:55.279204 kernel: EXT4-fs (sda9): resized filesystem to 1635323 Nov 5 15:42:55.291529 extend-filesystems[1704]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 5 15:42:55.291529 extend-filesystems[1704]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 5 15:42:55.291529 extend-filesystems[1704]: The filesystem on /dev/sda9 is now 1635323 (4k) blocks long. 
Nov 5 15:42:55.292611 extend-filesystems[1651]: Resized filesystem in /dev/sda9 Nov 5 15:42:55.293401 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 15:42:55.293565 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 15:42:55.297207 dbus-daemon[1648]: [system] SELinux support is enabled Nov 5 15:42:55.297860 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 15:42:55.299576 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 15:42:55.299595 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 15:42:55.299728 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 15:42:55.299741 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 15:42:55.309615 systemd[1]: Started update-engine.service - Update Engine. Nov 5 15:42:55.311911 update_engine[1663]: I20251105 15:42:55.311791 1663 update_check_scheduler.cc:74] Next update check in 9m47s Nov 5 15:42:55.314116 systemd-logind[1661]: Watching system buttons on /dev/input/event2 (Power Button) Nov 5 15:42:55.317485 systemd-logind[1661]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 5 15:42:55.318698 systemd-logind[1661]: New seat seat0. Nov 5 15:42:55.332943 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 15:42:55.333213 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 15:42:55.344775 bash[1725]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:42:55.349542 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Nov 5 15:42:55.350257 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 5 15:42:55.365913 sshd_keygen[1682]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 15:42:55.386148 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 15:42:55.388410 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 15:42:55.420272 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 15:42:55.420588 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 15:42:55.429899 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 15:42:55.454166 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 15:42:55.455338 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 15:42:55.457156 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 15:42:55.457368 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 15:42:55.522131 locksmithd[1724]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 15:42:55.589745 containerd[1687]: time="2025-11-05T15:42:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 15:42:55.591792 containerd[1687]: time="2025-11-05T15:42:55.590556367Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 15:42:55.600019 containerd[1687]: time="2025-11-05T15:42:55.599995389Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="5.935µs" Nov 5 15:42:55.600019 containerd[1687]: time="2025-11-05T15:42:55.600013918Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 15:42:55.600088 containerd[1687]: 
time="2025-11-05T15:42:55.600025256Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 15:42:55.600135 containerd[1687]: time="2025-11-05T15:42:55.600119707Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 15:42:55.600155 containerd[1687]: time="2025-11-05T15:42:55.600134111Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 15:42:55.600155 containerd[1687]: time="2025-11-05T15:42:55.600150942Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:42:55.600195 containerd[1687]: time="2025-11-05T15:42:55.600185564Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:42:55.600195 containerd[1687]: time="2025-11-05T15:42:55.600193040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:42:55.600316 containerd[1687]: time="2025-11-05T15:42:55.600299885Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:42:55.600316 containerd[1687]: time="2025-11-05T15:42:55.600311181Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:42:55.600348 containerd[1687]: time="2025-11-05T15:42:55.600317989Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:42:55.600348 containerd[1687]: time="2025-11-05T15:42:55.600322500Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 15:42:55.600381 containerd[1687]: time="2025-11-05T15:42:55.600362653Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 15:42:55.600487 containerd[1687]: time="2025-11-05T15:42:55.600472378Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:42:55.600506 containerd[1687]: time="2025-11-05T15:42:55.600491681Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:42:55.600506 containerd[1687]: time="2025-11-05T15:42:55.600497756Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 15:42:55.600542 containerd[1687]: time="2025-11-05T15:42:55.600514352Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 15:42:55.600638 containerd[1687]: time="2025-11-05T15:42:55.600622957Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 15:42:55.600662 containerd[1687]: time="2025-11-05T15:42:55.600654449Z" level=info msg="metadata content store policy set" policy=shared Nov 5 15:42:55.603255 containerd[1687]: time="2025-11-05T15:42:55.603236258Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 15:42:55.603294 containerd[1687]: time="2025-11-05T15:42:55.603262769Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 15:42:55.603294 containerd[1687]: time="2025-11-05T15:42:55.603271231Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 15:42:55.603294 containerd[1687]: time="2025-11-05T15:42:55.603277828Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 15:42:55.603294 containerd[1687]: time="2025-11-05T15:42:55.603284610Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 15:42:55.603294 containerd[1687]: time="2025-11-05T15:42:55.603290368Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 15:42:55.603360 containerd[1687]: time="2025-11-05T15:42:55.603298068Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 15:42:55.603360 containerd[1687]: time="2025-11-05T15:42:55.603304800Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 15:42:55.603360 containerd[1687]: time="2025-11-05T15:42:55.603310546Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 15:42:55.603360 containerd[1687]: time="2025-11-05T15:42:55.603315612Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 15:42:55.603360 containerd[1687]: time="2025-11-05T15:42:55.603320169Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 15:42:55.603360 containerd[1687]: time="2025-11-05T15:42:55.603327485Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 15:42:55.603435 containerd[1687]: time="2025-11-05T15:42:55.603384890Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 15:42:55.603435 containerd[1687]: time="2025-11-05T15:42:55.603397244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 15:42:55.603615 containerd[1687]: time="2025-11-05T15:42:55.603540114Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 15:42:55.603615 containerd[1687]: time="2025-11-05T15:42:55.603558729Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 15:42:55.603615 containerd[1687]: time="2025-11-05T15:42:55.603571549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 15:42:55.603615 containerd[1687]: time="2025-11-05T15:42:55.603580613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 15:42:55.603615 containerd[1687]: time="2025-11-05T15:42:55.603587854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 15:42:55.603615 containerd[1687]: time="2025-11-05T15:42:55.603611466Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 15:42:55.603708 containerd[1687]: time="2025-11-05T15:42:55.603621274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 15:42:55.603708 containerd[1687]: time="2025-11-05T15:42:55.603629496Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 15:42:55.603708 containerd[1687]: time="2025-11-05T15:42:55.603636158Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 15:42:55.603708 containerd[1687]: time="2025-11-05T15:42:55.603675735Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 15:42:55.603708 containerd[1687]: time="2025-11-05T15:42:55.603690537Z" level=info msg="Start snapshots syncer" Nov 5 15:42:55.603774 containerd[1687]: time="2025-11-05T15:42:55.603709505Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 15:42:55.603960 
containerd[1687]: time="2025-11-05T15:42:55.603910981Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 5 15:42:55.604032 containerd[1687]: time="2025-11-05T15:42:55.603958946Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 5 15:42:55.604446 containerd[1687]: time="2025-11-05T15:42:55.604264469Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 5 15:42:55.604446 containerd[1687]: time="2025-11-05T15:42:55.604336110Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 5 15:42:55.604446 containerd[1687]: time="2025-11-05T15:42:55.604355181Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 5 15:42:55.604446 containerd[1687]: time="2025-11-05T15:42:55.604362920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 5 15:42:55.604446 containerd[1687]: time="2025-11-05T15:42:55.604372724Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 5 15:42:55.604446 containerd[1687]: time="2025-11-05T15:42:55.604382768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 5 15:42:55.604446 containerd[1687]: time="2025-11-05T15:42:55.604391458Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 5 15:42:55.604446 containerd[1687]: time="2025-11-05T15:42:55.604398479Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 5 15:42:55.604446 containerd[1687]: time="2025-11-05T15:42:55.604415324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 5 15:42:55.604446 containerd[1687]: time="2025-11-05T15:42:55.604424671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 5 15:42:55.604446 containerd[1687]: time="2025-11-05T15:42:55.604433761Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 5 15:42:55.604893 containerd[1687]: time="2025-11-05T15:42:55.604883166Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 15:42:55.604935 containerd[1687]: time="2025-11-05T15:42:55.604924515Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 15:42:55.605347 containerd[1687]: time="2025-11-05T15:42:55.605014031Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 15:42:55.605347 containerd[1687]: time="2025-11-05T15:42:55.605029944Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 15:42:55.605347 containerd[1687]: time="2025-11-05T15:42:55.605037795Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 5 15:42:55.605347 containerd[1687]: time="2025-11-05T15:42:55.605044383Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 5 15:42:55.605347 containerd[1687]: time="2025-11-05T15:42:55.605056263Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 5 15:42:55.605347 containerd[1687]: time="2025-11-05T15:42:55.605073205Z" level=info msg="runtime interface created"
Nov 5 15:42:55.605347 containerd[1687]: time="2025-11-05T15:42:55.605076926Z" level=info msg="created NRI interface"
Nov 5 15:42:55.605347 containerd[1687]: time="2025-11-05T15:42:55.605081945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 5 15:42:55.605347 containerd[1687]: time="2025-11-05T15:42:55.605091491Z" level=info msg="Connect containerd service"
Nov 5 15:42:55.605347 containerd[1687]: time="2025-11-05T15:42:55.605110940Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 5 15:42:55.605923 containerd[1687]: time="2025-11-05T15:42:55.605911393Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 5 15:42:55.646404 tar[1673]: linux-amd64/README.md
Nov 5 15:42:55.655313 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 5 15:42:55.740414 containerd[1687]: time="2025-11-05T15:42:55.740387905Z" level=info msg="Start subscribing containerd event"
Nov 5 15:42:55.740487 containerd[1687]: time="2025-11-05T15:42:55.740418790Z" level=info msg="Start recovering state"
Nov 5 15:42:55.740487 containerd[1687]: time="2025-11-05T15:42:55.740471556Z" level=info msg="Start event monitor"
Nov 5 15:42:55.740487 containerd[1687]: time="2025-11-05T15:42:55.740478837Z" level=info msg="Start cni network conf syncer for default"
Nov 5 15:42:55.740487 containerd[1687]: time="2025-11-05T15:42:55.740482638Z" level=info msg="Start streaming server"
Nov 5 15:42:55.740558 containerd[1687]: time="2025-11-05T15:42:55.740491098Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 5 15:42:55.740558 containerd[1687]: time="2025-11-05T15:42:55.740495200Z" level=info msg="runtime interface starting up..."
Nov 5 15:42:55.740558 containerd[1687]: time="2025-11-05T15:42:55.740498007Z" level=info msg="starting plugins..."
Nov 5 15:42:55.740558 containerd[1687]: time="2025-11-05T15:42:55.740505636Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 5 15:42:55.740719 containerd[1687]: time="2025-11-05T15:42:55.740707888Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 5 15:42:55.740754 containerd[1687]: time="2025-11-05T15:42:55.740741174Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 5 15:42:55.741006 containerd[1687]: time="2025-11-05T15:42:55.740774423Z" level=info msg="containerd successfully booted in 0.151267s"
Nov 5 15:42:55.740858 systemd[1]: Started containerd.service - containerd container runtime.
Nov 5 15:42:55.899063 systemd-networkd[1585]: ens192: Gained IPv6LL
Nov 5 15:42:55.900755 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 5 15:42:55.901387 systemd[1]: Reached target network-online.target - Network is Online.
Nov 5 15:42:55.902617 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
Nov 5 15:42:55.905084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:42:55.912066 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 5 15:42:55.936678 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 5 15:42:55.941432 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 5 15:42:55.941582 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
Nov 5 15:42:55.941910 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 5 15:42:57.091702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:42:57.092221 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 5 15:42:57.092820 systemd[1]: Startup finished in 2.361s (kernel) + 7.268s (initrd) + 4.300s (userspace) = 13.930s.
Nov 5 15:42:57.094338 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:42:57.474582 login[1751]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 5 15:42:57.476327 login[1752]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 5 15:42:57.480793 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 5 15:42:57.481408 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 5 15:42:57.491290 systemd-logind[1661]: New session 2 of user core.
Nov 5 15:42:57.494974 systemd-logind[1661]: New session 1 of user core.
Nov 5 15:42:57.497850 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 5 15:42:57.501096 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 5 15:42:57.509750 (systemd)[1864]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 5 15:42:57.511794 systemd-logind[1661]: New session c1 of user core.
Nov 5 15:42:57.594252 systemd[1864]: Queued start job for default target default.target.
Nov 5 15:42:57.600059 systemd[1864]: Created slice app.slice - User Application Slice.
Nov 5 15:42:57.600078 systemd[1864]: Reached target paths.target - Paths.
Nov 5 15:42:57.600115 systemd[1864]: Reached target timers.target - Timers.
Nov 5 15:42:57.600770 systemd[1864]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 5 15:42:57.607476 systemd[1864]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 5 15:42:57.607507 systemd[1864]: Reached target sockets.target - Sockets.
Nov 5 15:42:57.607533 systemd[1864]: Reached target basic.target - Basic System.
Nov 5 15:42:57.607555 systemd[1864]: Reached target default.target - Main User Target.
Nov 5 15:42:57.607571 systemd[1864]: Startup finished in 92ms.
Nov 5 15:42:57.607639 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 5 15:42:57.608591 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 5 15:42:57.609158 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 5 15:42:57.839426 kubelet[1853]: E1105 15:42:57.839346 1853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:42:57.840922 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:42:57.841086 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:42:57.841451 systemd[1]: kubelet.service: Consumed 642ms CPU time, 268.1M memory peak.
Nov 5 15:43:08.003648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 5 15:43:08.005259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:43:08.361640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:43:08.363905 (kubelet)[1903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:43:08.427786 kubelet[1903]: E1105 15:43:08.427743 1903 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:43:08.431110 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:43:08.431300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:43:08.431724 systemd[1]: kubelet.service: Consumed 113ms CPU time, 111.1M memory peak.
Nov 5 15:43:18.503708 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 5 15:43:18.505548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:43:18.857605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:43:18.863326 (kubelet)[1918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:43:18.898930 kubelet[1918]: E1105 15:43:18.898901 1918 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:43:18.900835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:43:18.900961 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:43:18.901368 systemd[1]: kubelet.service: Consumed 100ms CPU time, 107.7M memory peak.
Nov 5 15:43:25.368402 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 5 15:43:25.369167 systemd[1]: Started sshd@0-139.178.70.108:22-139.178.89.65:53680.service - OpenSSH per-connection server daemon (139.178.89.65:53680).
Nov 5 15:43:25.416543 sshd[1926]: Accepted publickey for core from 139.178.89.65 port 53680 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:43:25.417746 sshd-session[1926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:43:25.420529 systemd-logind[1661]: New session 3 of user core.
Nov 5 15:43:25.424040 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 5 15:43:25.481845 systemd[1]: Started sshd@1-139.178.70.108:22-139.178.89.65:53692.service - OpenSSH per-connection server daemon (139.178.89.65:53692).
Nov 5 15:43:25.515216 sshd[1932]: Accepted publickey for core from 139.178.89.65 port 53692 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:43:25.516126 sshd-session[1932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:43:25.520749 systemd-logind[1661]: New session 4 of user core.
Nov 5 15:43:25.530057 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 5 15:43:25.579583 sshd[1935]: Connection closed by 139.178.89.65 port 53692
Nov 5 15:43:25.579704 sshd-session[1932]: pam_unix(sshd:session): session closed for user core
Nov 5 15:43:25.588528 systemd[1]: sshd@1-139.178.70.108:22-139.178.89.65:53692.service: Deactivated successfully.
Nov 5 15:43:25.589770 systemd[1]: session-4.scope: Deactivated successfully.
Nov 5 15:43:25.590365 systemd-logind[1661]: Session 4 logged out. Waiting for processes to exit.
Nov 5 15:43:25.592265 systemd[1]: Started sshd@2-139.178.70.108:22-139.178.89.65:53704.service - OpenSSH per-connection server daemon (139.178.89.65:53704).
Nov 5 15:43:25.593011 systemd-logind[1661]: Removed session 4.
Nov 5 15:43:25.628687 sshd[1941]: Accepted publickey for core from 139.178.89.65 port 53704 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:43:25.629769 sshd-session[1941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:43:25.632799 systemd-logind[1661]: New session 5 of user core.
Nov 5 15:43:25.647124 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 5 15:43:25.693118 sshd[1944]: Connection closed by 139.178.89.65 port 53704
Nov 5 15:43:25.693548 sshd-session[1941]: pam_unix(sshd:session): session closed for user core
Nov 5 15:43:25.705435 systemd[1]: sshd@2-139.178.70.108:22-139.178.89.65:53704.service: Deactivated successfully.
Nov 5 15:43:25.706502 systemd[1]: session-5.scope: Deactivated successfully.
Nov 5 15:43:25.707388 systemd-logind[1661]: Session 5 logged out. Waiting for processes to exit.
Nov 5 15:43:25.708572 systemd[1]: Started sshd@3-139.178.70.108:22-139.178.89.65:53708.service - OpenSSH per-connection server daemon (139.178.89.65:53708).
Nov 5 15:43:25.710401 systemd-logind[1661]: Removed session 5.
Nov 5 15:43:25.749687 sshd[1950]: Accepted publickey for core from 139.178.89.65 port 53708 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:43:25.750377 sshd-session[1950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:43:25.753744 systemd-logind[1661]: New session 6 of user core.
Nov 5 15:43:25.759044 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 5 15:43:25.807970 sshd[1953]: Connection closed by 139.178.89.65 port 53708
Nov 5 15:43:25.808386 sshd-session[1950]: pam_unix(sshd:session): session closed for user core
Nov 5 15:43:25.823712 systemd[1]: sshd@3-139.178.70.108:22-139.178.89.65:53708.service: Deactivated successfully.
Nov 5 15:43:25.824886 systemd[1]: session-6.scope: Deactivated successfully.
Nov 5 15:43:25.827126 systemd-logind[1661]: Session 6 logged out. Waiting for processes to exit.
Nov 5 15:43:25.829153 systemd[1]: Started sshd@4-139.178.70.108:22-139.178.89.65:53710.service - OpenSSH per-connection server daemon (139.178.89.65:53710).
Nov 5 15:43:25.830352 systemd-logind[1661]: Removed session 6.
Nov 5 15:43:25.861644 sshd[1959]: Accepted publickey for core from 139.178.89.65 port 53710 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:43:25.862701 sshd-session[1959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:43:25.865476 systemd-logind[1661]: New session 7 of user core.
Nov 5 15:43:25.873223 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 5 15:43:25.932339 sudo[1963]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 5 15:43:25.932726 sudo[1963]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:43:25.943245 sudo[1963]: pam_unix(sudo:session): session closed for user root
Nov 5 15:43:25.944182 sshd[1962]: Connection closed by 139.178.89.65 port 53710
Nov 5 15:43:25.945028 sshd-session[1959]: pam_unix(sshd:session): session closed for user core
Nov 5 15:43:25.953302 systemd[1]: sshd@4-139.178.70.108:22-139.178.89.65:53710.service: Deactivated successfully.
Nov 5 15:43:25.954237 systemd[1]: session-7.scope: Deactivated successfully.
Nov 5 15:43:25.954714 systemd-logind[1661]: Session 7 logged out. Waiting for processes to exit.
Nov 5 15:43:25.956156 systemd[1]: Started sshd@5-139.178.70.108:22-139.178.89.65:53722.service - OpenSSH per-connection server daemon (139.178.89.65:53722).
Nov 5 15:43:25.956722 systemd-logind[1661]: Removed session 7.
Nov 5 15:43:25.986266 sshd[1969]: Accepted publickey for core from 139.178.89.65 port 53722 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:43:25.987036 sshd-session[1969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:43:25.989550 systemd-logind[1661]: New session 8 of user core.
Nov 5 15:43:25.999031 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 5 15:43:26.048464 sudo[1974]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 5 15:43:26.048690 sudo[1974]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:43:26.051861 sudo[1974]: pam_unix(sudo:session): session closed for user root
Nov 5 15:43:26.056096 sudo[1973]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 5 15:43:26.056246 sudo[1973]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:43:26.063355 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 15:43:26.092182 augenrules[1996]: No rules
Nov 5 15:43:26.092486 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 15:43:26.092692 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 15:43:26.093308 sudo[1973]: pam_unix(sudo:session): session closed for user root
Nov 5 15:43:26.094280 sshd[1972]: Connection closed by 139.178.89.65 port 53722
Nov 5 15:43:26.095914 sshd-session[1969]: pam_unix(sshd:session): session closed for user core
Nov 5 15:43:26.101743 systemd[1]: sshd@5-139.178.70.108:22-139.178.89.65:53722.service: Deactivated successfully.
Nov 5 15:43:26.102902 systemd[1]: session-8.scope: Deactivated successfully.
Nov 5 15:43:26.103659 systemd-logind[1661]: Session 8 logged out. Waiting for processes to exit.
Nov 5 15:43:26.105799 systemd[1]: Started sshd@6-139.178.70.108:22-139.178.89.65:39894.service - OpenSSH per-connection server daemon (139.178.89.65:39894).
Nov 5 15:43:26.107362 systemd-logind[1661]: Removed session 8.
Nov 5 15:43:26.142796 sshd[2005]: Accepted publickey for core from 139.178.89.65 port 39894 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA
Nov 5 15:43:26.143773 sshd-session[2005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:43:26.147753 systemd-logind[1661]: New session 9 of user core.
Nov 5 15:43:26.154093 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 5 15:43:26.203771 sudo[2009]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 5 15:43:26.203935 sudo[2009]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:43:26.771135 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 5 15:43:26.778229 (dockerd)[2027]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 5 15:43:27.116439 dockerd[2027]: time="2025-11-05T15:43:27.116227806Z" level=info msg="Starting up"
Nov 5 15:43:27.116751 dockerd[2027]: time="2025-11-05T15:43:27.116727682Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 5 15:43:27.122673 dockerd[2027]: time="2025-11-05T15:43:27.122648277Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 5 15:43:27.158604 dockerd[2027]: time="2025-11-05T15:43:27.158561949Z" level=info msg="Loading containers: start."
Nov 5 15:43:27.167024 kernel: Initializing XFRM netlink socket
Nov 5 15:43:27.496329 systemd-networkd[1585]: docker0: Link UP
Nov 5 15:43:27.497864 dockerd[2027]: time="2025-11-05T15:43:27.497841934Z" level=info msg="Loading containers: done."
Nov 5 15:43:27.507823 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3018646025-merged.mount: Deactivated successfully.
Nov 5 15:43:27.509388 dockerd[2027]: time="2025-11-05T15:43:27.509358175Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 5 15:43:27.509440 dockerd[2027]: time="2025-11-05T15:43:27.509421640Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 5 15:43:27.509498 dockerd[2027]: time="2025-11-05T15:43:27.509480787Z" level=info msg="Initializing buildkit"
Nov 5 15:43:27.520893 dockerd[2027]: time="2025-11-05T15:43:27.520864155Z" level=info msg="Completed buildkit initialization"
Nov 5 15:43:27.526362 dockerd[2027]: time="2025-11-05T15:43:27.526322443Z" level=info msg="Daemon has completed initialization"
Nov 5 15:43:27.526963 dockerd[2027]: time="2025-11-05T15:43:27.526465435Z" level=info msg="API listen on /run/docker.sock"
Nov 5 15:43:27.527347 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 5 15:43:29.003597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 5 15:43:29.005045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:43:29.022675 containerd[1687]: time="2025-11-05T15:43:29.022636332Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Nov 5 15:43:29.410297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:43:29.418326 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:43:29.469882 kubelet[2245]: E1105 15:43:29.469854 2245 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:43:29.471177 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:43:29.471275 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:43:29.471489 systemd[1]: kubelet.service: Consumed 118ms CPU time, 110.5M memory peak.
Nov 5 15:43:30.199461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130134432.mount: Deactivated successfully.
Nov 5 15:43:31.483041 containerd[1687]: time="2025-11-05T15:43:31.482698232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:43:31.483647 containerd[1687]: time="2025-11-05T15:43:31.483634793Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Nov 5 15:43:31.483941 containerd[1687]: time="2025-11-05T15:43:31.483901214Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:43:31.485692 containerd[1687]: time="2025-11-05T15:43:31.485670945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:43:31.486706 containerd[1687]: time="2025-11-05T15:43:31.486612099Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.463944802s"
Nov 5 15:43:31.486706 containerd[1687]: time="2025-11-05T15:43:31.486629750Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Nov 5 15:43:31.487044 containerd[1687]: time="2025-11-05T15:43:31.487012458Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Nov 5 15:43:33.283988 containerd[1687]: time="2025-11-05T15:43:33.283374171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:43:33.300875 containerd[1687]: time="2025-11-05T15:43:33.300842329Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Nov 5 15:43:33.315393 containerd[1687]: time="2025-11-05T15:43:33.315354504Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:43:33.329650 containerd[1687]: time="2025-11-05T15:43:33.329612131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:43:33.330341 containerd[1687]: time="2025-11-05T15:43:33.330224624Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.84312654s"
Nov 5 15:43:33.330341 containerd[1687]: time="2025-11-05T15:43:33.330248173Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Nov 5 15:43:33.331013 containerd[1687]: time="2025-11-05T15:43:33.330891182Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Nov 5 15:43:34.738082 containerd[1687]: time="2025-11-05T15:43:34.738047193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:43:34.738852 containerd[1687]: time="2025-11-05T15:43:34.738830525Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Nov 5 15:43:34.739225 containerd[1687]: time="2025-11-05T15:43:34.739196273Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:43:34.741331 containerd[1687]: time="2025-11-05T15:43:34.741310496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:43:34.742447 containerd[1687]: time="2025-11-05T15:43:34.742426945Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.411516683s"
Nov 5 15:43:34.742479 containerd[1687]: time="2025-11-05T15:43:34.742452893Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Nov 5 15:43:34.742890 containerd[1687]: time="2025-11-05T15:43:34.742826085Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Nov 5 15:43:36.992825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679569777.mount: Deactivated successfully.
Nov 5 15:43:37.338535 containerd[1687]: time="2025-11-05T15:43:37.338501972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:43:37.339539 containerd[1687]: time="2025-11-05T15:43:37.339521871Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Nov 5 15:43:37.340564 containerd[1687]: time="2025-11-05T15:43:37.339960217Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:43:37.341304 containerd[1687]: time="2025-11-05T15:43:37.341287814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:43:37.342354 containerd[1687]: time="2025-11-05T15:43:37.342336851Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.599452589s"
Nov 5 15:43:37.342438 containerd[1687]: time="2025-11-05T15:43:37.342425348Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Nov 5 15:43:37.342916 containerd[1687]: time="2025-11-05T15:43:37.342893732Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 5 15:43:38.000905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2768059968.mount: Deactivated successfully.
Nov 5 15:43:39.503672 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 5 15:43:39.505812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:43:40.175371 update_engine[1663]: I20251105 15:43:40.174988 1663 update_attempter.cc:509] Updating boot flags...
Nov 5 15:43:40.728132 containerd[1687]: time="2025-11-05T15:43:40.728087080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:43:40.830851 containerd[1687]: time="2025-11-05T15:43:40.830809133Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Nov 5 15:43:40.897812 containerd[1687]: time="2025-11-05T15:43:40.897765421Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:43:40.966977 containerd[1687]: time="2025-11-05T15:43:40.966797980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:43:40.967365 containerd[1687]: time="2025-11-05T15:43:40.967350842Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.624339574s"
Nov 5 15:43:40.967424 containerd[1687]: time="2025-11-05T15:43:40.967413583Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Nov 5 15:43:40.967916 containerd[1687]: time="2025-11-05T15:43:40.967815573Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 5 15:43:41.236710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:43:41.240290 (kubelet)[2411]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:43:41.485474 kubelet[2411]: E1105 15:43:41.485427 2411 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:43:41.487218 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:43:41.487394 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:43:41.487804 systemd[1]: kubelet.service: Consumed 114ms CPU time, 106M memory peak.
Nov 5 15:43:42.217247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1879223027.mount: Deactivated successfully.
Nov 5 15:43:42.286262 containerd[1687]: time="2025-11-05T15:43:42.286214761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:43:42.298590 containerd[1687]: time="2025-11-05T15:43:42.298516530Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 5 15:43:42.310236 containerd[1687]: time="2025-11-05T15:43:42.310151285Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:43:42.317635 containerd[1687]: time="2025-11-05T15:43:42.317599232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:43:42.317954 containerd[1687]: time="2025-11-05T15:43:42.317922239Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.350090204s"
Nov 5 15:43:42.317996 containerd[1687]: time="2025-11-05T15:43:42.317957023Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 5 15:43:42.318535 containerd[1687]: time="2025-11-05T15:43:42.318472509Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Nov 5 15:43:43.283617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount426401257.mount: Deactivated
successfully. Nov 5 15:43:46.390967 containerd[1687]: time="2025-11-05T15:43:46.390889619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:43:46.513878 containerd[1687]: time="2025-11-05T15:43:46.513855221Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 5 15:43:46.526381 containerd[1687]: time="2025-11-05T15:43:46.526349200Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:43:46.630055 containerd[1687]: time="2025-11-05T15:43:46.630002625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:43:46.631273 containerd[1687]: time="2025-11-05T15:43:46.631011463Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.312520229s" Nov 5 15:43:46.631273 containerd[1687]: time="2025-11-05T15:43:46.631060615Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 5 15:43:49.310170 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:43:49.310267 systemd[1]: kubelet.service: Consumed 114ms CPU time, 106M memory peak. Nov 5 15:43:49.311752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:43:49.328848 systemd[1]: Reload requested from client PID 2505 ('systemctl') (unit session-9.scope)... 
Nov 5 15:43:49.328861 systemd[1]: Reloading... Nov 5 15:43:49.419995 zram_generator::config[2555]: No configuration found. Nov 5 15:43:49.491812 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 5 15:43:49.559186 systemd[1]: Reloading finished in 230 ms. Nov 5 15:43:49.668001 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 15:43:49.668100 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 15:43:49.668391 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:43:49.669937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:43:50.029305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:43:50.037162 (kubelet)[2616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:43:50.092926 kubelet[2616]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:43:50.093160 kubelet[2616]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:43:50.093197 kubelet[2616]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 5 15:43:50.110354 kubelet[2616]: I1105 15:43:50.110334 2616 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:43:50.774722 kubelet[2616]: I1105 15:43:50.774688 2616 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 15:43:50.774722 kubelet[2616]: I1105 15:43:50.774715 2616 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:43:50.774880 kubelet[2616]: I1105 15:43:50.774867 2616 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 15:43:50.822711 kubelet[2616]: E1105 15:43:50.822674 2616 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 15:43:50.822899 kubelet[2616]: I1105 15:43:50.822775 2616 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:43:50.839608 kubelet[2616]: I1105 15:43:50.839592 2616 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:43:50.845834 kubelet[2616]: I1105 15:43:50.845775 2616 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 15:43:50.852132 kubelet[2616]: I1105 15:43:50.851975 2616 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:43:50.855075 kubelet[2616]: I1105 15:43:50.852012 2616 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:43:50.855226 kubelet[2616]: I1105 15:43:50.855218 2616 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:43:50.855270 
kubelet[2616]: I1105 15:43:50.855263 2616 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 15:43:50.856388 kubelet[2616]: I1105 15:43:50.856315 2616 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:43:50.859762 kubelet[2616]: I1105 15:43:50.859477 2616 kubelet.go:480] "Attempting to sync node with API server" Nov 5 15:43:50.859762 kubelet[2616]: I1105 15:43:50.859500 2616 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:43:50.859762 kubelet[2616]: I1105 15:43:50.859530 2616 kubelet.go:386] "Adding apiserver pod source" Nov 5 15:43:50.859762 kubelet[2616]: I1105 15:43:50.859546 2616 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:43:50.866564 kubelet[2616]: E1105 15:43:50.866539 2616 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 15:43:50.866735 kubelet[2616]: I1105 15:43:50.866723 2616 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:43:50.868203 kubelet[2616]: I1105 15:43:50.868191 2616 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 15:43:50.869966 kubelet[2616]: W1105 15:43:50.868911 2616 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 5 15:43:50.874078 kubelet[2616]: I1105 15:43:50.874058 2616 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 15:43:50.874204 kubelet[2616]: I1105 15:43:50.874196 2616 server.go:1289] "Started kubelet" Nov 5 15:43:50.875389 kubelet[2616]: E1105 15:43:50.874983 2616 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 15:43:50.876066 kubelet[2616]: I1105 15:43:50.876032 2616 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:43:50.879138 kubelet[2616]: I1105 15:43:50.878651 2616 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:43:50.879138 kubelet[2616]: I1105 15:43:50.879007 2616 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:43:50.883215 kubelet[2616]: I1105 15:43:50.882870 2616 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:43:50.885781 kubelet[2616]: E1105 15:43:50.881932 2616 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.108:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187526befa865c4e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 15:43:50.874168398 +0000 UTC m=+0.834856729,LastTimestamp:2025-11-05 15:43:50.874168398 +0000 UTC 
m=+0.834856729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 5 15:43:50.890042 kubelet[2616]: I1105 15:43:50.890022 2616 server.go:317] "Adding debug handlers to kubelet server" Nov 5 15:43:50.899959 kubelet[2616]: I1105 15:43:50.897597 2616 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 15:43:50.899959 kubelet[2616]: I1105 15:43:50.897687 2616 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:43:50.900383 kubelet[2616]: I1105 15:43:50.900366 2616 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 15:43:50.900414 kubelet[2616]: I1105 15:43:50.900411 2616 reconciler.go:26] "Reconciler: start to sync state" Nov 5 15:43:50.900988 kubelet[2616]: E1105 15:43:50.900972 2616 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 15:43:50.913518 kubelet[2616]: E1105 15:43:50.913501 2616 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 15:43:50.916523 kubelet[2616]: I1105 15:43:50.916510 2616 factory.go:223] Registration of the systemd container factory successfully Nov 5 15:43:50.916605 kubelet[2616]: I1105 15:43:50.916590 2616 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:43:50.922975 kubelet[2616]: I1105 15:43:50.922399 2616 factory.go:223] Registration of the containerd container factory successfully Nov 5 
15:43:50.922975 kubelet[2616]: E1105 15:43:50.922461 2616 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="200ms" Nov 5 15:43:50.922975 kubelet[2616]: E1105 15:43:50.922533 2616 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:43:50.940417 kubelet[2616]: I1105 15:43:50.940354 2616 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 15:43:50.940564 kubelet[2616]: I1105 15:43:50.940510 2616 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:43:50.940564 kubelet[2616]: I1105 15:43:50.940517 2616 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:43:50.940564 kubelet[2616]: I1105 15:43:50.940526 2616 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:43:50.941205 kubelet[2616]: I1105 15:43:50.941086 2616 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 5 15:43:50.941205 kubelet[2616]: I1105 15:43:50.941096 2616 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 15:43:50.941205 kubelet[2616]: I1105 15:43:50.941110 2616 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 5 15:43:50.941205 kubelet[2616]: I1105 15:43:50.941113 2616 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 15:43:50.941205 kubelet[2616]: E1105 15:43:50.941131 2616 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:43:50.941859 kubelet[2616]: E1105 15:43:50.941847 2616 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 15:43:50.950382 kubelet[2616]: I1105 15:43:50.950253 2616 policy_none.go:49] "None policy: Start" Nov 5 15:43:50.950382 kubelet[2616]: I1105 15:43:50.950268 2616 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 15:43:50.950382 kubelet[2616]: I1105 15:43:50.950276 2616 state_mem.go:35] "Initializing new in-memory state store" Nov 5 15:43:50.970547 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 15:43:50.982206 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 15:43:50.984524 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 5 15:43:50.992611 kubelet[2616]: E1105 15:43:50.992569 2616 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 15:43:50.992808 kubelet[2616]: I1105 15:43:50.992736 2616 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:43:50.992808 kubelet[2616]: I1105 15:43:50.992747 2616 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:43:50.992975 kubelet[2616]: I1105 15:43:50.992969 2616 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:43:50.993606 kubelet[2616]: E1105 15:43:50.993597 2616 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 15:43:50.993686 kubelet[2616]: E1105 15:43:50.993660 2616 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 5 15:43:51.060121 systemd[1]: Created slice kubepods-burstable-pod009f4c95888184f46ae0676d53e368f5.slice - libcontainer container kubepods-burstable-pod009f4c95888184f46ae0676d53e368f5.slice. Nov 5 15:43:51.068871 kubelet[2616]: E1105 15:43:51.068828 2616 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:43:51.073388 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. 
Nov 5 15:43:51.075642 kubelet[2616]: E1105 15:43:51.075410 2616 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:43:51.089260 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Nov 5 15:43:51.090850 kubelet[2616]: E1105 15:43:51.090723 2616 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:43:51.094613 kubelet[2616]: I1105 15:43:51.094601 2616 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:43:51.095111 kubelet[2616]: E1105 15:43:51.095092 2616 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Nov 5 15:43:51.101449 kubelet[2616]: I1105 15:43:51.101364 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/009f4c95888184f46ae0676d53e368f5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"009f4c95888184f46ae0676d53e368f5\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:43:51.123865 kubelet[2616]: E1105 15:43:51.123834 2616 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="400ms" Nov 5 15:43:51.202160 kubelet[2616]: I1105 15:43:51.202125 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/009f4c95888184f46ae0676d53e368f5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"009f4c95888184f46ae0676d53e368f5\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:43:51.202421 kubelet[2616]: I1105 15:43:51.202294 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:43:51.202421 kubelet[2616]: I1105 15:43:51.202313 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:43:51.202421 kubelet[2616]: I1105 15:43:51.202366 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:43:51.202421 kubelet[2616]: I1105 15:43:51.202379 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:43:51.202421 kubelet[2616]: I1105 15:43:51.202390 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:43:51.202695 kubelet[2616]: I1105 15:43:51.202402 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 5 15:43:51.202740 kubelet[2616]: I1105 15:43:51.202630 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/009f4c95888184f46ae0676d53e368f5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"009f4c95888184f46ae0676d53e368f5\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:43:51.296702 kubelet[2616]: I1105 15:43:51.296504 2616 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:43:51.297969 kubelet[2616]: E1105 15:43:51.296871 2616 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Nov 5 15:43:51.371889 containerd[1687]: time="2025-11-05T15:43:51.371573660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:009f4c95888184f46ae0676d53e368f5,Namespace:kube-system,Attempt:0,}" Nov 5 15:43:51.386927 containerd[1687]: time="2025-11-05T15:43:51.386809882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 5 15:43:51.404344 containerd[1687]: time="2025-11-05T15:43:51.404140238Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 5 15:43:51.478591 containerd[1687]: time="2025-11-05T15:43:51.478552738Z" level=info msg="connecting to shim e609c79d1dba74a22726dd59fa6f1f34bacc8ec7868a22977308233a4fb460df" address="unix:///run/containerd/s/c14489c9041ef715e27189262ca5aa77a9c3dd786dc1ff6990ddd191e9428274" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:43:51.478994 containerd[1687]: time="2025-11-05T15:43:51.478980966Z" level=info msg="connecting to shim e7f9cd7c599392c5843d8e6c0f06026e82331ec8a8be1ae0c2b5a9a36a1681bf" address="unix:///run/containerd/s/fe8e732b7883a967a6c7c43967daad4a2449c063af1e4ce18f05dc791f405288" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:43:51.484444 containerd[1687]: time="2025-11-05T15:43:51.484426260Z" level=info msg="connecting to shim 285b089b4a35389f9711a379c10138fc7cd2e17b25caea7274b8261466640580" address="unix:///run/containerd/s/ac70e831337e7b7f3214192580d09c7e7eb22347c5307b76cb45344ffec430d8" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:43:51.525129 kubelet[2616]: E1105 15:43:51.525107 2616 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="800ms" Nov 5 15:43:51.590209 systemd[1]: Started cri-containerd-285b089b4a35389f9711a379c10138fc7cd2e17b25caea7274b8261466640580.scope - libcontainer container 285b089b4a35389f9711a379c10138fc7cd2e17b25caea7274b8261466640580. Nov 5 15:43:51.591209 systemd[1]: Started cri-containerd-e609c79d1dba74a22726dd59fa6f1f34bacc8ec7868a22977308233a4fb460df.scope - libcontainer container e609c79d1dba74a22726dd59fa6f1f34bacc8ec7868a22977308233a4fb460df. 
Nov 5 15:43:51.592688 systemd[1]: Started cri-containerd-e7f9cd7c599392c5843d8e6c0f06026e82331ec8a8be1ae0c2b5a9a36a1681bf.scope - libcontainer container e7f9cd7c599392c5843d8e6c0f06026e82331ec8a8be1ae0c2b5a9a36a1681bf. Nov 5 15:43:51.650113 containerd[1687]: time="2025-11-05T15:43:51.650037137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:009f4c95888184f46ae0676d53e368f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e609c79d1dba74a22726dd59fa6f1f34bacc8ec7868a22977308233a4fb460df\"" Nov 5 15:43:51.655211 containerd[1687]: time="2025-11-05T15:43:51.655151096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7f9cd7c599392c5843d8e6c0f06026e82331ec8a8be1ae0c2b5a9a36a1681bf\"" Nov 5 15:43:51.656617 containerd[1687]: time="2025-11-05T15:43:51.656131183Z" level=info msg="CreateContainer within sandbox \"e609c79d1dba74a22726dd59fa6f1f34bacc8ec7868a22977308233a4fb460df\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 15:43:51.658328 containerd[1687]: time="2025-11-05T15:43:51.658309604Z" level=info msg="CreateContainer within sandbox \"e7f9cd7c599392c5843d8e6c0f06026e82331ec8a8be1ae0c2b5a9a36a1681bf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 15:43:51.663685 containerd[1687]: time="2025-11-05T15:43:51.663665423Z" level=info msg="Container 4d634ef896f0ab8a7ebdcbebe6c49a9de3f88a9560d22bbee71cf5b151608a59: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:43:51.665616 containerd[1687]: time="2025-11-05T15:43:51.665596562Z" level=info msg="Container 96c8563b76d4aeef2332cc16fd25a22e85d7f0426d1cc734f21731a67b48a2a5: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:43:51.675875 containerd[1687]: time="2025-11-05T15:43:51.675825293Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"285b089b4a35389f9711a379c10138fc7cd2e17b25caea7274b8261466640580\"" Nov 5 15:43:51.680032 containerd[1687]: time="2025-11-05T15:43:51.680010834Z" level=info msg="CreateContainer within sandbox \"285b089b4a35389f9711a379c10138fc7cd2e17b25caea7274b8261466640580\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 15:43:51.681360 containerd[1687]: time="2025-11-05T15:43:51.681340458Z" level=info msg="CreateContainer within sandbox \"e609c79d1dba74a22726dd59fa6f1f34bacc8ec7868a22977308233a4fb460df\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"96c8563b76d4aeef2332cc16fd25a22e85d7f0426d1cc734f21731a67b48a2a5\"" Nov 5 15:43:51.682327 containerd[1687]: time="2025-11-05T15:43:51.682308677Z" level=info msg="StartContainer for \"96c8563b76d4aeef2332cc16fd25a22e85d7f0426d1cc734f21731a67b48a2a5\"" Nov 5 15:43:51.682890 containerd[1687]: time="2025-11-05T15:43:51.682872694Z" level=info msg="CreateContainer within sandbox \"e7f9cd7c599392c5843d8e6c0f06026e82331ec8a8be1ae0c2b5a9a36a1681bf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4d634ef896f0ab8a7ebdcbebe6c49a9de3f88a9560d22bbee71cf5b151608a59\"" Nov 5 15:43:51.683185 containerd[1687]: time="2025-11-05T15:43:51.683163229Z" level=info msg="connecting to shim 96c8563b76d4aeef2332cc16fd25a22e85d7f0426d1cc734f21731a67b48a2a5" address="unix:///run/containerd/s/c14489c9041ef715e27189262ca5aa77a9c3dd786dc1ff6990ddd191e9428274" protocol=ttrpc version=3 Nov 5 15:43:51.683694 containerd[1687]: time="2025-11-05T15:43:51.683679895Z" level=info msg="StartContainer for \"4d634ef896f0ab8a7ebdcbebe6c49a9de3f88a9560d22bbee71cf5b151608a59\"" Nov 5 15:43:51.684340 containerd[1687]: time="2025-11-05T15:43:51.684322044Z" level=info msg="connecting to shim 
4d634ef896f0ab8a7ebdcbebe6c49a9de3f88a9560d22bbee71cf5b151608a59" address="unix:///run/containerd/s/fe8e732b7883a967a6c7c43967daad4a2449c063af1e4ce18f05dc791f405288" protocol=ttrpc version=3 Nov 5 15:43:51.688806 kubelet[2616]: E1105 15:43:51.688723 2616 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 15:43:51.689506 containerd[1687]: time="2025-11-05T15:43:51.688928972Z" level=info msg="Container 3dcf3746dd723c2f4cfbb063eec2c9bdf5691e75872eae16085582cfb255286f: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:43:51.697962 kubelet[2616]: I1105 15:43:51.697938 2616 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:43:51.698317 kubelet[2616]: E1105 15:43:51.698305 2616 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Nov 5 15:43:51.700087 systemd[1]: Started cri-containerd-4d634ef896f0ab8a7ebdcbebe6c49a9de3f88a9560d22bbee71cf5b151608a59.scope - libcontainer container 4d634ef896f0ab8a7ebdcbebe6c49a9de3f88a9560d22bbee71cf5b151608a59. Nov 5 15:43:51.703445 systemd[1]: Started cri-containerd-96c8563b76d4aeef2332cc16fd25a22e85d7f0426d1cc734f21731a67b48a2a5.scope - libcontainer container 96c8563b76d4aeef2332cc16fd25a22e85d7f0426d1cc734f21731a67b48a2a5. 
Nov 5 15:43:51.729232 containerd[1687]: time="2025-11-05T15:43:51.729204251Z" level=info msg="CreateContainer within sandbox \"285b089b4a35389f9711a379c10138fc7cd2e17b25caea7274b8261466640580\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3dcf3746dd723c2f4cfbb063eec2c9bdf5691e75872eae16085582cfb255286f\"" Nov 5 15:43:51.729779 containerd[1687]: time="2025-11-05T15:43:51.729766177Z" level=info msg="StartContainer for \"3dcf3746dd723c2f4cfbb063eec2c9bdf5691e75872eae16085582cfb255286f\"" Nov 5 15:43:51.731436 containerd[1687]: time="2025-11-05T15:43:51.731418821Z" level=info msg="connecting to shim 3dcf3746dd723c2f4cfbb063eec2c9bdf5691e75872eae16085582cfb255286f" address="unix:///run/containerd/s/ac70e831337e7b7f3214192580d09c7e7eb22347c5307b76cb45344ffec430d8" protocol=ttrpc version=3 Nov 5 15:43:51.749059 systemd[1]: Started cri-containerd-3dcf3746dd723c2f4cfbb063eec2c9bdf5691e75872eae16085582cfb255286f.scope - libcontainer container 3dcf3746dd723c2f4cfbb063eec2c9bdf5691e75872eae16085582cfb255286f. 
Nov 5 15:43:51.750920 containerd[1687]: time="2025-11-05T15:43:51.750882555Z" level=info msg="StartContainer for \"96c8563b76d4aeef2332cc16fd25a22e85d7f0426d1cc734f21731a67b48a2a5\" returns successfully" Nov 5 15:43:51.760284 containerd[1687]: time="2025-11-05T15:43:51.760256730Z" level=info msg="StartContainer for \"4d634ef896f0ab8a7ebdcbebe6c49a9de3f88a9560d22bbee71cf5b151608a59\" returns successfully" Nov 5 15:43:51.785177 kubelet[2616]: E1105 15:43:51.785156 2616 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 15:43:51.792596 containerd[1687]: time="2025-11-05T15:43:51.792573155Z" level=info msg="StartContainer for \"3dcf3746dd723c2f4cfbb063eec2c9bdf5691e75872eae16085582cfb255286f\" returns successfully" Nov 5 15:43:51.952450 kubelet[2616]: E1105 15:43:51.952378 2616 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:43:51.957027 kubelet[2616]: E1105 15:43:51.956943 2616 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:43:51.958183 kubelet[2616]: E1105 15:43:51.958169 2616 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:43:52.128617 kubelet[2616]: E1105 15:43:52.128570 2616 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 15:43:52.269957 kubelet[2616]: E1105 15:43:52.269704 2616 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 15:43:52.325665 kubelet[2616]: E1105 15:43:52.325637 2616 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="1.6s" Nov 5 15:43:52.500200 kubelet[2616]: I1105 15:43:52.500010 2616 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:43:52.500380 kubelet[2616]: E1105 15:43:52.500352 2616 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Nov 5 15:43:52.914496 kubelet[2616]: E1105 15:43:52.914469 2616 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 15:43:52.970333 kubelet[2616]: E1105 15:43:52.970227 2616 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:43:52.970753 kubelet[2616]: E1105 15:43:52.970739 2616 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:43:54.101848 kubelet[2616]: I1105 15:43:54.101824 2616 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:43:54.420095 kubelet[2616]: E1105 15:43:54.419901 2616 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 5 15:43:54.579662 kubelet[2616]: I1105 15:43:54.579468 2616 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 15:43:54.579662 kubelet[2616]: E1105 15:43:54.579493 2616 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 5 15:43:54.595314 kubelet[2616]: E1105 15:43:54.595280 2616 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 15:43:54.696058 kubelet[2616]: E1105 15:43:54.695985 2616 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 15:43:54.797072 kubelet[2616]: E1105 15:43:54.797048 2616 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 15:43:54.811476 kubelet[2616]: I1105 15:43:54.811454 2616 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 15:43:54.815636 kubelet[2616]: E1105 15:43:54.815605 2616 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 5 15:43:54.816832 kubelet[2616]: I1105 15:43:54.816809 2616 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 15:43:54.817732 kubelet[2616]: E1105 15:43:54.817714 2616 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 5 15:43:54.817732 kubelet[2616]: I1105 15:43:54.817726 2616 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 15:43:54.818579 kubelet[2616]: E1105 15:43:54.818565 2616 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 5 15:43:54.818579 kubelet[2616]: I1105 15:43:54.818574 2616 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 15:43:54.819452 kubelet[2616]: E1105 15:43:54.819439 2616 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 5 15:43:54.876090 kubelet[2616]: I1105 15:43:54.876064 2616 apiserver.go:52] "Watching apiserver" Nov 5 15:43:54.901228 kubelet[2616]: I1105 15:43:54.901019 2616 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 15:43:56.326112 systemd[1]: Reload requested from client PID 2892 ('systemctl') (unit session-9.scope)... Nov 5 15:43:56.326121 systemd[1]: Reloading... Nov 5 15:43:56.394987 zram_generator::config[2936]: No configuration found. Nov 5 15:43:56.495448 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 5 15:43:56.574042 systemd[1]: Reloading finished in 247 ms. Nov 5 15:43:56.596993 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:43:56.610138 systemd[1]: kubelet.service: Deactivated successfully. 
Nov 5 15:43:56.610281 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:43:56.610309 systemd[1]: kubelet.service: Consumed 948ms CPU time, 127.9M memory peak. Nov 5 15:43:56.612347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:43:56.873697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:43:56.884294 (kubelet)[3004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:43:56.984612 kubelet[3004]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:43:56.984836 kubelet[3004]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:43:56.984865 kubelet[3004]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 5 15:43:56.984968 kubelet[3004]: I1105 15:43:56.984940 3004 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:43:56.988249 kubelet[3004]: I1105 15:43:56.988233 3004 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 15:43:56.988249 kubelet[3004]: I1105 15:43:56.988245 3004 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:43:56.988353 kubelet[3004]: I1105 15:43:56.988342 3004 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 15:43:56.988989 kubelet[3004]: I1105 15:43:56.988976 3004 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 15:43:56.993209 kubelet[3004]: I1105 15:43:56.993198 3004 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:43:57.006081 kubelet[3004]: I1105 15:43:57.006065 3004 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:43:57.007515 kubelet[3004]: I1105 15:43:57.007500 3004 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 15:43:57.007607 kubelet[3004]: I1105 15:43:57.007592 3004 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:43:57.007689 kubelet[3004]: I1105 15:43:57.007606 3004 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:43:57.007743 kubelet[3004]: I1105 15:43:57.007694 3004 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:43:57.007743 
kubelet[3004]: I1105 15:43:57.007701 3004 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 15:43:57.008648 kubelet[3004]: I1105 15:43:57.008632 3004 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:43:57.009728 kubelet[3004]: I1105 15:43:57.009714 3004 kubelet.go:480] "Attempting to sync node with API server" Nov 5 15:43:57.009754 kubelet[3004]: I1105 15:43:57.009732 3004 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:43:57.009754 kubelet[3004]: I1105 15:43:57.009750 3004 kubelet.go:386] "Adding apiserver pod source" Nov 5 15:43:57.009786 kubelet[3004]: I1105 15:43:57.009759 3004 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:43:57.015445 kubelet[3004]: I1105 15:43:57.015400 3004 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:43:57.015668 kubelet[3004]: I1105 15:43:57.015659 3004 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 15:43:57.018611 kubelet[3004]: I1105 15:43:57.018600 3004 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 15:43:57.018650 kubelet[3004]: I1105 15:43:57.018621 3004 server.go:1289] "Started kubelet" Nov 5 15:43:57.020337 kubelet[3004]: I1105 15:43:57.020226 3004 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:43:57.020500 kubelet[3004]: I1105 15:43:57.020493 3004 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:43:57.020575 kubelet[3004]: I1105 15:43:57.020555 3004 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:43:57.020615 kubelet[3004]: I1105 15:43:57.020607 3004 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:43:57.023562 kubelet[3004]: I1105 
15:43:57.023036 3004 server.go:317] "Adding debug handlers to kubelet server" Nov 5 15:43:57.024909 kubelet[3004]: I1105 15:43:57.024505 3004 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:43:57.026923 kubelet[3004]: I1105 15:43:57.026911 3004 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 15:43:57.029718 kubelet[3004]: I1105 15:43:57.029698 3004 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 15:43:57.030004 kubelet[3004]: I1105 15:43:57.029943 3004 reconciler.go:26] "Reconciler: start to sync state" Nov 5 15:43:57.031401 kubelet[3004]: I1105 15:43:57.031391 3004 factory.go:223] Registration of the systemd container factory successfully Nov 5 15:43:57.031495 kubelet[3004]: I1105 15:43:57.031485 3004 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:43:57.032592 kubelet[3004]: E1105 15:43:57.032580 3004 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:43:57.032743 kubelet[3004]: I1105 15:43:57.032731 3004 factory.go:223] Registration of the containerd container factory successfully Nov 5 15:43:57.038679 kubelet[3004]: I1105 15:43:57.038651 3004 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 15:43:57.039249 kubelet[3004]: I1105 15:43:57.039233 3004 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 5 15:43:57.039249 kubelet[3004]: I1105 15:43:57.039244 3004 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 15:43:57.039305 kubelet[3004]: I1105 15:43:57.039257 3004 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 15:43:57.039305 kubelet[3004]: I1105 15:43:57.039261 3004 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 15:43:57.039305 kubelet[3004]: E1105 15:43:57.039281 3004 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:43:57.070180 kubelet[3004]: I1105 15:43:57.070162 3004 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:43:57.070180 kubelet[3004]: I1105 15:43:57.070173 3004 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:43:57.070180 kubelet[3004]: I1105 15:43:57.070184 3004 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:43:57.070287 kubelet[3004]: I1105 15:43:57.070256 3004 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 15:43:57.070287 kubelet[3004]: I1105 15:43:57.070273 3004 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 15:43:57.070287 kubelet[3004]: I1105 15:43:57.070287 3004 policy_none.go:49] "None policy: Start" Nov 5 15:43:57.070333 kubelet[3004]: I1105 15:43:57.070293 3004 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 15:43:57.070333 kubelet[3004]: I1105 15:43:57.070299 3004 state_mem.go:35] "Initializing new in-memory state store" Nov 5 15:43:57.070368 kubelet[3004]: I1105 15:43:57.070352 3004 state_mem.go:75] "Updated machine memory state" Nov 5 15:43:57.074477 kubelet[3004]: E1105 15:43:57.074427 3004 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 15:43:57.074522 kubelet[3004]: I1105 15:43:57.074514 
3004 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:43:57.074541 kubelet[3004]: I1105 15:43:57.074520 3004 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:43:57.074826 kubelet[3004]: I1105 15:43:57.074704 3004 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:43:57.077717 kubelet[3004]: E1105 15:43:57.077315 3004 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 15:43:57.140819 kubelet[3004]: I1105 15:43:57.140754 3004 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 15:43:57.141155 kubelet[3004]: I1105 15:43:57.141144 3004 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 15:43:57.142290 kubelet[3004]: I1105 15:43:57.142238 3004 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 15:43:57.179412 kubelet[3004]: I1105 15:43:57.179399 3004 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:43:57.183942 kubelet[3004]: I1105 15:43:57.183724 3004 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 5 15:43:57.183942 kubelet[3004]: I1105 15:43:57.183771 3004 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 15:43:57.332410 kubelet[3004]: I1105 15:43:57.332356 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/009f4c95888184f46ae0676d53e368f5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"009f4c95888184f46ae0676d53e368f5\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:43:57.332594 kubelet[3004]: I1105 15:43:57.332390 3004 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/009f4c95888184f46ae0676d53e368f5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"009f4c95888184f46ae0676d53e368f5\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:43:57.332594 kubelet[3004]: I1105 15:43:57.332571 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 5 15:43:57.332740 kubelet[3004]: I1105 15:43:57.332687 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/009f4c95888184f46ae0676d53e368f5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"009f4c95888184f46ae0676d53e368f5\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:43:57.332740 kubelet[3004]: I1105 15:43:57.332711 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:43:57.332941 kubelet[3004]: I1105 15:43:57.332726 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:43:57.332941 kubelet[3004]: I1105 15:43:57.332840 3004 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:43:57.332941 kubelet[3004]: I1105 15:43:57.332853 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:43:57.333115 kubelet[3004]: I1105 15:43:57.333092 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:43:58.013224 kubelet[3004]: I1105 15:43:58.013197 3004 apiserver.go:52] "Watching apiserver" Nov 5 15:43:58.030844 kubelet[3004]: I1105 15:43:58.030821 3004 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 15:43:58.063979 kubelet[3004]: I1105 15:43:58.063011 3004 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 15:43:58.067226 kubelet[3004]: E1105 15:43:58.067176 3004 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 5 15:43:58.083235 kubelet[3004]: I1105 15:43:58.083023 3004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.083010335 
podStartE2EDuration="1.083010335s" podCreationTimestamp="2025-11-05 15:43:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:43:58.077725878 +0000 UTC m=+1.125635961" watchObservedRunningTime="2025-11-05 15:43:58.083010335 +0000 UTC m=+1.130920408" Nov 5 15:43:58.088195 kubelet[3004]: I1105 15:43:58.088167 3004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.088157278 podStartE2EDuration="1.088157278s" podCreationTimestamp="2025-11-05 15:43:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:43:58.087366507 +0000 UTC m=+1.135276588" watchObservedRunningTime="2025-11-05 15:43:58.088157278 +0000 UTC m=+1.136067351" Nov 5 15:43:58.088360 kubelet[3004]: I1105 15:43:58.088345 3004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.088342255 podStartE2EDuration="1.088342255s" podCreationTimestamp="2025-11-05 15:43:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:43:58.083164106 +0000 UTC m=+1.131074188" watchObservedRunningTime="2025-11-05 15:43:58.088342255 +0000 UTC m=+1.136252337" Nov 5 15:44:03.310160 kubelet[3004]: I1105 15:44:03.310135 3004 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 15:44:03.310512 kubelet[3004]: I1105 15:44:03.310414 3004 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 15:44:03.310536 containerd[1687]: time="2025-11-05T15:44:03.310325670Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 5 15:44:04.074241 systemd[1]: Created slice kubepods-besteffort-podac8fd496_5dec_4819_843c_147ac9e5e803.slice - libcontainer container kubepods-besteffort-podac8fd496_5dec_4819_843c_147ac9e5e803.slice. Nov 5 15:44:04.075596 kubelet[3004]: I1105 15:44:04.075471 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac8fd496-5dec-4819-843c-147ac9e5e803-kube-proxy\") pod \"kube-proxy-cn7j6\" (UID: \"ac8fd496-5dec-4819-843c-147ac9e5e803\") " pod="kube-system/kube-proxy-cn7j6" Nov 5 15:44:04.075596 kubelet[3004]: I1105 15:44:04.075503 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac8fd496-5dec-4819-843c-147ac9e5e803-xtables-lock\") pod \"kube-proxy-cn7j6\" (UID: \"ac8fd496-5dec-4819-843c-147ac9e5e803\") " pod="kube-system/kube-proxy-cn7j6" Nov 5 15:44:04.075596 kubelet[3004]: I1105 15:44:04.075523 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac8fd496-5dec-4819-843c-147ac9e5e803-lib-modules\") pod \"kube-proxy-cn7j6\" (UID: \"ac8fd496-5dec-4819-843c-147ac9e5e803\") " pod="kube-system/kube-proxy-cn7j6" Nov 5 15:44:04.075596 kubelet[3004]: I1105 15:44:04.075540 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfrpb\" (UniqueName: \"kubernetes.io/projected/ac8fd496-5dec-4819-843c-147ac9e5e803-kube-api-access-pfrpb\") pod \"kube-proxy-cn7j6\" (UID: \"ac8fd496-5dec-4819-843c-147ac9e5e803\") " pod="kube-system/kube-proxy-cn7j6" Nov 5 15:44:04.182443 kubelet[3004]: E1105 15:44:04.182245 3004 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 5 15:44:04.182443 kubelet[3004]: E1105 15:44:04.182320 3004 projected.go:194] Error 
preparing data for projected volume kube-api-access-pfrpb for pod kube-system/kube-proxy-cn7j6: configmap "kube-root-ca.crt" not found Nov 5 15:44:04.182443 kubelet[3004]: E1105 15:44:04.182400 3004 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ac8fd496-5dec-4819-843c-147ac9e5e803-kube-api-access-pfrpb podName:ac8fd496-5dec-4819-843c-147ac9e5e803 nodeName:}" failed. No retries permitted until 2025-11-05 15:44:04.682374534 +0000 UTC m=+7.730284612 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfrpb" (UniqueName: "kubernetes.io/projected/ac8fd496-5dec-4819-843c-147ac9e5e803-kube-api-access-pfrpb") pod "kube-proxy-cn7j6" (UID: "ac8fd496-5dec-4819-843c-147ac9e5e803") : configmap "kube-root-ca.crt" not found Nov 5 15:44:04.482274 systemd[1]: Created slice kubepods-besteffort-pod748dae31_dedd_41aa_b3fa_7272b5122d70.slice - libcontainer container kubepods-besteffort-pod748dae31_dedd_41aa_b3fa_7272b5122d70.slice. Nov 5 15:44:04.577736 kubelet[3004]: I1105 15:44:04.577698 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnkgp\" (UniqueName: \"kubernetes.io/projected/748dae31-dedd-41aa-b3fa-7272b5122d70-kube-api-access-vnkgp\") pod \"tigera-operator-7dcd859c48-ng94s\" (UID: \"748dae31-dedd-41aa-b3fa-7272b5122d70\") " pod="tigera-operator/tigera-operator-7dcd859c48-ng94s" Nov 5 15:44:04.578180 kubelet[3004]: I1105 15:44:04.577844 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/748dae31-dedd-41aa-b3fa-7272b5122d70-var-lib-calico\") pod \"tigera-operator-7dcd859c48-ng94s\" (UID: \"748dae31-dedd-41aa-b3fa-7272b5122d70\") " pod="tigera-operator/tigera-operator-7dcd859c48-ng94s" Nov 5 15:44:04.787824 containerd[1687]: time="2025-11-05T15:44:04.787780795Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ng94s,Uid:748dae31-dedd-41aa-b3fa-7272b5122d70,Namespace:tigera-operator,Attempt:0,}" Nov 5 15:44:04.925505 containerd[1687]: time="2025-11-05T15:44:04.925473022Z" level=info msg="connecting to shim 8f2db66a4fd431add133047b28835f140b37fe257f6b97e8766dc4f4e7dff537" address="unix:///run/containerd/s/687b6d08797278f82e7ad7a6e212edd3125ce40e39367223aa608cb0802a40c1" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:44:04.945118 systemd[1]: Started cri-containerd-8f2db66a4fd431add133047b28835f140b37fe257f6b97e8766dc4f4e7dff537.scope - libcontainer container 8f2db66a4fd431add133047b28835f140b37fe257f6b97e8766dc4f4e7dff537. Nov 5 15:44:04.983776 containerd[1687]: time="2025-11-05T15:44:04.983751189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cn7j6,Uid:ac8fd496-5dec-4819-843c-147ac9e5e803,Namespace:kube-system,Attempt:0,}" Nov 5 15:44:04.989189 containerd[1687]: time="2025-11-05T15:44:04.989130535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ng94s,Uid:748dae31-dedd-41aa-b3fa-7272b5122d70,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8f2db66a4fd431add133047b28835f140b37fe257f6b97e8766dc4f4e7dff537\"" Nov 5 15:44:04.990518 containerd[1687]: time="2025-11-05T15:44:04.990414082Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 15:44:05.131403 containerd[1687]: time="2025-11-05T15:44:05.131240048Z" level=info msg="connecting to shim b6db904ee0ba1258d56070a55e6a4fdd549d0b196f75811e2c375ebf62123824" address="unix:///run/containerd/s/332c91da35b57382653e1c2816a49125a5df342aa1bb6473974e0335e9e352aa" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:44:05.149162 systemd[1]: Started cri-containerd-b6db904ee0ba1258d56070a55e6a4fdd549d0b196f75811e2c375ebf62123824.scope - libcontainer container b6db904ee0ba1258d56070a55e6a4fdd549d0b196f75811e2c375ebf62123824. 
Nov 5 15:44:05.184732 containerd[1687]: time="2025-11-05T15:44:05.184666231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cn7j6,Uid:ac8fd496-5dec-4819-843c-147ac9e5e803,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6db904ee0ba1258d56070a55e6a4fdd549d0b196f75811e2c375ebf62123824\"" Nov 5 15:44:05.205581 containerd[1687]: time="2025-11-05T15:44:05.205553512Z" level=info msg="CreateContainer within sandbox \"b6db904ee0ba1258d56070a55e6a4fdd549d0b196f75811e2c375ebf62123824\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 15:44:05.268843 containerd[1687]: time="2025-11-05T15:44:05.268815219Z" level=info msg="Container 1a112bbf194bbc764ad8167d214b50cbd6e3d4ebbaa39280cae0d90d4ae6d8e4: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:44:05.317820 containerd[1687]: time="2025-11-05T15:44:05.317775710Z" level=info msg="CreateContainer within sandbox \"b6db904ee0ba1258d56070a55e6a4fdd549d0b196f75811e2c375ebf62123824\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1a112bbf194bbc764ad8167d214b50cbd6e3d4ebbaa39280cae0d90d4ae6d8e4\"" Nov 5 15:44:05.318408 containerd[1687]: time="2025-11-05T15:44:05.318389883Z" level=info msg="StartContainer for \"1a112bbf194bbc764ad8167d214b50cbd6e3d4ebbaa39280cae0d90d4ae6d8e4\"" Nov 5 15:44:05.319233 containerd[1687]: time="2025-11-05T15:44:05.319217569Z" level=info msg="connecting to shim 1a112bbf194bbc764ad8167d214b50cbd6e3d4ebbaa39280cae0d90d4ae6d8e4" address="unix:///run/containerd/s/332c91da35b57382653e1c2816a49125a5df342aa1bb6473974e0335e9e352aa" protocol=ttrpc version=3 Nov 5 15:44:05.341037 systemd[1]: Started cri-containerd-1a112bbf194bbc764ad8167d214b50cbd6e3d4ebbaa39280cae0d90d4ae6d8e4.scope - libcontainer container 1a112bbf194bbc764ad8167d214b50cbd6e3d4ebbaa39280cae0d90d4ae6d8e4. 
Nov 5 15:44:05.384086 containerd[1687]: time="2025-11-05T15:44:05.383968907Z" level=info msg="StartContainer for \"1a112bbf194bbc764ad8167d214b50cbd6e3d4ebbaa39280cae0d90d4ae6d8e4\" returns successfully" Nov 5 15:44:06.102420 kubelet[3004]: I1105 15:44:06.102340 3004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cn7j6" podStartSLOduration=2.102304701 podStartE2EDuration="2.102304701s" podCreationTimestamp="2025-11-05 15:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:44:06.101761114 +0000 UTC m=+9.149671190" watchObservedRunningTime="2025-11-05 15:44:06.102304701 +0000 UTC m=+9.150214781" Nov 5 15:44:07.376598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1929030589.mount: Deactivated successfully. Nov 5 15:44:07.928535 containerd[1687]: time="2025-11-05T15:44:07.928287347Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:44:07.934912 containerd[1687]: time="2025-11-05T15:44:07.934887627Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 5 15:44:07.942815 containerd[1687]: time="2025-11-05T15:44:07.942786639Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:44:07.955689 containerd[1687]: time="2025-11-05T15:44:07.955664154Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:44:07.956431 containerd[1687]: time="2025-11-05T15:44:07.956414120Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id 
\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.965982829s" Nov 5 15:44:07.956463 containerd[1687]: time="2025-11-05T15:44:07.956434128Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 5 15:44:07.984420 containerd[1687]: time="2025-11-05T15:44:07.984379491Z" level=info msg="CreateContainer within sandbox \"8f2db66a4fd431add133047b28835f140b37fe257f6b97e8766dc4f4e7dff537\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 15:44:08.013399 containerd[1687]: time="2025-11-05T15:44:08.013050538Z" level=info msg="Container d10451409053829bb6487c67a4617b2a854b6e6a5695451f6ab04d651749a8ce: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:44:08.013592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount660559437.mount: Deactivated successfully. 
Nov 5 15:44:08.029096 containerd[1687]: time="2025-11-05T15:44:08.029001270Z" level=info msg="CreateContainer within sandbox \"8f2db66a4fd431add133047b28835f140b37fe257f6b97e8766dc4f4e7dff537\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d10451409053829bb6487c67a4617b2a854b6e6a5695451f6ab04d651749a8ce\"" Nov 5 15:44:08.030039 containerd[1687]: time="2025-11-05T15:44:08.029720823Z" level=info msg="StartContainer for \"d10451409053829bb6487c67a4617b2a854b6e6a5695451f6ab04d651749a8ce\"" Nov 5 15:44:08.031378 containerd[1687]: time="2025-11-05T15:44:08.031200421Z" level=info msg="connecting to shim d10451409053829bb6487c67a4617b2a854b6e6a5695451f6ab04d651749a8ce" address="unix:///run/containerd/s/687b6d08797278f82e7ad7a6e212edd3125ce40e39367223aa608cb0802a40c1" protocol=ttrpc version=3 Nov 5 15:44:08.049118 systemd[1]: Started cri-containerd-d10451409053829bb6487c67a4617b2a854b6e6a5695451f6ab04d651749a8ce.scope - libcontainer container d10451409053829bb6487c67a4617b2a854b6e6a5695451f6ab04d651749a8ce. Nov 5 15:44:08.074510 containerd[1687]: time="2025-11-05T15:44:08.074472357Z" level=info msg="StartContainer for \"d10451409053829bb6487c67a4617b2a854b6e6a5695451f6ab04d651749a8ce\" returns successfully" Nov 5 15:44:14.414206 sudo[2009]: pam_unix(sudo:session): session closed for user root Nov 5 15:44:14.426372 sshd[2008]: Connection closed by 139.178.89.65 port 39894 Nov 5 15:44:14.432290 sshd-session[2005]: pam_unix(sshd:session): session closed for user core Nov 5 15:44:14.434492 systemd[1]: sshd@6-139.178.70.108:22-139.178.89.65:39894.service: Deactivated successfully. Nov 5 15:44:14.437079 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 15:44:14.437256 systemd[1]: session-9.scope: Consumed 3.362s CPU time, 156.7M memory peak. Nov 5 15:44:14.440869 systemd-logind[1661]: Session 9 logged out. Waiting for processes to exit. Nov 5 15:44:14.441908 systemd-logind[1661]: Removed session 9. 
Nov 5 15:44:19.612768 kubelet[3004]: I1105 15:44:19.612552 3004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-ng94s" podStartSLOduration=12.645467213 podStartE2EDuration="15.61253913s" podCreationTimestamp="2025-11-05 15:44:04 +0000 UTC" firstStartedPulling="2025-11-05 15:44:04.989842745 +0000 UTC m=+8.037752822" lastFinishedPulling="2025-11-05 15:44:07.956914668 +0000 UTC m=+11.004824739" observedRunningTime="2025-11-05 15:44:08.110766007 +0000 UTC m=+11.158676091" watchObservedRunningTime="2025-11-05 15:44:19.61253913 +0000 UTC m=+22.660449211" Nov 5 15:44:19.642533 systemd[1]: Created slice kubepods-besteffort-podeb9de50a_6c89_4b9b_9a3a_c7d9cff943c9.slice - libcontainer container kubepods-besteffort-podeb9de50a_6c89_4b9b_9a3a_c7d9cff943c9.slice. Nov 5 15:44:19.678691 kubelet[3004]: I1105 15:44:19.678627 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eb9de50a-6c89-4b9b-9a3a-c7d9cff943c9-typha-certs\") pod \"calico-typha-55b7c95f67-25lpv\" (UID: \"eb9de50a-6c89-4b9b-9a3a-c7d9cff943c9\") " pod="calico-system/calico-typha-55b7c95f67-25lpv" Nov 5 15:44:19.678691 kubelet[3004]: I1105 15:44:19.678661 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb9de50a-6c89-4b9b-9a3a-c7d9cff943c9-tigera-ca-bundle\") pod \"calico-typha-55b7c95f67-25lpv\" (UID: \"eb9de50a-6c89-4b9b-9a3a-c7d9cff943c9\") " pod="calico-system/calico-typha-55b7c95f67-25lpv" Nov 5 15:44:19.678691 kubelet[3004]: I1105 15:44:19.678672 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6svl\" (UniqueName: \"kubernetes.io/projected/eb9de50a-6c89-4b9b-9a3a-c7d9cff943c9-kube-api-access-g6svl\") pod \"calico-typha-55b7c95f67-25lpv\" (UID: 
\"eb9de50a-6c89-4b9b-9a3a-c7d9cff943c9\") " pod="calico-system/calico-typha-55b7c95f67-25lpv" Nov 5 15:44:19.886004 systemd[1]: Created slice kubepods-besteffort-pode2c03afa_0a41_4386_81f3_3cbf2dbe588e.slice - libcontainer container kubepods-besteffort-pode2c03afa_0a41_4386_81f3_3cbf2dbe588e.slice. Nov 5 15:44:19.948507 containerd[1687]: time="2025-11-05T15:44:19.948473107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55b7c95f67-25lpv,Uid:eb9de50a-6c89-4b9b-9a3a-c7d9cff943c9,Namespace:calico-system,Attempt:0,}" Nov 5 15:44:19.970966 containerd[1687]: time="2025-11-05T15:44:19.970063196Z" level=info msg="connecting to shim 78696211540a387da609606370158008572e99f75f8213f8d3fa5200b753c641" address="unix:///run/containerd/s/8cff6c5b1ed6f8ad0c1b4bdae9f60cfcbd48fd569c635b5c7d81d4b81716b124" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:44:19.982625 kubelet[3004]: E1105 15:44:19.982439 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwv49" podUID="aa307e49-5503-4739-ace7-169707e5fd38" Nov 5 15:44:19.988636 kubelet[3004]: I1105 15:44:19.988604 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e2c03afa-0a41-4386-81f3-3cbf2dbe588e-policysync\") pod \"calico-node-xl89x\" (UID: \"e2c03afa-0a41-4386-81f3-3cbf2dbe588e\") " pod="calico-system/calico-node-xl89x" Nov 5 15:44:19.989012 kubelet[3004]: I1105 15:44:19.988940 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e2c03afa-0a41-4386-81f3-3cbf2dbe588e-cni-net-dir\") pod \"calico-node-xl89x\" (UID: \"e2c03afa-0a41-4386-81f3-3cbf2dbe588e\") " 
pod="calico-system/calico-node-xl89x" Nov 5 15:44:19.989159 kubelet[3004]: I1105 15:44:19.989069 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e2c03afa-0a41-4386-81f3-3cbf2dbe588e-cni-bin-dir\") pod \"calico-node-xl89x\" (UID: \"e2c03afa-0a41-4386-81f3-3cbf2dbe588e\") " pod="calico-system/calico-node-xl89x" Nov 5 15:44:19.989159 kubelet[3004]: I1105 15:44:19.989092 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vxpp\" (UniqueName: \"kubernetes.io/projected/e2c03afa-0a41-4386-81f3-3cbf2dbe588e-kube-api-access-8vxpp\") pod \"calico-node-xl89x\" (UID: \"e2c03afa-0a41-4386-81f3-3cbf2dbe588e\") " pod="calico-system/calico-node-xl89x" Nov 5 15:44:19.989319 kubelet[3004]: I1105 15:44:19.989260 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e2c03afa-0a41-4386-81f3-3cbf2dbe588e-var-lib-calico\") pod \"calico-node-xl89x\" (UID: \"e2c03afa-0a41-4386-81f3-3cbf2dbe588e\") " pod="calico-system/calico-node-xl89x" Nov 5 15:44:19.989319 kubelet[3004]: I1105 15:44:19.989290 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e2c03afa-0a41-4386-81f3-3cbf2dbe588e-var-run-calico\") pod \"calico-node-xl89x\" (UID: \"e2c03afa-0a41-4386-81f3-3cbf2dbe588e\") " pod="calico-system/calico-node-xl89x" Nov 5 15:44:19.990437 kubelet[3004]: I1105 15:44:19.989305 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e2c03afa-0a41-4386-81f3-3cbf2dbe588e-cni-log-dir\") pod \"calico-node-xl89x\" (UID: \"e2c03afa-0a41-4386-81f3-3cbf2dbe588e\") " pod="calico-system/calico-node-xl89x" Nov 5 15:44:19.990437 
kubelet[3004]: I1105 15:44:19.989886 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2c03afa-0a41-4386-81f3-3cbf2dbe588e-lib-modules\") pod \"calico-node-xl89x\" (UID: \"e2c03afa-0a41-4386-81f3-3cbf2dbe588e\") " pod="calico-system/calico-node-xl89x" Nov 5 15:44:19.990437 kubelet[3004]: I1105 15:44:19.989901 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e2c03afa-0a41-4386-81f3-3cbf2dbe588e-node-certs\") pod \"calico-node-xl89x\" (UID: \"e2c03afa-0a41-4386-81f3-3cbf2dbe588e\") " pod="calico-system/calico-node-xl89x" Nov 5 15:44:19.990437 kubelet[3004]: I1105 15:44:19.989909 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2c03afa-0a41-4386-81f3-3cbf2dbe588e-xtables-lock\") pod \"calico-node-xl89x\" (UID: \"e2c03afa-0a41-4386-81f3-3cbf2dbe588e\") " pod="calico-system/calico-node-xl89x" Nov 5 15:44:19.990437 kubelet[3004]: I1105 15:44:19.990090 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e2c03afa-0a41-4386-81f3-3cbf2dbe588e-flexvol-driver-host\") pod \"calico-node-xl89x\" (UID: \"e2c03afa-0a41-4386-81f3-3cbf2dbe588e\") " pod="calico-system/calico-node-xl89x" Nov 5 15:44:19.990995 kubelet[3004]: I1105 15:44:19.990104 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2c03afa-0a41-4386-81f3-3cbf2dbe588e-tigera-ca-bundle\") pod \"calico-node-xl89x\" (UID: \"e2c03afa-0a41-4386-81f3-3cbf2dbe588e\") " pod="calico-system/calico-node-xl89x" Nov 5 15:44:20.004229 systemd[1]: Started 
cri-containerd-78696211540a387da609606370158008572e99f75f8213f8d3fa5200b753c641.scope - libcontainer container 78696211540a387da609606370158008572e99f75f8213f8d3fa5200b753c641. Nov 5 15:44:20.052622 containerd[1687]: time="2025-11-05T15:44:20.052593822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55b7c95f67-25lpv,Uid:eb9de50a-6c89-4b9b-9a3a-c7d9cff943c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"78696211540a387da609606370158008572e99f75f8213f8d3fa5200b753c641\"" Nov 5 15:44:20.053986 containerd[1687]: time="2025-11-05T15:44:20.053790295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 15:44:20.091463 kubelet[3004]: I1105 15:44:20.091260 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aa307e49-5503-4739-ace7-169707e5fd38-registration-dir\") pod \"csi-node-driver-pwv49\" (UID: \"aa307e49-5503-4739-ace7-169707e5fd38\") " pod="calico-system/csi-node-driver-pwv49" Nov 5 15:44:20.092107 kubelet[3004]: I1105 15:44:20.091920 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aa307e49-5503-4739-ace7-169707e5fd38-socket-dir\") pod \"csi-node-driver-pwv49\" (UID: \"aa307e49-5503-4739-ace7-169707e5fd38\") " pod="calico-system/csi-node-driver-pwv49" Nov 5 15:44:20.093289 kubelet[3004]: I1105 15:44:20.093242 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/aa307e49-5503-4739-ace7-169707e5fd38-varrun\") pod \"csi-node-driver-pwv49\" (UID: \"aa307e49-5503-4739-ace7-169707e5fd38\") " pod="calico-system/csi-node-driver-pwv49" Nov 5 15:44:20.093586 kubelet[3004]: I1105 15:44:20.093572 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fv9hk\" (UniqueName: \"kubernetes.io/projected/aa307e49-5503-4739-ace7-169707e5fd38-kube-api-access-fv9hk\") pod \"csi-node-driver-pwv49\" (UID: \"aa307e49-5503-4739-ace7-169707e5fd38\") " pod="calico-system/csi-node-driver-pwv49" Nov 5 15:44:20.093724 kubelet[3004]: I1105 15:44:20.093713 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa307e49-5503-4739-ace7-169707e5fd38-kubelet-dir\") pod \"csi-node-driver-pwv49\" (UID: \"aa307e49-5503-4739-ace7-169707e5fd38\") " pod="calico-system/csi-node-driver-pwv49" Nov 5 15:44:20.104176 kubelet[3004]: E1105 15:44:20.104147 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.105645 kubelet[3004]: W1105 15:44:20.104295 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.105645 kubelet[3004]: E1105 15:44:20.104330 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.106027 kubelet[3004]: E1105 15:44:20.105867 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.106027 kubelet[3004]: W1105 15:44:20.105885 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.106027 kubelet[3004]: E1105 15:44:20.105903 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:20.106229 kubelet[3004]: E1105 15:44:20.106219 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.106292 kubelet[3004]: W1105 15:44:20.106282 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.106615 kubelet[3004]: E1105 15:44:20.106342 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.107395 kubelet[3004]: E1105 15:44:20.107293 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.107794 kubelet[3004]: W1105 15:44:20.107568 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.107794 kubelet[3004]: E1105 15:44:20.107590 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:20.108301 kubelet[3004]: E1105 15:44:20.108117 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.108301 kubelet[3004]: W1105 15:44:20.108128 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.108301 kubelet[3004]: E1105 15:44:20.108137 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.108797 kubelet[3004]: E1105 15:44:20.108761 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.108996 kubelet[3004]: W1105 15:44:20.108772 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.108996 kubelet[3004]: E1105 15:44:20.108943 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:20.112008 kubelet[3004]: E1105 15:44:20.111924 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.112008 kubelet[3004]: W1105 15:44:20.111944 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.112008 kubelet[3004]: E1105 15:44:20.111982 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.192985 containerd[1687]: time="2025-11-05T15:44:20.192857994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xl89x,Uid:e2c03afa-0a41-4386-81f3-3cbf2dbe588e,Namespace:calico-system,Attempt:0,}" Nov 5 15:44:20.196216 kubelet[3004]: E1105 15:44:20.196175 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.196216 kubelet[3004]: W1105 15:44:20.196188 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.196216 kubelet[3004]: E1105 15:44:20.196203 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:20.196466 kubelet[3004]: E1105 15:44:20.196448 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.196466 kubelet[3004]: W1105 15:44:20.196454 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.196466 kubelet[3004]: E1105 15:44:20.196460 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.196663 kubelet[3004]: E1105 15:44:20.196638 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.196663 kubelet[3004]: W1105 15:44:20.196651 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.196757 kubelet[3004]: E1105 15:44:20.196718 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:20.196877 kubelet[3004]: E1105 15:44:20.196872 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.196966 kubelet[3004]: W1105 15:44:20.196909 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.196966 kubelet[3004]: E1105 15:44:20.196918 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.197140 kubelet[3004]: E1105 15:44:20.197115 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.197140 kubelet[3004]: W1105 15:44:20.197124 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.197140 kubelet[3004]: E1105 15:44:20.197132 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:20.197342 kubelet[3004]: E1105 15:44:20.197323 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.197342 kubelet[3004]: W1105 15:44:20.197329 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.197342 kubelet[3004]: E1105 15:44:20.197335 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.197512 kubelet[3004]: E1105 15:44:20.197495 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.197512 kubelet[3004]: W1105 15:44:20.197501 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.197512 kubelet[3004]: E1105 15:44:20.197506 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:20.197685 kubelet[3004]: E1105 15:44:20.197668 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.197685 kubelet[3004]: W1105 15:44:20.197675 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.197685 kubelet[3004]: E1105 15:44:20.197680 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.197888 kubelet[3004]: E1105 15:44:20.197871 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.197888 kubelet[3004]: W1105 15:44:20.197877 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.197888 kubelet[3004]: E1105 15:44:20.197882 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:20.198093 kubelet[3004]: E1105 15:44:20.198075 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.198093 kubelet[3004]: W1105 15:44:20.198082 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.198093 kubelet[3004]: E1105 15:44:20.198087 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.198295 kubelet[3004]: E1105 15:44:20.198277 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.198295 kubelet[3004]: W1105 15:44:20.198283 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.198295 kubelet[3004]: E1105 15:44:20.198289 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:20.198459 kubelet[3004]: E1105 15:44:20.198443 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.198459 kubelet[3004]: W1105 15:44:20.198449 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.198459 kubelet[3004]: E1105 15:44:20.198454 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.198660 kubelet[3004]: E1105 15:44:20.198643 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.198660 kubelet[3004]: W1105 15:44:20.198649 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.198660 kubelet[3004]: E1105 15:44:20.198654 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:20.198826 kubelet[3004]: E1105 15:44:20.198810 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.198826 kubelet[3004]: W1105 15:44:20.198815 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.198826 kubelet[3004]: E1105 15:44:20.198820 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.199005 kubelet[3004]: E1105 15:44:20.198988 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.199005 kubelet[3004]: W1105 15:44:20.198994 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.199005 kubelet[3004]: E1105 15:44:20.199000 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:20.214090 kubelet[3004]: E1105 15:44:20.214044 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.214090 kubelet[3004]: W1105 15:44:20.214061 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.214090 kubelet[3004]: E1105 15:44:20.214076 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.214380 kubelet[3004]: E1105 15:44:20.214361 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.214380 kubelet[3004]: W1105 15:44:20.214369 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.214380 kubelet[3004]: E1105 15:44:20.214374 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:20.214549 kubelet[3004]: E1105 15:44:20.214544 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.214599 kubelet[3004]: W1105 15:44:20.214584 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.214599 kubelet[3004]: E1105 15:44:20.214593 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.214742 kubelet[3004]: E1105 15:44:20.214736 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.214789 kubelet[3004]: W1105 15:44:20.214775 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.214789 kubelet[3004]: E1105 15:44:20.214783 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:20.214999 kubelet[3004]: E1105 15:44:20.214981 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.214999 kubelet[3004]: W1105 15:44:20.214987 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.214999 kubelet[3004]: E1105 15:44:20.214993 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.215239 kubelet[3004]: E1105 15:44:20.215220 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.215239 kubelet[3004]: W1105 15:44:20.215227 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.215239 kubelet[3004]: E1105 15:44:20.215232 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:20.215420 kubelet[3004]: E1105 15:44:20.215401 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.215420 kubelet[3004]: W1105 15:44:20.215408 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.215420 kubelet[3004]: E1105 15:44:20.215413 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.215622 kubelet[3004]: E1105 15:44:20.215605 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.215622 kubelet[3004]: W1105 15:44:20.215611 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.215622 kubelet[3004]: E1105 15:44:20.215616 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:20.219544 kubelet[3004]: E1105 15:44:20.215920 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.219544 kubelet[3004]: W1105 15:44:20.215926 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.219544 kubelet[3004]: E1105 15:44:20.215932 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.219544 kubelet[3004]: E1105 15:44:20.216059 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.219544 kubelet[3004]: W1105 15:44:20.216064 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.219544 kubelet[3004]: E1105 15:44:20.216069 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:20.227338 kubelet[3004]: E1105 15:44:20.227276 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:20.227338 kubelet[3004]: W1105 15:44:20.227292 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:20.227338 kubelet[3004]: E1105 15:44:20.227308 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:20.242367 containerd[1687]: time="2025-11-05T15:44:20.242262024Z" level=info msg="connecting to shim e0ab9798b106d2a6880f623d2572bcafecde3f9fa2ee887f6c06ce94778e49ce" address="unix:///run/containerd/s/9d8a8fca517133ccbeabb0905c93860820b45f1e2d2094373c4abaa68f53a5c0" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:44:20.270174 systemd[1]: Started cri-containerd-e0ab9798b106d2a6880f623d2572bcafecde3f9fa2ee887f6c06ce94778e49ce.scope - libcontainer container e0ab9798b106d2a6880f623d2572bcafecde3f9fa2ee887f6c06ce94778e49ce. 
Nov 5 15:44:20.306388 containerd[1687]: time="2025-11-05T15:44:20.306355130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xl89x,Uid:e2c03afa-0a41-4386-81f3-3cbf2dbe588e,Namespace:calico-system,Attempt:0,} returns sandbox id \"e0ab9798b106d2a6880f623d2572bcafecde3f9fa2ee887f6c06ce94778e49ce\"" Nov 5 15:44:22.039977 kubelet[3004]: E1105 15:44:22.039671 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwv49" podUID="aa307e49-5503-4739-ace7-169707e5fd38" Nov 5 15:44:22.096601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2814428581.mount: Deactivated successfully. Nov 5 15:44:24.047453 kubelet[3004]: E1105 15:44:24.047375 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwv49" podUID="aa307e49-5503-4739-ace7-169707e5fd38" Nov 5 15:44:25.104515 containerd[1687]: time="2025-11-05T15:44:25.104474994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:44:25.108667 containerd[1687]: time="2025-11-05T15:44:25.108643443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 5 15:44:25.110973 containerd[1687]: time="2025-11-05T15:44:25.110938087Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:44:25.117219 containerd[1687]: time="2025-11-05T15:44:25.117196626Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:44:25.117825 containerd[1687]: time="2025-11-05T15:44:25.117462655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 5.063650928s" Nov 5 15:44:25.117825 containerd[1687]: time="2025-11-05T15:44:25.117482271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 5 15:44:25.118349 containerd[1687]: time="2025-11-05T15:44:25.118333363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 15:44:25.148652 containerd[1687]: time="2025-11-05T15:44:25.148625287Z" level=info msg="CreateContainer within sandbox \"78696211540a387da609606370158008572e99f75f8213f8d3fa5200b753c641\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 15:44:25.158764 containerd[1687]: time="2025-11-05T15:44:25.158740773Z" level=info msg="Container 2d3bc2fac7cb45a83eeb9564c6ec8f7c8b0aafb2809cddbae5f5d09399e3ab49: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:44:25.165043 containerd[1687]: time="2025-11-05T15:44:25.165017573Z" level=info msg="CreateContainer within sandbox \"78696211540a387da609606370158008572e99f75f8213f8d3fa5200b753c641\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2d3bc2fac7cb45a83eeb9564c6ec8f7c8b0aafb2809cddbae5f5d09399e3ab49\"" Nov 5 15:44:25.165550 containerd[1687]: time="2025-11-05T15:44:25.165530660Z" level=info msg="StartContainer for 
\"2d3bc2fac7cb45a83eeb9564c6ec8f7c8b0aafb2809cddbae5f5d09399e3ab49\"" Nov 5 15:44:25.166428 containerd[1687]: time="2025-11-05T15:44:25.166323792Z" level=info msg="connecting to shim 2d3bc2fac7cb45a83eeb9564c6ec8f7c8b0aafb2809cddbae5f5d09399e3ab49" address="unix:///run/containerd/s/8cff6c5b1ed6f8ad0c1b4bdae9f60cfcbd48fd569c635b5c7d81d4b81716b124" protocol=ttrpc version=3 Nov 5 15:44:25.184116 systemd[1]: Started cri-containerd-2d3bc2fac7cb45a83eeb9564c6ec8f7c8b0aafb2809cddbae5f5d09399e3ab49.scope - libcontainer container 2d3bc2fac7cb45a83eeb9564c6ec8f7c8b0aafb2809cddbae5f5d09399e3ab49. Nov 5 15:44:25.276510 containerd[1687]: time="2025-11-05T15:44:25.276432785Z" level=info msg="StartContainer for \"2d3bc2fac7cb45a83eeb9564c6ec8f7c8b0aafb2809cddbae5f5d09399e3ab49\" returns successfully" Nov 5 15:44:26.039965 kubelet[3004]: E1105 15:44:26.039864 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwv49" podUID="aa307e49-5503-4739-ace7-169707e5fd38" Nov 5 15:44:26.201317 kubelet[3004]: E1105 15:44:26.201260 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.201317 kubelet[3004]: W1105 15:44:26.201278 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.201317 kubelet[3004]: E1105 15:44:26.201296 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.201628 kubelet[3004]: E1105 15:44:26.201582 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.201628 kubelet[3004]: W1105 15:44:26.201590 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.201628 kubelet[3004]: E1105 15:44:26.201598 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:26.201875 kubelet[3004]: E1105 15:44:26.201831 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.201875 kubelet[3004]: W1105 15:44:26.201839 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.201875 kubelet[3004]: E1105 15:44:26.201845 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.202188 kubelet[3004]: E1105 15:44:26.202138 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.202188 kubelet[3004]: W1105 15:44:26.202147 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.202188 kubelet[3004]: E1105 15:44:26.202154 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:26.210423 kubelet[3004]: E1105 15:44:26.202561 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.210423 kubelet[3004]: W1105 15:44:26.202568 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.210423 kubelet[3004]: E1105 15:44:26.202575 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.210423 kubelet[3004]: E1105 15:44:26.202705 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.210423 kubelet[3004]: W1105 15:44:26.202712 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.210423 kubelet[3004]: E1105 15:44:26.202718 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:26.210423 kubelet[3004]: E1105 15:44:26.202827 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.210423 kubelet[3004]: W1105 15:44:26.202833 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.210423 kubelet[3004]: E1105 15:44:26.202839 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.210423 kubelet[3004]: E1105 15:44:26.202963 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.210650 kubelet[3004]: W1105 15:44:26.202970 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.210650 kubelet[3004]: E1105 15:44:26.202976 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:26.210650 kubelet[3004]: E1105 15:44:26.203093 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.210650 kubelet[3004]: W1105 15:44:26.203099 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.210650 kubelet[3004]: E1105 15:44:26.203105 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.210650 kubelet[3004]: E1105 15:44:26.203235 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.210650 kubelet[3004]: W1105 15:44:26.203240 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.210650 kubelet[3004]: E1105 15:44:26.203246 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:26.210650 kubelet[3004]: E1105 15:44:26.203379 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.210650 kubelet[3004]: W1105 15:44:26.203386 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.210865 kubelet[3004]: E1105 15:44:26.203391 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.210865 kubelet[3004]: E1105 15:44:26.203509 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.210865 kubelet[3004]: W1105 15:44:26.203530 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.210865 kubelet[3004]: E1105 15:44:26.203539 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:26.210865 kubelet[3004]: E1105 15:44:26.203643 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.210865 kubelet[3004]: W1105 15:44:26.203648 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.210865 kubelet[3004]: E1105 15:44:26.203654 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.210865 kubelet[3004]: E1105 15:44:26.203777 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.210865 kubelet[3004]: W1105 15:44:26.203783 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.210865 kubelet[3004]: E1105 15:44:26.203788 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:26.211133 kubelet[3004]: E1105 15:44:26.203892 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.211133 kubelet[3004]: W1105 15:44:26.203899 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.211133 kubelet[3004]: E1105 15:44:26.203904 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.240361 kubelet[3004]: E1105 15:44:26.240306 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.240361 kubelet[3004]: W1105 15:44:26.240321 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.240361 kubelet[3004]: E1105 15:44:26.240334 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:26.240568 kubelet[3004]: E1105 15:44:26.240551 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.240568 kubelet[3004]: W1105 15:44:26.240565 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.240694 kubelet[3004]: E1105 15:44:26.240578 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.240694 kubelet[3004]: E1105 15:44:26.240685 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.240694 kubelet[3004]: W1105 15:44:26.240691 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.240843 kubelet[3004]: E1105 15:44:26.240698 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:26.240843 kubelet[3004]: E1105 15:44:26.240794 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.240843 kubelet[3004]: W1105 15:44:26.240800 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.240843 kubelet[3004]: E1105 15:44:26.240806 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.241015 kubelet[3004]: E1105 15:44:26.240923 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.241015 kubelet[3004]: W1105 15:44:26.240929 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.241015 kubelet[3004]: E1105 15:44:26.240935 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:26.241476 kubelet[3004]: E1105 15:44:26.241250 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.241476 kubelet[3004]: W1105 15:44:26.241258 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.241476 kubelet[3004]: E1105 15:44:26.241266 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.241476 kubelet[3004]: E1105 15:44:26.241433 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.241476 kubelet[3004]: W1105 15:44:26.241440 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.241476 kubelet[3004]: E1105 15:44:26.241447 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:26.241614 kubelet[3004]: E1105 15:44:26.241550 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.241614 kubelet[3004]: W1105 15:44:26.241555 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.241614 kubelet[3004]: E1105 15:44:26.241572 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.241706 kubelet[3004]: E1105 15:44:26.241684 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.241706 kubelet[3004]: W1105 15:44:26.241693 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.241706 kubelet[3004]: E1105 15:44:26.241700 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:26.241867 kubelet[3004]: E1105 15:44:26.241798 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.241867 kubelet[3004]: W1105 15:44:26.241804 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.241867 kubelet[3004]: E1105 15:44:26.241811 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.242024 kubelet[3004]: E1105 15:44:26.241895 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.242024 kubelet[3004]: W1105 15:44:26.241900 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.242024 kubelet[3004]: E1105 15:44:26.241906 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:26.242024 kubelet[3004]: E1105 15:44:26.242020 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.242330 kubelet[3004]: W1105 15:44:26.242027 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.242330 kubelet[3004]: E1105 15:44:26.242033 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.242330 kubelet[3004]: E1105 15:44:26.242226 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.242330 kubelet[3004]: W1105 15:44:26.242234 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.242330 kubelet[3004]: E1105 15:44:26.242242 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:26.242608 kubelet[3004]: E1105 15:44:26.242487 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.242608 kubelet[3004]: W1105 15:44:26.242493 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.242608 kubelet[3004]: E1105 15:44:26.242500 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.243519 kubelet[3004]: E1105 15:44:26.242713 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.243519 kubelet[3004]: W1105 15:44:26.242719 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.243519 kubelet[3004]: E1105 15:44:26.242726 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:26.243519 kubelet[3004]: E1105 15:44:26.242856 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.243519 kubelet[3004]: W1105 15:44:26.242863 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.243519 kubelet[3004]: E1105 15:44:26.242869 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.243519 kubelet[3004]: E1105 15:44:26.243114 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.243519 kubelet[3004]: W1105 15:44:26.243138 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.243519 kubelet[3004]: E1105 15:44:26.243150 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:44:26.243519 kubelet[3004]: E1105 15:44:26.243282 3004 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:44:26.243728 kubelet[3004]: W1105 15:44:26.243295 3004 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:44:26.243728 kubelet[3004]: E1105 15:44:26.243303 3004 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:44:26.770879 containerd[1687]: time="2025-11-05T15:44:26.770846892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:44:26.777482 containerd[1687]: time="2025-11-05T15:44:26.777450238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 5 15:44:26.780602 containerd[1687]: time="2025-11-05T15:44:26.780568105Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:44:26.784434 containerd[1687]: time="2025-11-05T15:44:26.784401648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:44:26.792706 containerd[1687]: time="2025-11-05T15:44:26.784620810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.66621694s" Nov 5 15:44:26.792706 containerd[1687]: time="2025-11-05T15:44:26.784639018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 5 15:44:26.793130 containerd[1687]: time="2025-11-05T15:44:26.793102677Z" level=info msg="CreateContainer within sandbox \"e0ab9798b106d2a6880f623d2572bcafecde3f9fa2ee887f6c06ce94778e49ce\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 15:44:26.826494 containerd[1687]: time="2025-11-05T15:44:26.826462190Z" level=info msg="Container 7098e6663f5cc0a95bb801aad181e62e4aba64b77287751fdef504b2c5111327: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:44:26.877900 containerd[1687]: time="2025-11-05T15:44:26.877411065Z" level=info msg="CreateContainer within sandbox \"e0ab9798b106d2a6880f623d2572bcafecde3f9fa2ee887f6c06ce94778e49ce\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7098e6663f5cc0a95bb801aad181e62e4aba64b77287751fdef504b2c5111327\"" Nov 5 15:44:26.878468 containerd[1687]: time="2025-11-05T15:44:26.878443552Z" level=info msg="StartContainer for \"7098e6663f5cc0a95bb801aad181e62e4aba64b77287751fdef504b2c5111327\"" Nov 5 15:44:26.880557 containerd[1687]: time="2025-11-05T15:44:26.880538873Z" level=info msg="connecting to shim 7098e6663f5cc0a95bb801aad181e62e4aba64b77287751fdef504b2c5111327" address="unix:///run/containerd/s/9d8a8fca517133ccbeabb0905c93860820b45f1e2d2094373c4abaa68f53a5c0" protocol=ttrpc version=3 Nov 5 15:44:26.898063 systemd[1]: Started cri-containerd-7098e6663f5cc0a95bb801aad181e62e4aba64b77287751fdef504b2c5111327.scope - libcontainer container 7098e6663f5cc0a95bb801aad181e62e4aba64b77287751fdef504b2c5111327. Nov 5 15:44:26.929500 systemd[1]: cri-containerd-7098e6663f5cc0a95bb801aad181e62e4aba64b77287751fdef504b2c5111327.scope: Deactivated successfully. 
Nov 5 15:44:26.933871 containerd[1687]: time="2025-11-05T15:44:26.932211317Z" level=info msg="StartContainer for \"7098e6663f5cc0a95bb801aad181e62e4aba64b77287751fdef504b2c5111327\" returns successfully" Nov 5 15:44:26.971547 containerd[1687]: time="2025-11-05T15:44:26.971513426Z" level=info msg="received exit event container_id:\"7098e6663f5cc0a95bb801aad181e62e4aba64b77287751fdef504b2c5111327\" id:\"7098e6663f5cc0a95bb801aad181e62e4aba64b77287751fdef504b2c5111327\" pid:3635 exited_at:{seconds:1762357466 nanos:934197394}" Nov 5 15:44:27.002417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7098e6663f5cc0a95bb801aad181e62e4aba64b77287751fdef504b2c5111327-rootfs.mount: Deactivated successfully. Nov 5 15:44:27.007176 containerd[1687]: time="2025-11-05T15:44:27.005473129Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7098e6663f5cc0a95bb801aad181e62e4aba64b77287751fdef504b2c5111327\" id:\"7098e6663f5cc0a95bb801aad181e62e4aba64b77287751fdef504b2c5111327\" pid:3635 exited_at:{seconds:1762357466 nanos:934197394}" Nov 5 15:44:27.139359 kubelet[3004]: I1105 15:44:27.139031 3004 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 15:44:27.155966 kubelet[3004]: I1105 15:44:27.155786 3004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55b7c95f67-25lpv" podStartSLOduration=3.091241474 podStartE2EDuration="8.155772983s" podCreationTimestamp="2025-11-05 15:44:19 +0000 UTC" firstStartedPulling="2025-11-05 15:44:20.053506373 +0000 UTC m=+23.101416443" lastFinishedPulling="2025-11-05 15:44:25.118037877 +0000 UTC m=+28.165947952" observedRunningTime="2025-11-05 15:44:26.149815922 +0000 UTC m=+29.197726006" watchObservedRunningTime="2025-11-05 15:44:27.155772983 +0000 UTC m=+30.203683073" Nov 5 15:44:28.039800 kubelet[3004]: E1105 15:44:28.039753 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwv49" podUID="aa307e49-5503-4739-ace7-169707e5fd38" Nov 5 15:44:28.143041 containerd[1687]: time="2025-11-05T15:44:28.142989152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 15:44:30.040535 kubelet[3004]: E1105 15:44:30.040506 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwv49" podUID="aa307e49-5503-4739-ace7-169707e5fd38" Nov 5 15:44:31.752168 containerd[1687]: time="2025-11-05T15:44:31.752142582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:44:31.752765 containerd[1687]: time="2025-11-05T15:44:31.752741199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 5 15:44:31.752989 containerd[1687]: time="2025-11-05T15:44:31.752975090Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:44:31.754280 containerd[1687]: time="2025-11-05T15:44:31.754265131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:44:31.754791 containerd[1687]: time="2025-11-05T15:44:31.754768988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.611753715s" Nov 5 15:44:31.755013 containerd[1687]: time="2025-11-05T15:44:31.754793206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 5 15:44:31.756543 containerd[1687]: time="2025-11-05T15:44:31.756521458Z" level=info msg="CreateContainer within sandbox \"e0ab9798b106d2a6880f623d2572bcafecde3f9fa2ee887f6c06ce94778e49ce\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 15:44:31.762281 containerd[1687]: time="2025-11-05T15:44:31.762168999Z" level=info msg="Container c04c372a883194dd7fb5e4da77befd4151b6e4b364db65e679222ca09f1eced5: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:44:31.785897 containerd[1687]: time="2025-11-05T15:44:31.785869740Z" level=info msg="CreateContainer within sandbox \"e0ab9798b106d2a6880f623d2572bcafecde3f9fa2ee887f6c06ce94778e49ce\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c04c372a883194dd7fb5e4da77befd4151b6e4b364db65e679222ca09f1eced5\"" Nov 5 15:44:31.790469 containerd[1687]: time="2025-11-05T15:44:31.790030713Z" level=info msg="StartContainer for \"c04c372a883194dd7fb5e4da77befd4151b6e4b364db65e679222ca09f1eced5\"" Nov 5 15:44:31.791007 containerd[1687]: time="2025-11-05T15:44:31.790995395Z" level=info msg="connecting to shim c04c372a883194dd7fb5e4da77befd4151b6e4b364db65e679222ca09f1eced5" address="unix:///run/containerd/s/9d8a8fca517133ccbeabb0905c93860820b45f1e2d2094373c4abaa68f53a5c0" protocol=ttrpc version=3 Nov 5 15:44:31.811034 systemd[1]: Started cri-containerd-c04c372a883194dd7fb5e4da77befd4151b6e4b364db65e679222ca09f1eced5.scope - libcontainer container c04c372a883194dd7fb5e4da77befd4151b6e4b364db65e679222ca09f1eced5. 
Nov 5 15:44:31.834212 containerd[1687]: time="2025-11-05T15:44:31.834113452Z" level=info msg="StartContainer for \"c04c372a883194dd7fb5e4da77befd4151b6e4b364db65e679222ca09f1eced5\" returns successfully" Nov 5 15:44:32.040426 kubelet[3004]: E1105 15:44:32.040396 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwv49" podUID="aa307e49-5503-4739-ace7-169707e5fd38" Nov 5 15:44:32.854988 systemd[1]: cri-containerd-c04c372a883194dd7fb5e4da77befd4151b6e4b364db65e679222ca09f1eced5.scope: Deactivated successfully. Nov 5 15:44:32.855448 systemd[1]: cri-containerd-c04c372a883194dd7fb5e4da77befd4151b6e4b364db65e679222ca09f1eced5.scope: Consumed 295ms CPU time, 159M memory peak, 288K read from disk, 171.3M written to disk. Nov 5 15:44:32.906717 containerd[1687]: time="2025-11-05T15:44:32.906534259Z" level=info msg="received exit event container_id:\"c04c372a883194dd7fb5e4da77befd4151b6e4b364db65e679222ca09f1eced5\" id:\"c04c372a883194dd7fb5e4da77befd4151b6e4b364db65e679222ca09f1eced5\" pid:3695 exited_at:{seconds:1762357472 nanos:906414875}" Nov 5 15:44:32.906717 containerd[1687]: time="2025-11-05T15:44:32.906700292Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c04c372a883194dd7fb5e4da77befd4151b6e4b364db65e679222ca09f1eced5\" id:\"c04c372a883194dd7fb5e4da77befd4151b6e4b364db65e679222ca09f1eced5\" pid:3695 exited_at:{seconds:1762357472 nanos:906414875}" Nov 5 15:44:32.929212 kubelet[3004]: I1105 15:44:32.929172 3004 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 15:44:32.940117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c04c372a883194dd7fb5e4da77befd4151b6e4b364db65e679222ca09f1eced5-rootfs.mount: Deactivated successfully. 
Nov 5 15:44:33.101579 systemd[1]: Created slice kubepods-burstable-pod68de4903_f6e5_45c3_b76e_09034eb6e62e.slice - libcontainer container kubepods-burstable-pod68de4903_f6e5_45c3_b76e_09034eb6e62e.slice. Nov 5 15:44:33.108055 systemd[1]: Created slice kubepods-besteffort-pod8eefe537_f46b_421d_a847_6a36d2b266d7.slice - libcontainer container kubepods-besteffort-pod8eefe537_f46b_421d_a847_6a36d2b266d7.slice. Nov 5 15:44:33.113687 systemd[1]: Created slice kubepods-burstable-podab76474d_9152_4195_974c_65cb0f9f5e41.slice - libcontainer container kubepods-burstable-podab76474d_9152_4195_974c_65cb0f9f5e41.slice. Nov 5 15:44:33.117706 kubelet[3004]: I1105 15:44:33.117683 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg7kl\" (UniqueName: \"kubernetes.io/projected/2e5e65c4-fa11-4b09-8e02-96b75e14b836-kube-api-access-rg7kl\") pod \"calico-apiserver-5c8b78b8fb-cphp6\" (UID: \"2e5e65c4-fa11-4b09-8e02-96b75e14b836\") " pod="calico-apiserver/calico-apiserver-5c8b78b8fb-cphp6" Nov 5 15:44:33.117706 kubelet[3004]: I1105 15:44:33.117707 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/032507f8-ebdc-4047-b857-1ac842c7758e-whisker-backend-key-pair\") pod \"whisker-546df4544d-67ft2\" (UID: \"032507f8-ebdc-4047-b857-1ac842c7758e\") " pod="calico-system/whisker-546df4544d-67ft2" Nov 5 15:44:33.117918 kubelet[3004]: I1105 15:44:33.117719 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzg2v\" (UniqueName: \"kubernetes.io/projected/ab76474d-9152-4195-974c-65cb0f9f5e41-kube-api-access-rzg2v\") pod \"coredns-674b8bbfcf-qprtm\" (UID: \"ab76474d-9152-4195-974c-65cb0f9f5e41\") " pod="kube-system/coredns-674b8bbfcf-qprtm" Nov 5 15:44:33.117918 kubelet[3004]: I1105 15:44:33.117732 3004 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68de4903-f6e5-45c3-b76e-09034eb6e62e-config-volume\") pod \"coredns-674b8bbfcf-p8vf5\" (UID: \"68de4903-f6e5-45c3-b76e-09034eb6e62e\") " pod="kube-system/coredns-674b8bbfcf-p8vf5" Nov 5 15:44:33.117918 kubelet[3004]: I1105 15:44:33.117743 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eefe537-f46b-421d-a847-6a36d2b266d7-goldmane-ca-bundle\") pod \"goldmane-666569f655-vd7mt\" (UID: \"8eefe537-f46b-421d-a847-6a36d2b266d7\") " pod="calico-system/goldmane-666569f655-vd7mt" Nov 5 15:44:33.117918 kubelet[3004]: I1105 15:44:33.117752 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pkcj\" (UniqueName: \"kubernetes.io/projected/8eefe537-f46b-421d-a847-6a36d2b266d7-kube-api-access-9pkcj\") pod \"goldmane-666569f655-vd7mt\" (UID: \"8eefe537-f46b-421d-a847-6a36d2b266d7\") " pod="calico-system/goldmane-666569f655-vd7mt" Nov 5 15:44:33.117918 kubelet[3004]: I1105 15:44:33.117761 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2e5e65c4-fa11-4b09-8e02-96b75e14b836-calico-apiserver-certs\") pod \"calico-apiserver-5c8b78b8fb-cphp6\" (UID: \"2e5e65c4-fa11-4b09-8e02-96b75e14b836\") " pod="calico-apiserver/calico-apiserver-5c8b78b8fb-cphp6" Nov 5 15:44:33.119152 kubelet[3004]: I1105 15:44:33.117773 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/924e02ba-63d7-442f-ae24-772364097f08-calico-apiserver-certs\") pod \"calico-apiserver-5c8b78b8fb-6ngl7\" (UID: \"924e02ba-63d7-442f-ae24-772364097f08\") " pod="calico-apiserver/calico-apiserver-5c8b78b8fb-6ngl7" 
Nov 5 15:44:33.119152 kubelet[3004]: I1105 15:44:33.117785 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znk5w\" (UniqueName: \"kubernetes.io/projected/68de4903-f6e5-45c3-b76e-09034eb6e62e-kube-api-access-znk5w\") pod \"coredns-674b8bbfcf-p8vf5\" (UID: \"68de4903-f6e5-45c3-b76e-09034eb6e62e\") " pod="kube-system/coredns-674b8bbfcf-p8vf5" Nov 5 15:44:33.119152 kubelet[3004]: I1105 15:44:33.117795 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl94j\" (UniqueName: \"kubernetes.io/projected/924e02ba-63d7-442f-ae24-772364097f08-kube-api-access-rl94j\") pod \"calico-apiserver-5c8b78b8fb-6ngl7\" (UID: \"924e02ba-63d7-442f-ae24-772364097f08\") " pod="calico-apiserver/calico-apiserver-5c8b78b8fb-6ngl7" Nov 5 15:44:33.119152 kubelet[3004]: I1105 15:44:33.117805 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d011c6a-b4ab-4b64-b7bd-117fed5a2af3-tigera-ca-bundle\") pod \"calico-kube-controllers-5f57597689-7pcp2\" (UID: \"6d011c6a-b4ab-4b64-b7bd-117fed5a2af3\") " pod="calico-system/calico-kube-controllers-5f57597689-7pcp2" Nov 5 15:44:33.119152 kubelet[3004]: I1105 15:44:33.117815 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/032507f8-ebdc-4047-b857-1ac842c7758e-whisker-ca-bundle\") pod \"whisker-546df4544d-67ft2\" (UID: \"032507f8-ebdc-4047-b857-1ac842c7758e\") " pod="calico-system/whisker-546df4544d-67ft2" Nov 5 15:44:33.118693 systemd[1]: Created slice kubepods-besteffort-pod2e5e65c4_fa11_4b09_8e02_96b75e14b836.slice - libcontainer container kubepods-besteffort-pod2e5e65c4_fa11_4b09_8e02_96b75e14b836.slice. 
Nov 5 15:44:33.119423 kubelet[3004]: I1105 15:44:33.117824 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4nx6\" (UniqueName: \"kubernetes.io/projected/032507f8-ebdc-4047-b857-1ac842c7758e-kube-api-access-x4nx6\") pod \"whisker-546df4544d-67ft2\" (UID: \"032507f8-ebdc-4047-b857-1ac842c7758e\") " pod="calico-system/whisker-546df4544d-67ft2" Nov 5 15:44:33.119423 kubelet[3004]: I1105 15:44:33.117839 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8eefe537-f46b-421d-a847-6a36d2b266d7-goldmane-key-pair\") pod \"goldmane-666569f655-vd7mt\" (UID: \"8eefe537-f46b-421d-a847-6a36d2b266d7\") " pod="calico-system/goldmane-666569f655-vd7mt" Nov 5 15:44:33.119423 kubelet[3004]: I1105 15:44:33.117849 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eefe537-f46b-421d-a847-6a36d2b266d7-config\") pod \"goldmane-666569f655-vd7mt\" (UID: \"8eefe537-f46b-421d-a847-6a36d2b266d7\") " pod="calico-system/goldmane-666569f655-vd7mt" Nov 5 15:44:33.119423 kubelet[3004]: I1105 15:44:33.117859 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwt25\" (UniqueName: \"kubernetes.io/projected/6d011c6a-b4ab-4b64-b7bd-117fed5a2af3-kube-api-access-kwt25\") pod \"calico-kube-controllers-5f57597689-7pcp2\" (UID: \"6d011c6a-b4ab-4b64-b7bd-117fed5a2af3\") " pod="calico-system/calico-kube-controllers-5f57597689-7pcp2" Nov 5 15:44:33.119423 kubelet[3004]: I1105 15:44:33.117868 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab76474d-9152-4195-974c-65cb0f9f5e41-config-volume\") pod \"coredns-674b8bbfcf-qprtm\" (UID: 
\"ab76474d-9152-4195-974c-65cb0f9f5e41\") " pod="kube-system/coredns-674b8bbfcf-qprtm" Nov 5 15:44:33.125395 systemd[1]: Created slice kubepods-besteffort-pod6d011c6a_b4ab_4b64_b7bd_117fed5a2af3.slice - libcontainer container kubepods-besteffort-pod6d011c6a_b4ab_4b64_b7bd_117fed5a2af3.slice. Nov 5 15:44:33.129810 systemd[1]: Created slice kubepods-besteffort-pod032507f8_ebdc_4047_b857_1ac842c7758e.slice - libcontainer container kubepods-besteffort-pod032507f8_ebdc_4047_b857_1ac842c7758e.slice. Nov 5 15:44:33.135103 systemd[1]: Created slice kubepods-besteffort-pod924e02ba_63d7_442f_ae24_772364097f08.slice - libcontainer container kubepods-besteffort-pod924e02ba_63d7_442f_ae24_772364097f08.slice. Nov 5 15:44:33.170487 containerd[1687]: time="2025-11-05T15:44:33.170203057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 15:44:33.410918 containerd[1687]: time="2025-11-05T15:44:33.410611394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vd7mt,Uid:8eefe537-f46b-421d-a847-6a36d2b266d7,Namespace:calico-system,Attempt:0,}" Nov 5 15:44:33.410918 containerd[1687]: time="2025-11-05T15:44:33.410611779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p8vf5,Uid:68de4903-f6e5-45c3-b76e-09034eb6e62e,Namespace:kube-system,Attempt:0,}" Nov 5 15:44:33.424409 containerd[1687]: time="2025-11-05T15:44:33.424196836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c8b78b8fb-cphp6,Uid:2e5e65c4-fa11-4b09-8e02-96b75e14b836,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:44:33.424409 containerd[1687]: time="2025-11-05T15:44:33.424347249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qprtm,Uid:ab76474d-9152-4195-974c-65cb0f9f5e41,Namespace:kube-system,Attempt:0,}" Nov 5 15:44:33.441791 containerd[1687]: time="2025-11-05T15:44:33.441548725Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5c8b78b8fb-6ngl7,Uid:924e02ba-63d7-442f-ae24-772364097f08,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:44:33.459963 containerd[1687]: time="2025-11-05T15:44:33.459914507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f57597689-7pcp2,Uid:6d011c6a-b4ab-4b64-b7bd-117fed5a2af3,Namespace:calico-system,Attempt:0,}" Nov 5 15:44:33.460326 containerd[1687]: time="2025-11-05T15:44:33.460309847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-546df4544d-67ft2,Uid:032507f8-ebdc-4047-b857-1ac842c7758e,Namespace:calico-system,Attempt:0,}" Nov 5 15:44:33.762962 containerd[1687]: time="2025-11-05T15:44:33.762920703Z" level=error msg="Failed to destroy network for sandbox \"88dea81d4b0f431e89696d1cfd7972db3689333a5c9d2fcb1738a0dac24a65d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.763606 containerd[1687]: time="2025-11-05T15:44:33.763582870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qprtm,Uid:ab76474d-9152-4195-974c-65cb0f9f5e41,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"88dea81d4b0f431e89696d1cfd7972db3689333a5c9d2fcb1738a0dac24a65d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.765357 containerd[1687]: time="2025-11-05T15:44:33.765338306Z" level=error msg="Failed to destroy network for sandbox \"9b47f9adea1fadd78db2e98a87cce393861fbeb578e1c2024adca63febe5ad9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 
15:44:33.765937 containerd[1687]: time="2025-11-05T15:44:33.765915108Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c8b78b8fb-6ngl7,Uid:924e02ba-63d7-442f-ae24-772364097f08,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b47f9adea1fadd78db2e98a87cce393861fbeb578e1c2024adca63febe5ad9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.768074 kubelet[3004]: E1105 15:44:33.768019 3004 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88dea81d4b0f431e89696d1cfd7972db3689333a5c9d2fcb1738a0dac24a65d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.768183 kubelet[3004]: E1105 15:44:33.768173 3004 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88dea81d4b0f431e89696d1cfd7972db3689333a5c9d2fcb1738a0dac24a65d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qprtm" Nov 5 15:44:33.768571 kubelet[3004]: E1105 15:44:33.768496 3004 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88dea81d4b0f431e89696d1cfd7972db3689333a5c9d2fcb1738a0dac24a65d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-qprtm" Nov 5 15:44:33.773868 kubelet[3004]: E1105 15:44:33.773822 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qprtm_kube-system(ab76474d-9152-4195-974c-65cb0f9f5e41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qprtm_kube-system(ab76474d-9152-4195-974c-65cb0f9f5e41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88dea81d4b0f431e89696d1cfd7972db3689333a5c9d2fcb1738a0dac24a65d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qprtm" podUID="ab76474d-9152-4195-974c-65cb0f9f5e41" Nov 5 15:44:33.778502 containerd[1687]: time="2025-11-05T15:44:33.778212568Z" level=error msg="Failed to destroy network for sandbox \"d87d252590b5f1a012ff313e8e43188a06bfb5be4a444d4549581cb548369647\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.778772 kubelet[3004]: E1105 15:44:33.778670 3004 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b47f9adea1fadd78db2e98a87cce393861fbeb578e1c2024adca63febe5ad9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.778772 kubelet[3004]: E1105 15:44:33.778701 3004 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b47f9adea1fadd78db2e98a87cce393861fbeb578e1c2024adca63febe5ad9b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-6ngl7" Nov 5 15:44:33.778772 kubelet[3004]: E1105 15:44:33.778714 3004 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b47f9adea1fadd78db2e98a87cce393861fbeb578e1c2024adca63febe5ad9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-6ngl7" Nov 5 15:44:33.778852 kubelet[3004]: E1105 15:44:33.778742 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c8b78b8fb-6ngl7_calico-apiserver(924e02ba-63d7-442f-ae24-772364097f08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c8b78b8fb-6ngl7_calico-apiserver(924e02ba-63d7-442f-ae24-772364097f08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b47f9adea1fadd78db2e98a87cce393861fbeb578e1c2024adca63febe5ad9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-6ngl7" podUID="924e02ba-63d7-442f-ae24-772364097f08" Nov 5 15:44:33.779498 containerd[1687]: time="2025-11-05T15:44:33.779430958Z" level=error msg="Failed to destroy network for sandbox \"8189c28bc089b68e0b8d42c787ec03fb0ba8b9b8d1f04a8d340dd41eeefbf8b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.780810 containerd[1687]: 
time="2025-11-05T15:44:33.779832193Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c8b78b8fb-cphp6,Uid:2e5e65c4-fa11-4b09-8e02-96b75e14b836,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87d252590b5f1a012ff313e8e43188a06bfb5be4a444d4549581cb548369647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.780879 containerd[1687]: time="2025-11-05T15:44:33.780498539Z" level=error msg="Failed to destroy network for sandbox \"836dace6c0f3c37dcc599db1b1430105e0b9f9d96b926ff6feb361e4a56bd972\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.782240 kubelet[3004]: E1105 15:44:33.781808 3004 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87d252590b5f1a012ff313e8e43188a06bfb5be4a444d4549581cb548369647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.782300 containerd[1687]: time="2025-11-05T15:44:33.782133138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f57597689-7pcp2,Uid:6d011c6a-b4ab-4b64-b7bd-117fed5a2af3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"836dace6c0f3c37dcc599db1b1430105e0b9f9d96b926ff6feb361e4a56bd972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.782973 
kubelet[3004]: E1105 15:44:33.782368 3004 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87d252590b5f1a012ff313e8e43188a06bfb5be4a444d4549581cb548369647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-cphp6" Nov 5 15:44:33.782973 kubelet[3004]: E1105 15:44:33.782388 3004 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87d252590b5f1a012ff313e8e43188a06bfb5be4a444d4549581cb548369647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-cphp6" Nov 5 15:44:33.782973 kubelet[3004]: E1105 15:44:33.782424 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c8b78b8fb-cphp6_calico-apiserver(2e5e65c4-fa11-4b09-8e02-96b75e14b836)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c8b78b8fb-cphp6_calico-apiserver(2e5e65c4-fa11-4b09-8e02-96b75e14b836)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d87d252590b5f1a012ff313e8e43188a06bfb5be4a444d4549581cb548369647\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-cphp6" podUID="2e5e65c4-fa11-4b09-8e02-96b75e14b836" Nov 5 15:44:33.784244 containerd[1687]: time="2025-11-05T15:44:33.782516213Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-vd7mt,Uid:8eefe537-f46b-421d-a847-6a36d2b266d7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8189c28bc089b68e0b8d42c787ec03fb0ba8b9b8d1f04a8d340dd41eeefbf8b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.784244 containerd[1687]: time="2025-11-05T15:44:33.783244287Z" level=error msg="Failed to destroy network for sandbox \"93ffa537e03d3bb56aac1561fbdd796d1261f776baf48da23f85d3a6e70a8ffe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.784244 containerd[1687]: time="2025-11-05T15:44:33.783624673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p8vf5,Uid:68de4903-f6e5-45c3-b76e-09034eb6e62e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"93ffa537e03d3bb56aac1561fbdd796d1261f776baf48da23f85d3a6e70a8ffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.784244 containerd[1687]: time="2025-11-05T15:44:33.783832527Z" level=error msg="Failed to destroy network for sandbox \"30b364e6744780be5cc0b334f7c67e8a2cb819fdd9bb4208b6fba2a42ff2048c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.784476 kubelet[3004]: E1105 15:44:33.783188 3004 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"8189c28bc089b68e0b8d42c787ec03fb0ba8b9b8d1f04a8d340dd41eeefbf8b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.784476 kubelet[3004]: E1105 15:44:33.783206 3004 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8189c28bc089b68e0b8d42c787ec03fb0ba8b9b8d1f04a8d340dd41eeefbf8b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vd7mt" Nov 5 15:44:33.784476 kubelet[3004]: E1105 15:44:33.783297 3004 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8189c28bc089b68e0b8d42c787ec03fb0ba8b9b8d1f04a8d340dd41eeefbf8b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vd7mt" Nov 5 15:44:33.784536 containerd[1687]: time="2025-11-05T15:44:33.784132213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-546df4544d-67ft2,Uid:032507f8-ebdc-4047-b857-1ac842c7758e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30b364e6744780be5cc0b334f7c67e8a2cb819fdd9bb4208b6fba2a42ff2048c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.784567 kubelet[3004]: E1105 15:44:33.783324 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-666569f655-vd7mt_calico-system(8eefe537-f46b-421d-a847-6a36d2b266d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-vd7mt_calico-system(8eefe537-f46b-421d-a847-6a36d2b266d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8189c28bc089b68e0b8d42c787ec03fb0ba8b9b8d1f04a8d340dd41eeefbf8b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vd7mt" podUID="8eefe537-f46b-421d-a847-6a36d2b266d7" Nov 5 15:44:33.784567 kubelet[3004]: E1105 15:44:33.783347 3004 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"836dace6c0f3c37dcc599db1b1430105e0b9f9d96b926ff6feb361e4a56bd972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.784567 kubelet[3004]: E1105 15:44:33.783358 3004 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"836dace6c0f3c37dcc599db1b1430105e0b9f9d96b926ff6feb361e4a56bd972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f57597689-7pcp2" Nov 5 15:44:33.784636 kubelet[3004]: E1105 15:44:33.783460 3004 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"836dace6c0f3c37dcc599db1b1430105e0b9f9d96b926ff6feb361e4a56bd972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f57597689-7pcp2" Nov 5 15:44:33.784636 kubelet[3004]: E1105 15:44:33.783561 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f57597689-7pcp2_calico-system(6d011c6a-b4ab-4b64-b7bd-117fed5a2af3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f57597689-7pcp2_calico-system(6d011c6a-b4ab-4b64-b7bd-117fed5a2af3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"836dace6c0f3c37dcc599db1b1430105e0b9f9d96b926ff6feb361e4a56bd972\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f57597689-7pcp2" podUID="6d011c6a-b4ab-4b64-b7bd-117fed5a2af3" Nov 5 15:44:33.784636 kubelet[3004]: E1105 15:44:33.783684 3004 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93ffa537e03d3bb56aac1561fbdd796d1261f776baf48da23f85d3a6e70a8ffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.784700 kubelet[3004]: E1105 15:44:33.783698 3004 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93ffa537e03d3bb56aac1561fbdd796d1261f776baf48da23f85d3a6e70a8ffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p8vf5" Nov 5 15:44:33.784700 kubelet[3004]: E1105 15:44:33.783704 3004 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93ffa537e03d3bb56aac1561fbdd796d1261f776baf48da23f85d3a6e70a8ffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p8vf5" Nov 5 15:44:33.784700 kubelet[3004]: E1105 15:44:33.783721 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p8vf5_kube-system(68de4903-f6e5-45c3-b76e-09034eb6e62e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p8vf5_kube-system(68de4903-f6e5-45c3-b76e-09034eb6e62e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93ffa537e03d3bb56aac1561fbdd796d1261f776baf48da23f85d3a6e70a8ffe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p8vf5" podUID="68de4903-f6e5-45c3-b76e-09034eb6e62e" Nov 5 15:44:33.784766 kubelet[3004]: E1105 15:44:33.784323 3004 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30b364e6744780be5cc0b334f7c67e8a2cb819fdd9bb4208b6fba2a42ff2048c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:33.784766 kubelet[3004]: E1105 15:44:33.784338 3004 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30b364e6744780be5cc0b334f7c67e8a2cb819fdd9bb4208b6fba2a42ff2048c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-546df4544d-67ft2" Nov 5 15:44:33.784766 kubelet[3004]: E1105 15:44:33.784347 3004 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30b364e6744780be5cc0b334f7c67e8a2cb819fdd9bb4208b6fba2a42ff2048c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-546df4544d-67ft2" Nov 5 15:44:33.784847 kubelet[3004]: E1105 15:44:33.784365 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-546df4544d-67ft2_calico-system(032507f8-ebdc-4047-b857-1ac842c7758e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-546df4544d-67ft2_calico-system(032507f8-ebdc-4047-b857-1ac842c7758e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30b364e6744780be5cc0b334f7c67e8a2cb819fdd9bb4208b6fba2a42ff2048c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-546df4544d-67ft2" podUID="032507f8-ebdc-4047-b857-1ac842c7758e" Nov 5 15:44:34.048599 systemd[1]: Created slice kubepods-besteffort-podaa307e49_5503_4739_ace7_169707e5fd38.slice - libcontainer container kubepods-besteffort-podaa307e49_5503_4739_ace7_169707e5fd38.slice. 
Nov 5 15:44:34.054775 containerd[1687]: time="2025-11-05T15:44:34.054748247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pwv49,Uid:aa307e49-5503-4739-ace7-169707e5fd38,Namespace:calico-system,Attempt:0,}" Nov 5 15:44:34.101796 containerd[1687]: time="2025-11-05T15:44:34.101765010Z" level=error msg="Failed to destroy network for sandbox \"55edd2209c41dad457a039d2cdc87dd872e056f1e7a7e0a7162c841711d32494\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:34.103159 containerd[1687]: time="2025-11-05T15:44:34.103135686Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pwv49,Uid:aa307e49-5503-4739-ace7-169707e5fd38,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55edd2209c41dad457a039d2cdc87dd872e056f1e7a7e0a7162c841711d32494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:34.103402 kubelet[3004]: E1105 15:44:34.103373 3004 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55edd2209c41dad457a039d2cdc87dd872e056f1e7a7e0a7162c841711d32494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:44:34.103438 kubelet[3004]: E1105 15:44:34.103427 3004 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55edd2209c41dad457a039d2cdc87dd872e056f1e7a7e0a7162c841711d32494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pwv49" Nov 5 15:44:34.103459 kubelet[3004]: E1105 15:44:34.103445 3004 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55edd2209c41dad457a039d2cdc87dd872e056f1e7a7e0a7162c841711d32494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pwv49" Nov 5 15:44:34.103501 kubelet[3004]: E1105 15:44:34.103484 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pwv49_calico-system(aa307e49-5503-4739-ace7-169707e5fd38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pwv49_calico-system(aa307e49-5503-4739-ace7-169707e5fd38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55edd2209c41dad457a039d2cdc87dd872e056f1e7a7e0a7162c841711d32494\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pwv49" podUID="aa307e49-5503-4739-ace7-169707e5fd38" Nov 5 15:44:34.104421 systemd[1]: run-netns-cni\x2dcb7161b5\x2d3836\x2d3cc0\x2d1df4\x2ddcf9cf2b8bbf.mount: Deactivated successfully. Nov 5 15:44:38.883749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2095476241.mount: Deactivated successfully. 
Nov 5 15:44:39.055527 containerd[1687]: time="2025-11-05T15:44:39.050086546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:44:39.061923 containerd[1687]: time="2025-11-05T15:44:39.044147825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 5 15:44:39.072517 containerd[1687]: time="2025-11-05T15:44:39.072497288Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:44:39.078388 containerd[1687]: time="2025-11-05T15:44:39.078286660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:44:39.084658 containerd[1687]: time="2025-11-05T15:44:39.084635709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.91010852s" Nov 5 15:44:39.084701 containerd[1687]: time="2025-11-05T15:44:39.084659449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 15:44:39.810668 containerd[1687]: time="2025-11-05T15:44:39.810632031Z" level=info msg="CreateContainer within sandbox \"e0ab9798b106d2a6880f623d2572bcafecde3f9fa2ee887f6c06ce94778e49ce\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 15:44:39.921906 kubelet[3004]: I1105 15:44:39.884594 3004 prober_manager.go:312] "Failed to trigger a manual 
run" probe="Readiness" Nov 5 15:44:40.018939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564241896.mount: Deactivated successfully. Nov 5 15:44:40.023758 containerd[1687]: time="2025-11-05T15:44:40.019768048Z" level=info msg="Container dd1a844b27397d61eff7cfe5d11dc9922006c7b05e337864855037e5720ae325: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:44:40.314393 containerd[1687]: time="2025-11-05T15:44:40.314360130Z" level=info msg="CreateContainer within sandbox \"e0ab9798b106d2a6880f623d2572bcafecde3f9fa2ee887f6c06ce94778e49ce\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dd1a844b27397d61eff7cfe5d11dc9922006c7b05e337864855037e5720ae325\"" Nov 5 15:44:40.315147 containerd[1687]: time="2025-11-05T15:44:40.314899568Z" level=info msg="StartContainer for \"dd1a844b27397d61eff7cfe5d11dc9922006c7b05e337864855037e5720ae325\"" Nov 5 15:44:40.326097 containerd[1687]: time="2025-11-05T15:44:40.326071284Z" level=info msg="connecting to shim dd1a844b27397d61eff7cfe5d11dc9922006c7b05e337864855037e5720ae325" address="unix:///run/containerd/s/9d8a8fca517133ccbeabb0905c93860820b45f1e2d2094373c4abaa68f53a5c0" protocol=ttrpc version=3 Nov 5 15:44:40.498132 systemd[1]: Started cri-containerd-dd1a844b27397d61eff7cfe5d11dc9922006c7b05e337864855037e5720ae325.scope - libcontainer container dd1a844b27397d61eff7cfe5d11dc9922006c7b05e337864855037e5720ae325. Nov 5 15:44:40.599318 containerd[1687]: time="2025-11-05T15:44:40.598921370Z" level=info msg="StartContainer for \"dd1a844b27397d61eff7cfe5d11dc9922006c7b05e337864855037e5720ae325\" returns successfully" Nov 5 15:44:40.853979 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 15:44:40.855841 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Nov 5 15:44:41.209258 kubelet[3004]: I1105 15:44:41.206872 3004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xl89x" podStartSLOduration=3.366436452 podStartE2EDuration="22.205259133s" podCreationTimestamp="2025-11-05 15:44:19 +0000 UTC" firstStartedPulling="2025-11-05 15:44:20.307650552 +0000 UTC m=+23.355560625" lastFinishedPulling="2025-11-05 15:44:39.146473228 +0000 UTC m=+42.194383306" observedRunningTime="2025-11-05 15:44:41.205187044 +0000 UTC m=+44.253097126" watchObservedRunningTime="2025-11-05 15:44:41.205259133 +0000 UTC m=+44.253169210" Nov 5 15:44:41.277258 kubelet[3004]: I1105 15:44:41.276980 3004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4nx6\" (UniqueName: \"kubernetes.io/projected/032507f8-ebdc-4047-b857-1ac842c7758e-kube-api-access-x4nx6\") pod \"032507f8-ebdc-4047-b857-1ac842c7758e\" (UID: \"032507f8-ebdc-4047-b857-1ac842c7758e\") " Nov 5 15:44:41.277258 kubelet[3004]: I1105 15:44:41.277044 3004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/032507f8-ebdc-4047-b857-1ac842c7758e-whisker-ca-bundle\") pod \"032507f8-ebdc-4047-b857-1ac842c7758e\" (UID: \"032507f8-ebdc-4047-b857-1ac842c7758e\") " Nov 5 15:44:41.277258 kubelet[3004]: I1105 15:44:41.277066 3004 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/032507f8-ebdc-4047-b857-1ac842c7758e-whisker-backend-key-pair\") pod \"032507f8-ebdc-4047-b857-1ac842c7758e\" (UID: \"032507f8-ebdc-4047-b857-1ac842c7758e\") " Nov 5 15:44:41.282191 kubelet[3004]: I1105 15:44:41.281938 3004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/032507f8-ebdc-4047-b857-1ac842c7758e-kube-api-access-x4nx6" (OuterVolumeSpecName: "kube-api-access-x4nx6") pod 
"032507f8-ebdc-4047-b857-1ac842c7758e" (UID: "032507f8-ebdc-4047-b857-1ac842c7758e"). InnerVolumeSpecName "kube-api-access-x4nx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:44:41.282191 kubelet[3004]: I1105 15:44:41.282164 3004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/032507f8-ebdc-4047-b857-1ac842c7758e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "032507f8-ebdc-4047-b857-1ac842c7758e" (UID: "032507f8-ebdc-4047-b857-1ac842c7758e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:44:41.282322 systemd[1]: var-lib-kubelet-pods-032507f8\x2debdc\x2d4047\x2db857\x2d1ac842c7758e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx4nx6.mount: Deactivated successfully. Nov 5 15:44:41.286324 kubelet[3004]: I1105 15:44:41.286270 3004 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/032507f8-ebdc-4047-b857-1ac842c7758e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "032507f8-ebdc-4047-b857-1ac842c7758e" (UID: "032507f8-ebdc-4047-b857-1ac842c7758e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 15:44:41.287247 systemd[1]: var-lib-kubelet-pods-032507f8\x2debdc\x2d4047\x2db857\x2d1ac842c7758e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 5 15:44:41.378052 kubelet[3004]: I1105 15:44:41.378019 3004 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/032507f8-ebdc-4047-b857-1ac842c7758e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 5 15:44:41.378052 kubelet[3004]: I1105 15:44:41.378041 3004 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/032507f8-ebdc-4047-b857-1ac842c7758e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 5 15:44:41.378052 kubelet[3004]: I1105 15:44:41.378047 3004 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x4nx6\" (UniqueName: \"kubernetes.io/projected/032507f8-ebdc-4047-b857-1ac842c7758e-kube-api-access-x4nx6\") on node \"localhost\" DevicePath \"\"" Nov 5 15:44:41.450462 containerd[1687]: time="2025-11-05T15:44:41.449860466Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd1a844b27397d61eff7cfe5d11dc9922006c7b05e337864855037e5720ae325\" id:\"0484d8bbdc3598d432cda7254b8854134c6d22cb21a41c2c8de965fec18bc12b\" pid:4039 exit_status:1 exited_at:{seconds:1762357481 nanos:444295092}" Nov 5 15:44:41.499262 systemd[1]: Removed slice kubepods-besteffort-pod032507f8_ebdc_4047_b857_1ac842c7758e.slice - libcontainer container kubepods-besteffort-pod032507f8_ebdc_4047_b857_1ac842c7758e.slice. Nov 5 15:44:41.572390 systemd[1]: Created slice kubepods-besteffort-podc9958fac_31c8_4b49_8704_7a3e667cd144.slice - libcontainer container kubepods-besteffort-podc9958fac_31c8_4b49_8704_7a3e667cd144.slice. 
Nov 5 15:44:41.699925 kubelet[3004]: I1105 15:44:41.699896 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c9958fac-31c8-4b49-8704-7a3e667cd144-whisker-backend-key-pair\") pod \"whisker-5b58b895f4-4cnd5\" (UID: \"c9958fac-31c8-4b49-8704-7a3e667cd144\") " pod="calico-system/whisker-5b58b895f4-4cnd5" Nov 5 15:44:41.700073 kubelet[3004]: I1105 15:44:41.699940 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9958fac-31c8-4b49-8704-7a3e667cd144-whisker-ca-bundle\") pod \"whisker-5b58b895f4-4cnd5\" (UID: \"c9958fac-31c8-4b49-8704-7a3e667cd144\") " pod="calico-system/whisker-5b58b895f4-4cnd5" Nov 5 15:44:41.700073 kubelet[3004]: I1105 15:44:41.699961 3004 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp2dq\" (UniqueName: \"kubernetes.io/projected/c9958fac-31c8-4b49-8704-7a3e667cd144-kube-api-access-pp2dq\") pod \"whisker-5b58b895f4-4cnd5\" (UID: \"c9958fac-31c8-4b49-8704-7a3e667cd144\") " pod="calico-system/whisker-5b58b895f4-4cnd5" Nov 5 15:44:41.891041 containerd[1687]: time="2025-11-05T15:44:41.890905938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b58b895f4-4cnd5,Uid:c9958fac-31c8-4b49-8704-7a3e667cd144,Namespace:calico-system,Attempt:0,}" Nov 5 15:44:42.447179 systemd-networkd[1585]: calie39794dee52: Link UP Nov 5 15:44:42.447585 systemd-networkd[1585]: calie39794dee52: Gained carrier Nov 5 15:44:42.459995 containerd[1687]: 2025-11-05 15:44:41.926 [INFO][4054] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:44:42.459995 containerd[1687]: 2025-11-05 15:44:41.972 [INFO][4054] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5b58b895f4--4cnd5-eth0 
whisker-5b58b895f4- calico-system c9958fac-31c8-4b49-8704-7a3e667cd144 933 0 2025-11-05 15:44:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b58b895f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5b58b895f4-4cnd5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie39794dee52 [] [] }} ContainerID="c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" Namespace="calico-system" Pod="whisker-5b58b895f4-4cnd5" WorkloadEndpoint="localhost-k8s-whisker--5b58b895f4--4cnd5-" Nov 5 15:44:42.459995 containerd[1687]: 2025-11-05 15:44:41.973 [INFO][4054] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" Namespace="calico-system" Pod="whisker-5b58b895f4-4cnd5" WorkloadEndpoint="localhost-k8s-whisker--5b58b895f4--4cnd5-eth0" Nov 5 15:44:42.459995 containerd[1687]: 2025-11-05 15:44:42.341 [INFO][4065] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" HandleID="k8s-pod-network.c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" Workload="localhost-k8s-whisker--5b58b895f4--4cnd5-eth0" Nov 5 15:44:42.461341 containerd[1687]: 2025-11-05 15:44:42.347 [INFO][4065] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" HandleID="k8s-pod-network.c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" Workload="localhost-k8s-whisker--5b58b895f4--4cnd5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000387d10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5b58b895f4-4cnd5", "timestamp":"2025-11-05 15:44:42.341944425 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:44:42.461341 containerd[1687]: 2025-11-05 15:44:42.347 [INFO][4065] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:44:42.461341 containerd[1687]: 2025-11-05 15:44:42.352 [INFO][4065] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:44:42.461341 containerd[1687]: 2025-11-05 15:44:42.353 [INFO][4065] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:44:42.461341 containerd[1687]: 2025-11-05 15:44:42.382 [INFO][4065] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" host="localhost" Nov 5 15:44:42.461341 containerd[1687]: 2025-11-05 15:44:42.405 [INFO][4065] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:44:42.461341 containerd[1687]: 2025-11-05 15:44:42.410 [INFO][4065] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:44:42.461341 containerd[1687]: 2025-11-05 15:44:42.411 [INFO][4065] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:44:42.461341 containerd[1687]: 2025-11-05 15:44:42.413 [INFO][4065] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:44:42.461341 containerd[1687]: 2025-11-05 15:44:42.413 [INFO][4065] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" host="localhost" Nov 5 15:44:42.464100 containerd[1687]: 2025-11-05 15:44:42.414 [INFO][4065] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5 Nov 5 15:44:42.464100 containerd[1687]: 
2025-11-05 15:44:42.417 [INFO][4065] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" host="localhost" Nov 5 15:44:42.464100 containerd[1687]: 2025-11-05 15:44:42.421 [INFO][4065] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" host="localhost" Nov 5 15:44:42.464100 containerd[1687]: 2025-11-05 15:44:42.422 [INFO][4065] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" host="localhost" Nov 5 15:44:42.464100 containerd[1687]: 2025-11-05 15:44:42.422 [INFO][4065] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:44:42.464100 containerd[1687]: 2025-11-05 15:44:42.422 [INFO][4065] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" HandleID="k8s-pod-network.c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" Workload="localhost-k8s-whisker--5b58b895f4--4cnd5-eth0" Nov 5 15:44:42.464199 containerd[1687]: 2025-11-05 15:44:42.424 [INFO][4054] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" Namespace="calico-system" Pod="whisker-5b58b895f4-4cnd5" WorkloadEndpoint="localhost-k8s-whisker--5b58b895f4--4cnd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b58b895f4--4cnd5-eth0", GenerateName:"whisker-5b58b895f4-", Namespace:"calico-system", SelfLink:"", UID:"c9958fac-31c8-4b49-8704-7a3e667cd144", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 
15, 44, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b58b895f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5b58b895f4-4cnd5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie39794dee52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:44:42.464199 containerd[1687]: 2025-11-05 15:44:42.425 [INFO][4054] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" Namespace="calico-system" Pod="whisker-5b58b895f4-4cnd5" WorkloadEndpoint="localhost-k8s-whisker--5b58b895f4--4cnd5-eth0" Nov 5 15:44:42.464847 containerd[1687]: 2025-11-05 15:44:42.425 [INFO][4054] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie39794dee52 ContainerID="c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" Namespace="calico-system" Pod="whisker-5b58b895f4-4cnd5" WorkloadEndpoint="localhost-k8s-whisker--5b58b895f4--4cnd5-eth0" Nov 5 15:44:42.464847 containerd[1687]: 2025-11-05 15:44:42.444 [INFO][4054] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" Namespace="calico-system" Pod="whisker-5b58b895f4-4cnd5" 
WorkloadEndpoint="localhost-k8s-whisker--5b58b895f4--4cnd5-eth0" Nov 5 15:44:42.465797 containerd[1687]: 2025-11-05 15:44:42.446 [INFO][4054] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" Namespace="calico-system" Pod="whisker-5b58b895f4-4cnd5" WorkloadEndpoint="localhost-k8s-whisker--5b58b895f4--4cnd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b58b895f4--4cnd5-eth0", GenerateName:"whisker-5b58b895f4-", Namespace:"calico-system", SelfLink:"", UID:"c9958fac-31c8-4b49-8704-7a3e667cd144", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 44, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b58b895f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5", Pod:"whisker-5b58b895f4-4cnd5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie39794dee52", MAC:"72:bb:7a:d3:a7:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:44:42.466074 containerd[1687]: 2025-11-05 15:44:42.457 [INFO][4054] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" Namespace="calico-system" Pod="whisker-5b58b895f4-4cnd5" WorkloadEndpoint="localhost-k8s-whisker--5b58b895f4--4cnd5-eth0" Nov 5 15:44:42.524944 containerd[1687]: time="2025-11-05T15:44:42.524916697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd1a844b27397d61eff7cfe5d11dc9922006c7b05e337864855037e5720ae325\" id:\"967ee53c175f081b2c918063e89c5bc53dd6c331df78a554d0f3e65c1ae9a31e\" pid:4167 exit_status:1 exited_at:{seconds:1762357482 nanos:524136551}" Nov 5 15:44:42.604860 containerd[1687]: time="2025-11-05T15:44:42.604830947Z" level=info msg="connecting to shim c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5" address="unix:///run/containerd/s/0c4b89efae01a00b671cd0c11ef7e4fe992a51a3ac40f1ca7e8fe57f53101b4c" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:44:42.632362 systemd[1]: Started cri-containerd-c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5.scope - libcontainer container c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5. 
Nov 5 15:44:42.654353 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:44:42.692337 containerd[1687]: time="2025-11-05T15:44:42.692296532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b58b895f4-4cnd5,Uid:c9958fac-31c8-4b49-8704-7a3e667cd144,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8ea4dcfec36ffc9f00bebec22288fab367de5edc188f76ef531ab89de984bd5\"" Nov 5 15:44:42.708905 containerd[1687]: time="2025-11-05T15:44:42.708814659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:44:42.836343 systemd-networkd[1585]: vxlan.calico: Link UP Nov 5 15:44:42.836348 systemd-networkd[1585]: vxlan.calico: Gained carrier Nov 5 15:44:43.063335 kubelet[3004]: I1105 15:44:43.062315 3004 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="032507f8-ebdc-4047-b857-1ac842c7758e" path="/var/lib/kubelet/pods/032507f8-ebdc-4047-b857-1ac842c7758e/volumes" Nov 5 15:44:43.094082 containerd[1687]: time="2025-11-05T15:44:43.094042457Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:44:43.137296 containerd[1687]: time="2025-11-05T15:44:43.137227346Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:44:43.137296 containerd[1687]: time="2025-11-05T15:44:43.137276937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:44:43.137508 kubelet[3004]: E1105 15:44:43.137482 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:44:43.140595 kubelet[3004]: E1105 15:44:43.140482 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:44:43.145516 kubelet[3004]: E1105 15:44:43.145481 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f8d2740034654cd6baf7021a15760839,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pp2dq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessage
Policy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b58b895f4-4cnd5_calico-system(c9958fac-31c8-4b49-8704-7a3e667cd144): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:44:43.147892 containerd[1687]: time="2025-11-05T15:44:43.147846289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:44:43.566540 containerd[1687]: time="2025-11-05T15:44:43.566506921Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:44:43.575200 containerd[1687]: time="2025-11-05T15:44:43.575161670Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:44:43.575326 containerd[1687]: time="2025-11-05T15:44:43.575217040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:44:43.575369 kubelet[3004]: E1105 15:44:43.575315 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:44:43.575369 kubelet[3004]: E1105 15:44:43.575347 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:44:43.575465 kubelet[3004]: E1105 15:44:43.575433 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pp2dq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:n
il,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b58b895f4-4cnd5_calico-system(c9958fac-31c8-4b49-8704-7a3e667cd144): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:44:43.576719 kubelet[3004]: E1105 15:44:43.576693 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b58b895f4-4cnd5" podUID="c9958fac-31c8-4b49-8704-7a3e667cd144" Nov 5 15:44:43.740064 systemd-networkd[1585]: calie39794dee52: Gained IPv6LL Nov 5 15:44:44.039995 containerd[1687]: time="2025-11-05T15:44:44.039944599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qprtm,Uid:ab76474d-9152-4195-974c-65cb0f9f5e41,Namespace:kube-system,Attempt:0,}" Nov 5 15:44:44.115997 systemd-networkd[1585]: cali24192b0b5f9: Link UP Nov 5 15:44:44.116106 systemd-networkd[1585]: cali24192b0b5f9: Gained carrier Nov 5 15:44:44.129869 containerd[1687]: 2025-11-05 15:44:44.063 
[INFO][4354] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--qprtm-eth0 coredns-674b8bbfcf- kube-system ab76474d-9152-4195-974c-65cb0f9f5e41 856 0 2025-11-05 15:44:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-qprtm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali24192b0b5f9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" Namespace="kube-system" Pod="coredns-674b8bbfcf-qprtm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qprtm-" Nov 5 15:44:44.129869 containerd[1687]: 2025-11-05 15:44:44.063 [INFO][4354] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" Namespace="kube-system" Pod="coredns-674b8bbfcf-qprtm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qprtm-eth0" Nov 5 15:44:44.129869 containerd[1687]: 2025-11-05 15:44:44.089 [INFO][4366] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" HandleID="k8s-pod-network.1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" Workload="localhost-k8s-coredns--674b8bbfcf--qprtm-eth0" Nov 5 15:44:44.130092 containerd[1687]: 2025-11-05 15:44:44.089 [INFO][4366] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" HandleID="k8s-pod-network.1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" Workload="localhost-k8s-coredns--674b8bbfcf--qprtm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f1d0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-qprtm", "timestamp":"2025-11-05 15:44:44.089498436 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:44:44.130092 containerd[1687]: 2025-11-05 15:44:44.089 [INFO][4366] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:44:44.130092 containerd[1687]: 2025-11-05 15:44:44.089 [INFO][4366] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:44:44.130092 containerd[1687]: 2025-11-05 15:44:44.089 [INFO][4366] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:44:44.130092 containerd[1687]: 2025-11-05 15:44:44.095 [INFO][4366] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" host="localhost" Nov 5 15:44:44.130092 containerd[1687]: 2025-11-05 15:44:44.098 [INFO][4366] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:44:44.130092 containerd[1687]: 2025-11-05 15:44:44.100 [INFO][4366] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:44:44.130092 containerd[1687]: 2025-11-05 15:44:44.101 [INFO][4366] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:44:44.130092 containerd[1687]: 2025-11-05 15:44:44.102 [INFO][4366] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:44:44.130092 containerd[1687]: 2025-11-05 15:44:44.102 [INFO][4366] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" host="localhost" Nov 5 15:44:44.131044 
containerd[1687]: 2025-11-05 15:44:44.103 [INFO][4366] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a Nov 5 15:44:44.131044 containerd[1687]: 2025-11-05 15:44:44.106 [INFO][4366] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" host="localhost" Nov 5 15:44:44.131044 containerd[1687]: 2025-11-05 15:44:44.110 [INFO][4366] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" host="localhost" Nov 5 15:44:44.131044 containerd[1687]: 2025-11-05 15:44:44.110 [INFO][4366] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" host="localhost" Nov 5 15:44:44.131044 containerd[1687]: 2025-11-05 15:44:44.110 [INFO][4366] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:44:44.131044 containerd[1687]: 2025-11-05 15:44:44.110 [INFO][4366] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" HandleID="k8s-pod-network.1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" Workload="localhost-k8s-coredns--674b8bbfcf--qprtm-eth0" Nov 5 15:44:44.131181 containerd[1687]: 2025-11-05 15:44:44.112 [INFO][4354] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" Namespace="kube-system" Pod="coredns-674b8bbfcf-qprtm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qprtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qprtm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ab76474d-9152-4195-974c-65cb0f9f5e41", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 44, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-qprtm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24192b0b5f9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:44:44.131229 containerd[1687]: 2025-11-05 15:44:44.112 [INFO][4354] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" Namespace="kube-system" Pod="coredns-674b8bbfcf-qprtm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qprtm-eth0" Nov 5 15:44:44.131229 containerd[1687]: 2025-11-05 15:44:44.112 [INFO][4354] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24192b0b5f9 ContainerID="1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" Namespace="kube-system" Pod="coredns-674b8bbfcf-qprtm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qprtm-eth0" Nov 5 15:44:44.131229 containerd[1687]: 2025-11-05 15:44:44.116 [INFO][4354] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" Namespace="kube-system" Pod="coredns-674b8bbfcf-qprtm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qprtm-eth0" Nov 5 15:44:44.131282 containerd[1687]: 2025-11-05 15:44:44.116 [INFO][4354] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" Namespace="kube-system" Pod="coredns-674b8bbfcf-qprtm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qprtm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qprtm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ab76474d-9152-4195-974c-65cb0f9f5e41", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 44, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a", Pod:"coredns-674b8bbfcf-qprtm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24192b0b5f9", MAC:"0e:bf:78:29:9b:aa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:44:44.131282 containerd[1687]: 2025-11-05 15:44:44.125 [INFO][4354] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" Namespace="kube-system" Pod="coredns-674b8bbfcf-qprtm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qprtm-eth0" Nov 5 15:44:44.145353 containerd[1687]: time="2025-11-05T15:44:44.145316065Z" level=info msg="connecting to shim 1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a" address="unix:///run/containerd/s/ae0d16fce97ab604e3789c67743c82cead04ec2172c56306ad9537add75f6a97" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:44:44.166097 systemd[1]: Started cri-containerd-1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a.scope - libcontainer container 1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a. Nov 5 15:44:44.175663 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:44:44.199075 kubelet[3004]: E1105 15:44:44.199010 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b58b895f4-4cnd5" podUID="c9958fac-31c8-4b49-8704-7a3e667cd144" Nov 5 15:44:44.202732 containerd[1687]: 
time="2025-11-05T15:44:44.202710098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qprtm,Uid:ab76474d-9152-4195-974c-65cb0f9f5e41,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a\"" Nov 5 15:44:44.207640 containerd[1687]: time="2025-11-05T15:44:44.207571418Z" level=info msg="CreateContainer within sandbox \"1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:44:44.230634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2346138020.mount: Deactivated successfully. Nov 5 15:44:44.240530 containerd[1687]: time="2025-11-05T15:44:44.240497107Z" level=info msg="Container 327373d9b18c81b7a2bcb35e248533d241337e4de80b1787e5aab61f75e9f918: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:44:44.243516 containerd[1687]: time="2025-11-05T15:44:44.243443899Z" level=info msg="CreateContainer within sandbox \"1ac3a5bee292794916d2042e49a0a766e9eb2da2407b712701f23baa8f71958a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"327373d9b18c81b7a2bcb35e248533d241337e4de80b1787e5aab61f75e9f918\"" Nov 5 15:44:44.244020 containerd[1687]: time="2025-11-05T15:44:44.244003564Z" level=info msg="StartContainer for \"327373d9b18c81b7a2bcb35e248533d241337e4de80b1787e5aab61f75e9f918\"" Nov 5 15:44:44.244507 containerd[1687]: time="2025-11-05T15:44:44.244492711Z" level=info msg="connecting to shim 327373d9b18c81b7a2bcb35e248533d241337e4de80b1787e5aab61f75e9f918" address="unix:///run/containerd/s/ae0d16fce97ab604e3789c67743c82cead04ec2172c56306ad9537add75f6a97" protocol=ttrpc version=3 Nov 5 15:44:44.252693 systemd-networkd[1585]: vxlan.calico: Gained IPv6LL Nov 5 15:44:44.266888 systemd[1]: Started cri-containerd-327373d9b18c81b7a2bcb35e248533d241337e4de80b1787e5aab61f75e9f918.scope - libcontainer container 327373d9b18c81b7a2bcb35e248533d241337e4de80b1787e5aab61f75e9f918. 
Nov 5 15:44:44.346119 containerd[1687]: time="2025-11-05T15:44:44.346044523Z" level=info msg="StartContainer for \"327373d9b18c81b7a2bcb35e248533d241337e4de80b1787e5aab61f75e9f918\" returns successfully" Nov 5 15:44:45.212047 systemd-networkd[1585]: cali24192b0b5f9: Gained IPv6LL Nov 5 15:44:45.226294 kubelet[3004]: I1105 15:44:45.222631 3004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qprtm" podStartSLOduration=41.222613159 podStartE2EDuration="41.222613159s" podCreationTimestamp="2025-11-05 15:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:44:45.208098032 +0000 UTC m=+48.256008107" watchObservedRunningTime="2025-11-05 15:44:45.222613159 +0000 UTC m=+48.270523236" Nov 5 15:44:47.041429 containerd[1687]: time="2025-11-05T15:44:47.041385998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c8b78b8fb-cphp6,Uid:2e5e65c4-fa11-4b09-8e02-96b75e14b836,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:44:47.041875 containerd[1687]: time="2025-11-05T15:44:47.041590410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c8b78b8fb-6ngl7,Uid:924e02ba-63d7-442f-ae24-772364097f08,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:44:47.041875 containerd[1687]: time="2025-11-05T15:44:47.041840990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f57597689-7pcp2,Uid:6d011c6a-b4ab-4b64-b7bd-117fed5a2af3,Namespace:calico-system,Attempt:0,}" Nov 5 15:44:47.283732 systemd-networkd[1585]: calib176338bab3: Link UP Nov 5 15:44:47.284944 systemd-networkd[1585]: calib176338bab3: Gained carrier Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.099 [INFO][4476] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c8b78b8fb--cphp6-eth0 
calico-apiserver-5c8b78b8fb- calico-apiserver 2e5e65c4-fa11-4b09-8e02-96b75e14b836 857 0 2025-11-05 15:44:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c8b78b8fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c8b78b8fb-cphp6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib176338bab3 [] [] }} ContainerID="41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" Namespace="calico-apiserver" Pod="calico-apiserver-5c8b78b8fb-cphp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c8b78b8fb--cphp6-" Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.100 [INFO][4476] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" Namespace="calico-apiserver" Pod="calico-apiserver-5c8b78b8fb-cphp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c8b78b8fb--cphp6-eth0" Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.249 [INFO][4512] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" HandleID="k8s-pod-network.41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" Workload="localhost-k8s-calico--apiserver--5c8b78b8fb--cphp6-eth0" Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.249 [INFO][4512] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" HandleID="k8s-pod-network.41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" Workload="localhost-k8s-calico--apiserver--5c8b78b8fb--cphp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037d740), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"localhost", "pod":"calico-apiserver-5c8b78b8fb-cphp6", "timestamp":"2025-11-05 15:44:47.249174314 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.249 [INFO][4512] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.249 [INFO][4512] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.249 [INFO][4512] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.262 [INFO][4512] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" host="localhost" Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.265 [INFO][4512] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.267 [INFO][4512] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.268 [INFO][4512] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.269 [INFO][4512] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.269 [INFO][4512] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" host="localhost" Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.270 [INFO][4512] 
ipam/ipam.go 1780: Creating new handle: k8s-pod-network.41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339 Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.273 [INFO][4512] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" host="localhost" Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.275 [INFO][4512] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" host="localhost" Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.275 [INFO][4512] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" host="localhost" Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.275 [INFO][4512] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:44:47.298298 containerd[1687]: 2025-11-05 15:44:47.275 [INFO][4512] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" HandleID="k8s-pod-network.41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" Workload="localhost-k8s-calico--apiserver--5c8b78b8fb--cphp6-eth0" Nov 5 15:44:47.300734 containerd[1687]: 2025-11-05 15:44:47.279 [INFO][4476] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" Namespace="calico-apiserver" Pod="calico-apiserver-5c8b78b8fb-cphp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c8b78b8fb--cphp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c8b78b8fb--cphp6-eth0", GenerateName:"calico-apiserver-5c8b78b8fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e5e65c4-fa11-4b09-8e02-96b75e14b836", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c8b78b8fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c8b78b8fb-cphp6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib176338bab3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:44:47.300734 containerd[1687]: 2025-11-05 15:44:47.279 [INFO][4476] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" Namespace="calico-apiserver" Pod="calico-apiserver-5c8b78b8fb-cphp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c8b78b8fb--cphp6-eth0" Nov 5 15:44:47.300734 containerd[1687]: 2025-11-05 15:44:47.280 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib176338bab3 ContainerID="41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" Namespace="calico-apiserver" Pod="calico-apiserver-5c8b78b8fb-cphp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c8b78b8fb--cphp6-eth0" Nov 5 15:44:47.300734 containerd[1687]: 2025-11-05 15:44:47.286 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" Namespace="calico-apiserver" Pod="calico-apiserver-5c8b78b8fb-cphp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c8b78b8fb--cphp6-eth0" Nov 5 15:44:47.300734 containerd[1687]: 2025-11-05 15:44:47.287 [INFO][4476] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" Namespace="calico-apiserver" Pod="calico-apiserver-5c8b78b8fb-cphp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c8b78b8fb--cphp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c8b78b8fb--cphp6-eth0", 
GenerateName:"calico-apiserver-5c8b78b8fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e5e65c4-fa11-4b09-8e02-96b75e14b836", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c8b78b8fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339", Pod:"calico-apiserver-5c8b78b8fb-cphp6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib176338bab3", MAC:"7e:3e:40:03:06:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:44:47.300734 containerd[1687]: 2025-11-05 15:44:47.293 [INFO][4476] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" Namespace="calico-apiserver" Pod="calico-apiserver-5c8b78b8fb-cphp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c8b78b8fb--cphp6-eth0" Nov 5 15:44:47.326287 containerd[1687]: time="2025-11-05T15:44:47.326228031Z" level=info msg="connecting to shim 41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339" 
address="unix:///run/containerd/s/34fd8a52a3420d1d59b8920960f26b51239651077531b98024dbe116fd423962" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:44:47.367202 systemd[1]: Started cri-containerd-41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339.scope - libcontainer container 41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339. Nov 5 15:44:47.382618 systemd-networkd[1585]: cali90015b8ce22: Link UP Nov 5 15:44:47.383469 systemd-networkd[1585]: cali90015b8ce22: Gained carrier Nov 5 15:44:47.398639 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.097 [INFO][4479] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c8b78b8fb--6ngl7-eth0 calico-apiserver-5c8b78b8fb- calico-apiserver 924e02ba-63d7-442f-ae24-772364097f08 859 0 2025-11-05 15:44:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c8b78b8fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c8b78b8fb-6ngl7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali90015b8ce22 [] [] }} ContainerID="baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" Namespace="calico-apiserver" Pod="calico-apiserver-5c8b78b8fb-6ngl7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c8b78b8fb--6ngl7-" Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.102 [INFO][4479] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" Namespace="calico-apiserver" Pod="calico-apiserver-5c8b78b8fb-6ngl7" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5c8b78b8fb--6ngl7-eth0" Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.249 [INFO][4516] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" HandleID="k8s-pod-network.baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" Workload="localhost-k8s-calico--apiserver--5c8b78b8fb--6ngl7-eth0" Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.249 [INFO][4516] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" HandleID="k8s-pod-network.baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" Workload="localhost-k8s-calico--apiserver--5c8b78b8fb--6ngl7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035fa70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c8b78b8fb-6ngl7", "timestamp":"2025-11-05 15:44:47.249722573 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.249 [INFO][4516] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.275 [INFO][4516] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.276 [INFO][4516] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.362 [INFO][4516] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" host="localhost" Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.366 [INFO][4516] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.368 [INFO][4516] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.369 [INFO][4516] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.371 [INFO][4516] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.371 [INFO][4516] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" host="localhost" Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.371 [INFO][4516] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636 Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.374 [INFO][4516] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" host="localhost" Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.376 [INFO][4516] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" host="localhost" Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.376 [INFO][4516] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" host="localhost" Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.376 [INFO][4516] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:44:47.403614 containerd[1687]: 2025-11-05 15:44:47.377 [INFO][4516] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" HandleID="k8s-pod-network.baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" Workload="localhost-k8s-calico--apiserver--5c8b78b8fb--6ngl7-eth0" Nov 5 15:44:47.409799 containerd[1687]: 2025-11-05 15:44:47.378 [INFO][4479] cni-plugin/k8s.go 418: Populated endpoint ContainerID="baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" Namespace="calico-apiserver" Pod="calico-apiserver-5c8b78b8fb-6ngl7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c8b78b8fb--6ngl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c8b78b8fb--6ngl7-eth0", GenerateName:"calico-apiserver-5c8b78b8fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"924e02ba-63d7-442f-ae24-772364097f08", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c8b78b8fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c8b78b8fb-6ngl7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90015b8ce22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:44:47.409799 containerd[1687]: 2025-11-05 15:44:47.378 [INFO][4479] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" Namespace="calico-apiserver" Pod="calico-apiserver-5c8b78b8fb-6ngl7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c8b78b8fb--6ngl7-eth0" Nov 5 15:44:47.409799 containerd[1687]: 2025-11-05 15:44:47.378 [INFO][4479] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90015b8ce22 ContainerID="baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" Namespace="calico-apiserver" Pod="calico-apiserver-5c8b78b8fb-6ngl7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c8b78b8fb--6ngl7-eth0" Nov 5 15:44:47.409799 containerd[1687]: 2025-11-05 15:44:47.384 [INFO][4479] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" Namespace="calico-apiserver" Pod="calico-apiserver-5c8b78b8fb-6ngl7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c8b78b8fb--6ngl7-eth0" Nov 5 15:44:47.409799 containerd[1687]: 2025-11-05 15:44:47.384 [INFO][4479] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" Namespace="calico-apiserver" Pod="calico-apiserver-5c8b78b8fb-6ngl7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c8b78b8fb--6ngl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c8b78b8fb--6ngl7-eth0", GenerateName:"calico-apiserver-5c8b78b8fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"924e02ba-63d7-442f-ae24-772364097f08", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c8b78b8fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636", Pod:"calico-apiserver-5c8b78b8fb-6ngl7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90015b8ce22", MAC:"ca:6e:87:4d:a6:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:44:47.409799 containerd[1687]: 2025-11-05 15:44:47.401 [INFO][4479] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" Namespace="calico-apiserver" Pod="calico-apiserver-5c8b78b8fb-6ngl7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c8b78b8fb--6ngl7-eth0" Nov 5 15:44:47.456595 containerd[1687]: time="2025-11-05T15:44:47.456352766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c8b78b8fb-cphp6,Uid:2e5e65c4-fa11-4b09-8e02-96b75e14b836,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"41005b381129256a5b6a2becff4b3bfb1313ac8ff6044572b75465f29489a339\"" Nov 5 15:44:47.457833 containerd[1687]: time="2025-11-05T15:44:47.457676914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:44:47.506687 containerd[1687]: time="2025-11-05T15:44:47.506661700Z" level=info msg="connecting to shim baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636" address="unix:///run/containerd/s/c9a322f7e89b77ca2b30043419ec200b8f1e9e806711c76064a1ea61a1685645" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:44:47.516888 systemd-networkd[1585]: calia1953b96524: Link UP Nov 5 15:44:47.517012 systemd-networkd[1585]: calia1953b96524: Gained carrier Nov 5 15:44:47.535235 systemd[1]: Started cri-containerd-baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636.scope - libcontainer container baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636. 
Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.106 [INFO][4471] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5f57597689--7pcp2-eth0 calico-kube-controllers-5f57597689- calico-system 6d011c6a-b4ab-4b64-b7bd-117fed5a2af3 861 0 2025-11-05 15:44:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f57597689 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5f57597689-7pcp2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia1953b96524 [] [] }} ContainerID="73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" Namespace="calico-system" Pod="calico-kube-controllers-5f57597689-7pcp2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f57597689--7pcp2-" Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.106 [INFO][4471] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" Namespace="calico-system" Pod="calico-kube-controllers-5f57597689-7pcp2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f57597689--7pcp2-eth0" Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.257 [INFO][4514] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" HandleID="k8s-pod-network.73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" Workload="localhost-k8s-calico--kube--controllers--5f57597689--7pcp2-eth0" Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.257 [INFO][4514] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" HandleID="k8s-pod-network.73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" Workload="localhost-k8s-calico--kube--controllers--5f57597689--7pcp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bd5e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5f57597689-7pcp2", "timestamp":"2025-11-05 15:44:47.257300494 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.257 [INFO][4514] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.377 [INFO][4514] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.377 [INFO][4514] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.462 [INFO][4514] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" host="localhost" Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.470 [INFO][4514] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.473 [INFO][4514] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.476 [INFO][4514] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.478 [INFO][4514] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.478 [INFO][4514] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" host="localhost" Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.479 [INFO][4514] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805 Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.490 [INFO][4514] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" host="localhost" Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.512 [INFO][4514] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" host="localhost" Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.512 [INFO][4514] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" host="localhost" Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.512 [INFO][4514] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:44:47.541768 containerd[1687]: 2025-11-05 15:44:47.512 [INFO][4514] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" HandleID="k8s-pod-network.73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" Workload="localhost-k8s-calico--kube--controllers--5f57597689--7pcp2-eth0" Nov 5 15:44:47.542247 containerd[1687]: 2025-11-05 15:44:47.514 [INFO][4471] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" Namespace="calico-system" Pod="calico-kube-controllers-5f57597689-7pcp2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f57597689--7pcp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f57597689--7pcp2-eth0", GenerateName:"calico-kube-controllers-5f57597689-", Namespace:"calico-system", SelfLink:"", UID:"6d011c6a-b4ab-4b64-b7bd-117fed5a2af3", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 44, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f57597689", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5f57597689-7pcp2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1953b96524", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:44:47.542247 containerd[1687]: 2025-11-05 15:44:47.514 [INFO][4471] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" Namespace="calico-system" Pod="calico-kube-controllers-5f57597689-7pcp2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f57597689--7pcp2-eth0" Nov 5 15:44:47.542247 containerd[1687]: 2025-11-05 15:44:47.514 [INFO][4471] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1953b96524 ContainerID="73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" Namespace="calico-system" Pod="calico-kube-controllers-5f57597689-7pcp2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f57597689--7pcp2-eth0" Nov 5 15:44:47.542247 containerd[1687]: 2025-11-05 15:44:47.516 [INFO][4471] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" Namespace="calico-system" Pod="calico-kube-controllers-5f57597689-7pcp2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f57597689--7pcp2-eth0" Nov 5 15:44:47.542247 containerd[1687]: 2025-11-05 15:44:47.517 [INFO][4471] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" Namespace="calico-system" Pod="calico-kube-controllers-5f57597689-7pcp2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f57597689--7pcp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f57597689--7pcp2-eth0", GenerateName:"calico-kube-controllers-5f57597689-", Namespace:"calico-system", SelfLink:"", UID:"6d011c6a-b4ab-4b64-b7bd-117fed5a2af3", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 44, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f57597689", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805", Pod:"calico-kube-controllers-5f57597689-7pcp2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1953b96524", MAC:"76:ba:4a:09:d1:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:44:47.542247 containerd[1687]: 2025-11-05 15:44:47.539 [INFO][4471] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" Namespace="calico-system" Pod="calico-kube-controllers-5f57597689-7pcp2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f57597689--7pcp2-eth0" Nov 5 15:44:47.574871 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or 
address Nov 5 15:44:47.622527 containerd[1687]: time="2025-11-05T15:44:47.622489493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c8b78b8fb-6ngl7,Uid:924e02ba-63d7-442f-ae24-772364097f08,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"baeddf244fb790d75991b3e24c37f11ff29e68ee4af2556e8c46193aea80d636\"" Nov 5 15:44:47.632615 containerd[1687]: time="2025-11-05T15:44:47.632583379Z" level=info msg="connecting to shim 73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805" address="unix:///run/containerd/s/257ad2f3767cfae4c926fa68f6581557f0999ad64e7f69dfd479612f85f90019" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:44:47.650062 systemd[1]: Started cri-containerd-73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805.scope - libcontainer container 73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805. Nov 5 15:44:47.658737 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:44:47.688647 containerd[1687]: time="2025-11-05T15:44:47.688586228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f57597689-7pcp2,Uid:6d011c6a-b4ab-4b64-b7bd-117fed5a2af3,Namespace:calico-system,Attempt:0,} returns sandbox id \"73b4e0539883c7b8250fb11967c8088173e25b8fea9f7a30cb54b5b6e8908805\"" Nov 5 15:44:47.823510 containerd[1687]: time="2025-11-05T15:44:47.823472037Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:44:47.823840 containerd[1687]: time="2025-11-05T15:44:47.823815712Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:44:47.823925 containerd[1687]: 
time="2025-11-05T15:44:47.823872686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:44:47.824109 kubelet[3004]: E1105 15:44:47.824072 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:44:47.824464 kubelet[3004]: E1105 15:44:47.824116 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:44:47.824464 kubelet[3004]: E1105 15:44:47.824303 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rg7kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c8b78b8fb-cphp6_calico-apiserver(2e5e65c4-fa11-4b09-8e02-96b75e14b836): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:44:47.825781 kubelet[3004]: E1105 15:44:47.825476 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-cphp6" podUID="2e5e65c4-fa11-4b09-8e02-96b75e14b836" Nov 5 15:44:47.825884 containerd[1687]: time="2025-11-05T15:44:47.824688611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:44:48.193928 containerd[1687]: time="2025-11-05T15:44:48.193830775Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:44:48.199564 containerd[1687]: time="2025-11-05T15:44:48.199535953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:44:48.199628 containerd[1687]: time="2025-11-05T15:44:48.199594048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:44:48.199745 kubelet[3004]: E1105 15:44:48.199711 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:44:48.199783 kubelet[3004]: E1105 15:44:48.199754 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:44:48.199936 kubelet[3004]: E1105 15:44:48.199900 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rl94j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c8b78b8fb-6ngl7_calico-apiserver(924e02ba-63d7-442f-ae24-772364097f08): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:44:48.200415 containerd[1687]: time="2025-11-05T15:44:48.200224158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:44:48.201932 kubelet[3004]: E1105 15:44:48.201899 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-6ngl7" podUID="924e02ba-63d7-442f-ae24-772364097f08" Nov 5 15:44:48.263024 kubelet[3004]: E1105 
15:44:48.262990 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-6ngl7" podUID="924e02ba-63d7-442f-ae24-772364097f08" Nov 5 15:44:48.264128 kubelet[3004]: E1105 15:44:48.264059 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-cphp6" podUID="2e5e65c4-fa11-4b09-8e02-96b75e14b836" Nov 5 15:44:48.552253 containerd[1687]: time="2025-11-05T15:44:48.552202974Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:44:48.552575 containerd[1687]: time="2025-11-05T15:44:48.552541651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:44:48.553331 containerd[1687]: time="2025-11-05T15:44:48.552599261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, 
bytes read=85" Nov 5 15:44:48.553377 kubelet[3004]: E1105 15:44:48.552727 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:44:48.553377 kubelet[3004]: E1105 15:44:48.552764 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:44:48.553377 kubelet[3004]: E1105 15:44:48.553197 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwt25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f57597689-7pcp2_calico-system(6d011c6a-b4ab-4b64-b7bd-117fed5a2af3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:44:48.554427 kubelet[3004]: E1105 15:44:48.554385 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f57597689-7pcp2" podUID="6d011c6a-b4ab-4b64-b7bd-117fed5a2af3" Nov 5 15:44:48.603096 systemd-networkd[1585]: calib176338bab3: Gained IPv6LL Nov 5 15:44:48.923084 systemd-networkd[1585]: calia1953b96524: Gained IPv6LL Nov 5 15:44:49.041090 containerd[1687]: 
time="2025-11-05T15:44:49.040788606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pwv49,Uid:aa307e49-5503-4739-ace7-169707e5fd38,Namespace:calico-system,Attempt:0,}" Nov 5 15:44:49.041090 containerd[1687]: time="2025-11-05T15:44:49.040848612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vd7mt,Uid:8eefe537-f46b-421d-a847-6a36d2b266d7,Namespace:calico-system,Attempt:0,}" Nov 5 15:44:49.041090 containerd[1687]: time="2025-11-05T15:44:49.040792893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p8vf5,Uid:68de4903-f6e5-45c3-b76e-09034eb6e62e,Namespace:kube-system,Attempt:0,}" Nov 5 15:44:49.115115 systemd-networkd[1585]: cali90015b8ce22: Gained IPv6LL Nov 5 15:44:49.188523 systemd-networkd[1585]: cali513dc9bbdd1: Link UP Nov 5 15:44:49.194354 systemd-networkd[1585]: cali513dc9bbdd1: Gained carrier Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.102 [INFO][4712] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pwv49-eth0 csi-node-driver- calico-system aa307e49-5503-4739-ace7-169707e5fd38 741 0 2025-11-05 15:44:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pwv49 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali513dc9bbdd1 [] [] }} ContainerID="aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" Namespace="calico-system" Pod="csi-node-driver-pwv49" WorkloadEndpoint="localhost-k8s-csi--node--driver--pwv49-" Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.102 [INFO][4712] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" Namespace="calico-system" Pod="csi-node-driver-pwv49" WorkloadEndpoint="localhost-k8s-csi--node--driver--pwv49-eth0" Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.137 [INFO][4750] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" HandleID="k8s-pod-network.aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" Workload="localhost-k8s-csi--node--driver--pwv49-eth0" Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.137 [INFO][4750] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" HandleID="k8s-pod-network.aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" Workload="localhost-k8s-csi--node--driver--pwv49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pwv49", "timestamp":"2025-11-05 15:44:49.137055475 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.137 [INFO][4750] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.137 [INFO][4750] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.137 [INFO][4750] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.145 [INFO][4750] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" host="localhost" Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.148 [INFO][4750] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.150 [INFO][4750] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.151 [INFO][4750] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.154 [INFO][4750] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.154 [INFO][4750] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" host="localhost" Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.154 [INFO][4750] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7 Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.159 [INFO][4750] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" host="localhost" Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.183 [INFO][4750] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" host="localhost" Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.183 [INFO][4750] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" host="localhost" Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.183 [INFO][4750] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:44:49.212222 containerd[1687]: 2025-11-05 15:44:49.183 [INFO][4750] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" HandleID="k8s-pod-network.aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" Workload="localhost-k8s-csi--node--driver--pwv49-eth0" Nov 5 15:44:49.213010 containerd[1687]: 2025-11-05 15:44:49.185 [INFO][4712] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" Namespace="calico-system" Pod="csi-node-driver-pwv49" WorkloadEndpoint="localhost-k8s-csi--node--driver--pwv49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pwv49-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aa307e49-5503-4739-ace7-169707e5fd38", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pwv49", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali513dc9bbdd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:44:49.213010 containerd[1687]: 2025-11-05 15:44:49.185 [INFO][4712] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" Namespace="calico-system" Pod="csi-node-driver-pwv49" WorkloadEndpoint="localhost-k8s-csi--node--driver--pwv49-eth0" Nov 5 15:44:49.213010 containerd[1687]: 2025-11-05 15:44:49.185 [INFO][4712] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali513dc9bbdd1 ContainerID="aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" Namespace="calico-system" Pod="csi-node-driver-pwv49" WorkloadEndpoint="localhost-k8s-csi--node--driver--pwv49-eth0" Nov 5 15:44:49.213010 containerd[1687]: 2025-11-05 15:44:49.194 [INFO][4712] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" Namespace="calico-system" Pod="csi-node-driver-pwv49" WorkloadEndpoint="localhost-k8s-csi--node--driver--pwv49-eth0" Nov 5 15:44:49.213010 containerd[1687]: 2025-11-05 15:44:49.195 [INFO][4712] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" 
Namespace="calico-system" Pod="csi-node-driver-pwv49" WorkloadEndpoint="localhost-k8s-csi--node--driver--pwv49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pwv49-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aa307e49-5503-4739-ace7-169707e5fd38", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7", Pod:"csi-node-driver-pwv49", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali513dc9bbdd1", MAC:"86:f0:c6:9f:5a:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:44:49.213010 containerd[1687]: 2025-11-05 15:44:49.209 [INFO][4712] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" Namespace="calico-system" Pod="csi-node-driver-pwv49" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--pwv49-eth0" Nov 5 15:44:49.266683 kubelet[3004]: E1105 15:44:49.266641 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-6ngl7" podUID="924e02ba-63d7-442f-ae24-772364097f08" Nov 5 15:44:49.267211 kubelet[3004]: E1105 15:44:49.266793 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-cphp6" podUID="2e5e65c4-fa11-4b09-8e02-96b75e14b836" Nov 5 15:44:49.267211 kubelet[3004]: E1105 15:44:49.266918 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f57597689-7pcp2" 
podUID="6d011c6a-b4ab-4b64-b7bd-117fed5a2af3" Nov 5 15:44:49.284192 systemd-networkd[1585]: cali6eb82ec7637: Link UP Nov 5 15:44:49.284419 systemd-networkd[1585]: cali6eb82ec7637: Gained carrier Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.103 [INFO][4721] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--p8vf5-eth0 coredns-674b8bbfcf- kube-system 68de4903-f6e5-45c3-b76e-09034eb6e62e 855 0 2025-11-05 15:44:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-p8vf5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6eb82ec7637 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" Namespace="kube-system" Pod="coredns-674b8bbfcf-p8vf5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p8vf5-" Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.103 [INFO][4721] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" Namespace="kube-system" Pod="coredns-674b8bbfcf-p8vf5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p8vf5-eth0" Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.144 [INFO][4755] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" HandleID="k8s-pod-network.ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" Workload="localhost-k8s-coredns--674b8bbfcf--p8vf5-eth0" Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.145 [INFO][4755] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" 
HandleID="k8s-pod-network.ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" Workload="localhost-k8s-coredns--674b8bbfcf--p8vf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-p8vf5", "timestamp":"2025-11-05 15:44:49.144076261 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.145 [INFO][4755] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.183 [INFO][4755] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.183 [INFO][4755] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.246 [INFO][4755] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" host="localhost" Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.249 [INFO][4755] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.252 [INFO][4755] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.252 [INFO][4755] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.254 [INFO][4755] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.254 [INFO][4755] 
ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" host="localhost" Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.255 [INFO][4755] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9 Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.263 [INFO][4755] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" host="localhost" Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.278 [INFO][4755] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" host="localhost" Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.280 [INFO][4755] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" host="localhost" Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.280 [INFO][4755] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:44:49.306863 containerd[1687]: 2025-11-05 15:44:49.280 [INFO][4755] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" HandleID="k8s-pod-network.ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" Workload="localhost-k8s-coredns--674b8bbfcf--p8vf5-eth0" Nov 5 15:44:49.316333 containerd[1687]: 2025-11-05 15:44:49.282 [INFO][4721] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" Namespace="kube-system" Pod="coredns-674b8bbfcf-p8vf5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p8vf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--p8vf5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"68de4903-f6e5-45c3-b76e-09034eb6e62e", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 44, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-p8vf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6eb82ec7637", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:44:49.316333 containerd[1687]: 2025-11-05 15:44:49.282 [INFO][4721] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" Namespace="kube-system" Pod="coredns-674b8bbfcf-p8vf5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p8vf5-eth0" Nov 5 15:44:49.316333 containerd[1687]: 2025-11-05 15:44:49.282 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6eb82ec7637 ContainerID="ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" Namespace="kube-system" Pod="coredns-674b8bbfcf-p8vf5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p8vf5-eth0" Nov 5 15:44:49.316333 containerd[1687]: 2025-11-05 15:44:49.284 [INFO][4721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" Namespace="kube-system" Pod="coredns-674b8bbfcf-p8vf5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p8vf5-eth0" Nov 5 15:44:49.316333 containerd[1687]: 2025-11-05 15:44:49.284 [INFO][4721] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" Namespace="kube-system" Pod="coredns-674b8bbfcf-p8vf5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p8vf5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--p8vf5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"68de4903-f6e5-45c3-b76e-09034eb6e62e", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 44, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9", Pod:"coredns-674b8bbfcf-p8vf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6eb82ec7637", MAC:"ae:9c:02:a0:d4:94", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:44:49.316333 containerd[1687]: 2025-11-05 15:44:49.300 [INFO][4721] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" Namespace="kube-system" Pod="coredns-674b8bbfcf-p8vf5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--p8vf5-eth0" Nov 5 15:44:49.426362 systemd-networkd[1585]: cali27e67228a9c: Link UP Nov 5 15:44:49.427162 systemd-networkd[1585]: cali27e67228a9c: Gained carrier Nov 5 15:44:49.431373 containerd[1687]: time="2025-11-05T15:44:49.431345905Z" level=info msg="connecting to shim ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9" address="unix:///run/containerd/s/e0734cec7140c722acf42bd222d719d5ff2e1734660ff7df03d2ce5720d69fd5" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.083 [INFO][4711] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--vd7mt-eth0 goldmane-666569f655- calico-system 8eefe537-f46b-421d-a847-6a36d2b266d7 858 0 2025-11-05 15:44:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-vd7mt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali27e67228a9c [] [] }} ContainerID="4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" Namespace="calico-system" Pod="goldmane-666569f655-vd7mt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vd7mt-" Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.083 [INFO][4711] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" Namespace="calico-system" Pod="goldmane-666569f655-vd7mt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vd7mt-eth0" Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.146 [INFO][4742] ipam/ipam_plugin.go 227: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" HandleID="k8s-pod-network.4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" Workload="localhost-k8s-goldmane--666569f655--vd7mt-eth0" Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.147 [INFO][4742] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" HandleID="k8s-pod-network.4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" Workload="localhost-k8s-goldmane--666569f655--vd7mt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-vd7mt", "timestamp":"2025-11-05 15:44:49.146872432 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.147 [INFO][4742] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.280 [INFO][4742] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.280 [INFO][4742] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.347 [INFO][4742] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" host="localhost" Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.350 [INFO][4742] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.354 [INFO][4742] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.376 [INFO][4742] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.378 [INFO][4742] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.378 [INFO][4742] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" host="localhost" Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.379 [INFO][4742] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.389 [INFO][4742] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" host="localhost" Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.413 [INFO][4742] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" host="localhost" Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.420 [INFO][4742] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" host="localhost" Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.420 [INFO][4742] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:44:49.452585 containerd[1687]: 2025-11-05 15:44:49.420 [INFO][4742] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" HandleID="k8s-pod-network.4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" Workload="localhost-k8s-goldmane--666569f655--vd7mt-eth0" Nov 5 15:44:49.457011 containerd[1687]: 2025-11-05 15:44:49.422 [INFO][4711] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" Namespace="calico-system" Pod="goldmane-666569f655-vd7mt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vd7mt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vd7mt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8eefe537-f46b-421d-a847-6a36d2b266d7", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-vd7mt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali27e67228a9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:44:49.457011 containerd[1687]: 2025-11-05 15:44:49.422 [INFO][4711] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" Namespace="calico-system" Pod="goldmane-666569f655-vd7mt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vd7mt-eth0" Nov 5 15:44:49.457011 containerd[1687]: 2025-11-05 15:44:49.422 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27e67228a9c ContainerID="4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" Namespace="calico-system" Pod="goldmane-666569f655-vd7mt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vd7mt-eth0" Nov 5 15:44:49.457011 containerd[1687]: 2025-11-05 15:44:49.428 [INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" Namespace="calico-system" Pod="goldmane-666569f655-vd7mt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vd7mt-eth0" Nov 5 15:44:49.457011 containerd[1687]: 2025-11-05 15:44:49.428 [INFO][4711] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" Namespace="calico-system" Pod="goldmane-666569f655-vd7mt" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vd7mt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vd7mt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8eefe537-f46b-421d-a847-6a36d2b266d7", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea", Pod:"goldmane-666569f655-vd7mt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali27e67228a9c", MAC:"0e:a4:7c:97:19:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:44:49.457011 containerd[1687]: 2025-11-05 15:44:49.440 [INFO][4711] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" Namespace="calico-system" Pod="goldmane-666569f655-vd7mt" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vd7mt-eth0" Nov 5 15:44:49.465133 containerd[1687]: time="2025-11-05T15:44:49.464063574Z" level=info msg="connecting to shim 
aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7" address="unix:///run/containerd/s/5f27c9425d2fb08a14cae509bac125623018f246e39c159dce1a2c1edfc4a2a0" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:44:49.478368 systemd[1]: Started cri-containerd-ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9.scope - libcontainer container ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9. Nov 5 15:44:49.479790 containerd[1687]: time="2025-11-05T15:44:49.479747538Z" level=info msg="connecting to shim 4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea" address="unix:///run/containerd/s/f4678403483fd6352515bd8972170c19edb3e7bf761d887c3100d7fe3dec9a2f" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:44:49.494621 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:44:49.500072 systemd[1]: Started cri-containerd-aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7.scope - libcontainer container aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7. Nov 5 15:44:49.505852 systemd[1]: Started cri-containerd-4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea.scope - libcontainer container 4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea. 
Nov 5 15:44:49.523784 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:44:49.525714 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:44:49.533650 containerd[1687]: time="2025-11-05T15:44:49.533530318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p8vf5,Uid:68de4903-f6e5-45c3-b76e-09034eb6e62e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9\"" Nov 5 15:44:49.540604 containerd[1687]: time="2025-11-05T15:44:49.540575991Z" level=info msg="CreateContainer within sandbox \"ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:44:49.545911 containerd[1687]: time="2025-11-05T15:44:49.545865218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pwv49,Uid:aa307e49-5503-4739-ace7-169707e5fd38,Namespace:calico-system,Attempt:0,} returns sandbox id \"aba2b36c5fa358e7ddefbaf30467737b0e4160cf0ea05d923cb2bbf95f7725b7\"" Nov 5 15:44:49.547141 containerd[1687]: time="2025-11-05T15:44:49.547127680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:44:49.548036 containerd[1687]: time="2025-11-05T15:44:49.548019505Z" level=info msg="Container 4f127f653647954458907705046f6edbedae8756a0fc080e8d5de31746656c7c: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:44:49.551863 containerd[1687]: time="2025-11-05T15:44:49.551757726Z" level=info msg="CreateContainer within sandbox \"ac3aa84254d0cec914a96db5c00056c5ba7c2a070356ac64cacbe720d2eac8e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f127f653647954458907705046f6edbedae8756a0fc080e8d5de31746656c7c\"" Nov 5 15:44:49.565899 containerd[1687]: time="2025-11-05T15:44:49.565498032Z" level=info msg="StartContainer for 
\"4f127f653647954458907705046f6edbedae8756a0fc080e8d5de31746656c7c\"" Nov 5 15:44:49.568298 containerd[1687]: time="2025-11-05T15:44:49.568274567Z" level=info msg="connecting to shim 4f127f653647954458907705046f6edbedae8756a0fc080e8d5de31746656c7c" address="unix:///run/containerd/s/e0734cec7140c722acf42bd222d719d5ff2e1734660ff7df03d2ce5720d69fd5" protocol=ttrpc version=3 Nov 5 15:44:49.572997 containerd[1687]: time="2025-11-05T15:44:49.572978328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vd7mt,Uid:8eefe537-f46b-421d-a847-6a36d2b266d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"4f2473a8bc0ca4a987ac9df35f98eabe66147f23f776236daf946a6187bd2bea\"" Nov 5 15:44:49.590067 systemd[1]: Started cri-containerd-4f127f653647954458907705046f6edbedae8756a0fc080e8d5de31746656c7c.scope - libcontainer container 4f127f653647954458907705046f6edbedae8756a0fc080e8d5de31746656c7c. Nov 5 15:44:49.615089 containerd[1687]: time="2025-11-05T15:44:49.615066033Z" level=info msg="StartContainer for \"4f127f653647954458907705046f6edbedae8756a0fc080e8d5de31746656c7c\" returns successfully" Nov 5 15:44:49.901970 containerd[1687]: time="2025-11-05T15:44:49.901895911Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:44:49.908846 containerd[1687]: time="2025-11-05T15:44:49.908770221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:44:49.908846 containerd[1687]: time="2025-11-05T15:44:49.908830250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:44:49.909029 kubelet[3004]: E1105 15:44:49.908992 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:44:49.909093 kubelet[3004]: E1105 15:44:49.909036 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:44:49.909423 containerd[1687]: time="2025-11-05T15:44:49.909368473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:44:49.917111 kubelet[3004]: E1105 15:44:49.917059 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv9hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOn
ly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pwv49_calico-system(aa307e49-5503-4739-ace7-169707e5fd38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:44:50.267057 systemd-networkd[1585]: cali513dc9bbdd1: Gained IPv6LL Nov 5 15:44:50.287414 kubelet[3004]: I1105 15:44:50.287362 3004 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-p8vf5" podStartSLOduration=46.287346835 podStartE2EDuration="46.287346835s" podCreationTimestamp="2025-11-05 15:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:44:50.278769826 +0000 UTC m=+53.326679903" watchObservedRunningTime="2025-11-05 15:44:50.287346835 +0000 UTC m=+53.335256918" Nov 5 15:44:50.326099 containerd[1687]: time="2025-11-05T15:44:50.326063120Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:44:50.326431 containerd[1687]: time="2025-11-05T15:44:50.326413099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:44:50.326483 containerd[1687]: time="2025-11-05T15:44:50.326469099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:44:50.326611 kubelet[3004]: E1105 15:44:50.326581 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:44:50.326648 kubelet[3004]: E1105 15:44:50.326614 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:44:50.326815 kubelet[3004]: E1105 15:44:50.326769 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pkcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vd7mt_calico-system(8eefe537-f46b-421d-a847-6a36d2b266d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:44:50.327001 containerd[1687]: time="2025-11-05T15:44:50.326898507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:44:50.328697 kubelet[3004]: E1105 15:44:50.328153 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vd7mt" podUID="8eefe537-f46b-421d-a847-6a36d2b266d7" Nov 5 15:44:50.523201 systemd-networkd[1585]: cali27e67228a9c: Gained IPv6LL Nov 5 15:44:50.685076 
containerd[1687]: time="2025-11-05T15:44:50.685032390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:44:50.688545 containerd[1687]: time="2025-11-05T15:44:50.688474591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:44:50.688601 containerd[1687]: time="2025-11-05T15:44:50.688545710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:44:50.688679 kubelet[3004]: E1105 15:44:50.688649 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:44:50.688864 kubelet[3004]: E1105 15:44:50.688686 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:44:50.690037 kubelet[3004]: E1105 15:44:50.689986 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv9hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pwv49_calico-system(aa307e49-5503-4739-ace7-169707e5fd38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:44:50.691595 kubelet[3004]: E1105 15:44:50.691550 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pwv49" podUID="aa307e49-5503-4739-ace7-169707e5fd38" Nov 5 15:44:50.779061 systemd-networkd[1585]: cali6eb82ec7637: Gained IPv6LL Nov 5 15:44:51.271932 kubelet[3004]: E1105 15:44:51.271778 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vd7mt" podUID="8eefe537-f46b-421d-a847-6a36d2b266d7" Nov 5 15:44:51.273317 kubelet[3004]: E1105 15:44:51.273257 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pwv49" podUID="aa307e49-5503-4739-ace7-169707e5fd38" Nov 5 15:44:59.040771 containerd[1687]: time="2025-11-05T15:44:59.040741657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:44:59.414635 containerd[1687]: time="2025-11-05T15:44:59.414555014Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:44:59.414935 containerd[1687]: time="2025-11-05T15:44:59.414900745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:44:59.415228 containerd[1687]: time="2025-11-05T15:44:59.414961151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:44:59.415262 kubelet[3004]: E1105 15:44:59.415067 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:44:59.415262 kubelet[3004]: E1105 15:44:59.415107 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:44:59.415262 kubelet[3004]: E1105 15:44:59.415186 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f8d2740034654cd6baf7021a15760839,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pp2dq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},S
tartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b58b895f4-4cnd5_calico-system(c9958fac-31c8-4b49-8704-7a3e667cd144): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:44:59.416938 containerd[1687]: time="2025-11-05T15:44:59.416899715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:44:59.838632 containerd[1687]: time="2025-11-05T15:44:59.838458331Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:44:59.838904 containerd[1687]: time="2025-11-05T15:44:59.838881557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:44:59.839042 containerd[1687]: time="2025-11-05T15:44:59.838938421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:44:59.839131 kubelet[3004]: E1105 15:44:59.839085 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:44:59.839184 kubelet[3004]: E1105 15:44:59.839136 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:44:59.839256 kubelet[3004]: E1105 15:44:59.839226 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pp2dq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b58b895f4-4cnd5_calico-system(c9958fac-31c8-4b49-8704-7a3e667cd144): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:44:59.840972 kubelet[3004]: E1105 15:44:59.840439 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b58b895f4-4cnd5" podUID="c9958fac-31c8-4b49-8704-7a3e667cd144" Nov 5 15:45:00.040594 containerd[1687]: time="2025-11-05T15:45:00.040545445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:45:00.491984 containerd[1687]: time="2025-11-05T15:45:00.491908933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:45:00.492344 containerd[1687]: time="2025-11-05T15:45:00.492308243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:45:00.492401 containerd[1687]: time="2025-11-05T15:45:00.492386527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:45:00.492520 kubelet[3004]: E1105 15:45:00.492493 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:45:00.492648 kubelet[3004]: E1105 15:45:00.492529 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:45:00.492648 kubelet[3004]: E1105 15:45:00.492612 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwt25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f57597689-7pcp2_calico-system(6d011c6a-b4ab-4b64-b7bd-117fed5a2af3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:45:00.494426 kubelet[3004]: E1105 15:45:00.494394 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f57597689-7pcp2" podUID="6d011c6a-b4ab-4b64-b7bd-117fed5a2af3" Nov 5 15:45:03.041474 containerd[1687]: time="2025-11-05T15:45:03.041208462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:45:03.417110 containerd[1687]: 
time="2025-11-05T15:45:03.417034612Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:45:03.417359 containerd[1687]: time="2025-11-05T15:45:03.417334623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:45:03.417423 containerd[1687]: time="2025-11-05T15:45:03.417394741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:45:03.417527 kubelet[3004]: E1105 15:45:03.417499 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:45:03.417698 kubelet[3004]: E1105 15:45:03.417532 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:45:03.417698 kubelet[3004]: E1105 15:45:03.417615 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rg7kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c8b78b8fb-cphp6_calico-apiserver(2e5e65c4-fa11-4b09-8e02-96b75e14b836): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:45:03.418899 kubelet[3004]: E1105 15:45:03.418866 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-cphp6" podUID="2e5e65c4-fa11-4b09-8e02-96b75e14b836" Nov 5 15:45:05.041582 containerd[1687]: time="2025-11-05T15:45:05.041561529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:45:05.417766 containerd[1687]: time="2025-11-05T15:45:05.417687036Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:45:05.418129 containerd[1687]: time="2025-11-05T15:45:05.418100359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:45:05.418209 containerd[1687]: time="2025-11-05T15:45:05.418103803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:45:05.418266 kubelet[3004]: E1105 15:45:05.418235 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:45:05.418428 kubelet[3004]: E1105 15:45:05.418274 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:45:05.418587 containerd[1687]: time="2025-11-05T15:45:05.418575459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:45:05.419623 kubelet[3004]: E1105 15:45:05.419513 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv9hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabili
ties:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pwv49_calico-system(aa307e49-5503-4739-ace7-169707e5fd38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:45:05.759686 containerd[1687]: time="2025-11-05T15:45:05.759644550Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:45:05.760052 containerd[1687]: time="2025-11-05T15:45:05.759998420Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:45:05.760052 containerd[1687]: time="2025-11-05T15:45:05.760042008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:45:05.760220 kubelet[3004]: E1105 15:45:05.760201 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:45:05.760282 kubelet[3004]: E1105 15:45:05.760274 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:45:05.760976 kubelet[3004]: E1105 15:45:05.760583 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rl94j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c8b78b8fb-6ngl7_calico-apiserver(924e02ba-63d7-442f-ae24-772364097f08): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:45:05.761049 containerd[1687]: time="2025-11-05T15:45:05.760651986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:45:05.761755 kubelet[3004]: E1105 15:45:05.761687 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-6ngl7" podUID="924e02ba-63d7-442f-ae24-772364097f08" Nov 5 15:45:06.195073 containerd[1687]: 
time="2025-11-05T15:45:06.194978568Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:45:06.195756 containerd[1687]: time="2025-11-05T15:45:06.195716174Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:45:06.195871 containerd[1687]: time="2025-11-05T15:45:06.195778694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:45:06.195926 kubelet[3004]: E1105 15:45:06.195901 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:45:06.196002 kubelet[3004]: E1105 15:45:06.195935 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:45:06.196129 kubelet[3004]: E1105 15:45:06.196100 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv9hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pwv49_calico-system(aa307e49-5503-4739-ace7-169707e5fd38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:45:06.196751 containerd[1687]: time="2025-11-05T15:45:06.196525983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:45:06.197741 kubelet[3004]: E1105 15:45:06.197582 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pwv49" podUID="aa307e49-5503-4739-ace7-169707e5fd38" Nov 5 15:45:06.584650 containerd[1687]: time="2025-11-05T15:45:06.584575878Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:45:06.585058 containerd[1687]: time="2025-11-05T15:45:06.585033368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:45:06.585148 containerd[1687]: time="2025-11-05T15:45:06.585095113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:45:06.585307 kubelet[3004]: E1105 15:45:06.585263 3004 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:45:06.585555 kubelet[3004]: E1105 15:45:06.585314 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:45:06.585555 kubelet[3004]: E1105 15:45:06.585407 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnl
y:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pkcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vd7mt_calico-system(8eefe537-f46b-421d-a847-6a36d2b266d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:45:06.586905 kubelet[3004]: E1105 15:45:06.586880 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vd7mt" podUID="8eefe537-f46b-421d-a847-6a36d2b266d7" Nov 5 15:45:12.042981 kubelet[3004]: E1105 15:45:12.041395 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b58b895f4-4cnd5" podUID="c9958fac-31c8-4b49-8704-7a3e667cd144" Nov 5 15:45:12.275396 containerd[1687]: time="2025-11-05T15:45:12.275365695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd1a844b27397d61eff7cfe5d11dc9922006c7b05e337864855037e5720ae325\" id:\"2fed39f80aaffac067435fd3e0072e68a3e46a8a831df3e67e141829518462f9\" pid:5015 exited_at:{seconds:1762357512 nanos:275147851}" Nov 5 15:45:15.041503 kubelet[3004]: E1105 15:45:15.041473 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f57597689-7pcp2" podUID="6d011c6a-b4ab-4b64-b7bd-117fed5a2af3" Nov 5 15:45:16.040639 kubelet[3004]: E1105 15:45:16.040609 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-cphp6" podUID="2e5e65c4-fa11-4b09-8e02-96b75e14b836" Nov 5 15:45:17.041142 kubelet[3004]: E1105 15:45:17.041059 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-6ngl7" podUID="924e02ba-63d7-442f-ae24-772364097f08" Nov 5 15:45:17.079099 systemd[1]: Started sshd@7-139.178.70.108:22-139.178.89.65:35666.service - OpenSSH per-connection server daemon (139.178.89.65:35666). 
Nov 5 15:45:17.461633 sshd[5031]: Accepted publickey for core from 139.178.89.65 port 35666 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:45:17.463632 sshd-session[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:45:17.467754 systemd-logind[1661]: New session 10 of user core. Nov 5 15:45:17.472073 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 15:45:18.042613 kubelet[3004]: E1105 15:45:18.041879 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vd7mt" podUID="8eefe537-f46b-421d-a847-6a36d2b266d7" Nov 5 15:45:18.043391 kubelet[3004]: E1105 15:45:18.043358 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pwv49" podUID="aa307e49-5503-4739-ace7-169707e5fd38" Nov 5 15:45:18.472330 sshd[5035]: Connection closed by 139.178.89.65 port 35666 Nov 5 15:45:18.472642 sshd-session[5031]: pam_unix(sshd:session): session closed for user core Nov 5 15:45:18.494209 systemd[1]: sshd@7-139.178.70.108:22-139.178.89.65:35666.service: Deactivated successfully. Nov 5 15:45:18.497657 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 15:45:18.498776 systemd-logind[1661]: Session 10 logged out. Waiting for processes to exit. Nov 5 15:45:18.500630 systemd-logind[1661]: Removed session 10. Nov 5 15:45:23.495385 systemd[1]: Started sshd@8-139.178.70.108:22-139.178.89.65:35682.service - OpenSSH per-connection server daemon (139.178.89.65:35682). Nov 5 15:45:23.592059 sshd[5058]: Accepted publickey for core from 139.178.89.65 port 35682 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:45:23.593319 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:45:23.598928 systemd-logind[1661]: New session 11 of user core. Nov 5 15:45:23.604074 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 15:45:23.816018 sshd[5061]: Connection closed by 139.178.89.65 port 35682 Nov 5 15:45:23.817474 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Nov 5 15:45:23.823721 systemd[1]: sshd@8-139.178.70.108:22-139.178.89.65:35682.service: Deactivated successfully. Nov 5 15:45:23.826400 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 15:45:23.828973 systemd-logind[1661]: Session 11 logged out. Waiting for processes to exit. Nov 5 15:45:23.829757 systemd-logind[1661]: Removed session 11. 
Nov 5 15:45:25.049244 containerd[1687]: time="2025-11-05T15:45:25.049067088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:45:25.471229 containerd[1687]: time="2025-11-05T15:45:25.471067789Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:45:25.471705 containerd[1687]: time="2025-11-05T15:45:25.471661818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:45:25.471736 containerd[1687]: time="2025-11-05T15:45:25.471721130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:45:25.471901 kubelet[3004]: E1105 15:45:25.471871 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:45:25.473490 kubelet[3004]: E1105 15:45:25.473469 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:45:25.476829 kubelet[3004]: E1105 15:45:25.476798 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f8d2740034654cd6baf7021a15760839,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pp2dq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b58b895f4-4cnd5_calico-system(c9958fac-31c8-4b49-8704-7a3e667cd144): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:45:25.478360 containerd[1687]: time="2025-11-05T15:45:25.478338723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 
15:45:25.865087 containerd[1687]: time="2025-11-05T15:45:25.865056544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:45:25.865607 containerd[1687]: time="2025-11-05T15:45:25.865534215Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:45:25.865607 containerd[1687]: time="2025-11-05T15:45:25.865591777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:45:25.865756 kubelet[3004]: E1105 15:45:25.865706 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:45:25.865826 kubelet[3004]: E1105 15:45:25.865763 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:45:25.865988 kubelet[3004]: E1105 15:45:25.865852 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pp2dq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b58b895f4-4cnd5_calico-system(c9958fac-31c8-4b49-8704-7a3e667cd144): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:45:25.867057 kubelet[3004]: E1105 15:45:25.867008 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b58b895f4-4cnd5" podUID="c9958fac-31c8-4b49-8704-7a3e667cd144" Nov 5 15:45:28.041011 containerd[1687]: time="2025-11-05T15:45:28.040896167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:45:28.378704 containerd[1687]: time="2025-11-05T15:45:28.378311656Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:45:28.378919 containerd[1687]: time="2025-11-05T15:45:28.378902244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:45:28.379033 containerd[1687]: time="2025-11-05T15:45:28.378968929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:45:28.379180 
kubelet[3004]: E1105 15:45:28.379149 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:45:28.379353 kubelet[3004]: E1105 15:45:28.379187 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:45:28.379353 kubelet[3004]: E1105 15:45:28.379305 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rl94j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c8b78b8fb-6ngl7_calico-apiserver(924e02ba-63d7-442f-ae24-772364097f08): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:45:28.381015 kubelet[3004]: E1105 15:45:28.380979 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-6ngl7" podUID="924e02ba-63d7-442f-ae24-772364097f08" Nov 5 15:45:28.830755 systemd[1]: Started sshd@9-139.178.70.108:22-139.178.89.65:47970.service - OpenSSH per-connection server daemon (139.178.89.65:47970). Nov 5 15:45:29.043642 containerd[1687]: time="2025-11-05T15:45:29.043619149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:45:29.155643 sshd[5076]: Accepted publickey for core from 139.178.89.65 port 47970 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:45:29.156265 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:45:29.160115 systemd-logind[1661]: New session 12 of user core. Nov 5 15:45:29.168065 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 15:45:29.296341 sshd[5079]: Connection closed by 139.178.89.65 port 47970 Nov 5 15:45:29.298159 sshd-session[5076]: pam_unix(sshd:session): session closed for user core Nov 5 15:45:29.303428 systemd[1]: sshd@9-139.178.70.108:22-139.178.89.65:47970.service: Deactivated successfully. Nov 5 15:45:29.306258 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 15:45:29.307349 systemd-logind[1661]: Session 12 logged out. Waiting for processes to exit. 
Nov 5 15:45:29.311254 systemd[1]: Started sshd@10-139.178.70.108:22-139.178.89.65:47974.service - OpenSSH per-connection server daemon (139.178.89.65:47974). Nov 5 15:45:29.315060 systemd-logind[1661]: Removed session 12. Nov 5 15:45:29.366627 sshd[5092]: Accepted publickey for core from 139.178.89.65 port 47974 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:45:29.367441 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:45:29.370245 systemd-logind[1661]: New session 13 of user core. Nov 5 15:45:29.375067 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 15:45:29.383466 containerd[1687]: time="2025-11-05T15:45:29.383441393Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:45:29.384784 containerd[1687]: time="2025-11-05T15:45:29.384767259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:45:29.384890 containerd[1687]: time="2025-11-05T15:45:29.384833262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:45:29.385029 kubelet[3004]: E1105 15:45:29.384997 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:45:29.385269 kubelet[3004]: E1105 15:45:29.385045 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:45:29.385269 kubelet[3004]: E1105 15:45:29.385219 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwt25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f57597689-7pcp2_calico-system(6d011c6a-b4ab-4b64-b7bd-117fed5a2af3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:45:29.386014 containerd[1687]: time="2025-11-05T15:45:29.385455670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:45:29.386519 kubelet[3004]: E1105 15:45:29.386481 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f57597689-7pcp2" podUID="6d011c6a-b4ab-4b64-b7bd-117fed5a2af3" Nov 5 15:45:29.515491 sshd[5095]: Connection closed by 139.178.89.65 port 47974 Nov 5 15:45:29.515866 sshd-session[5092]: pam_unix(sshd:session): session closed for user core Nov 5 15:45:29.523373 systemd[1]: sshd@10-139.178.70.108:22-139.178.89.65:47974.service: Deactivated successfully. Nov 5 15:45:29.524451 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 15:45:29.524983 systemd-logind[1661]: Session 13 logged out. Waiting for processes to exit. Nov 5 15:45:29.527685 systemd[1]: Started sshd@11-139.178.70.108:22-139.178.89.65:47980.service - OpenSSH per-connection server daemon (139.178.89.65:47980). Nov 5 15:45:29.528724 systemd-logind[1661]: Removed session 13. Nov 5 15:45:29.568066 sshd[5106]: Accepted publickey for core from 139.178.89.65 port 47980 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:45:29.569791 sshd-session[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:45:29.574347 systemd-logind[1661]: New session 14 of user core. Nov 5 15:45:29.581102 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 15:45:29.706306 sshd[5109]: Connection closed by 139.178.89.65 port 47980 Nov 5 15:45:29.706728 sshd-session[5106]: pam_unix(sshd:session): session closed for user core Nov 5 15:45:29.708895 systemd[1]: sshd@11-139.178.70.108:22-139.178.89.65:47980.service: Deactivated successfully. Nov 5 15:45:29.710142 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 15:45:29.711031 systemd-logind[1661]: Session 14 logged out. Waiting for processes to exit. Nov 5 15:45:29.711762 systemd-logind[1661]: Removed session 14. 
Nov 5 15:45:29.852416 containerd[1687]: time="2025-11-05T15:45:29.852336364Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:45:29.856472 containerd[1687]: time="2025-11-05T15:45:29.856430317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:45:29.856655 containerd[1687]: time="2025-11-05T15:45:29.856486980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:45:29.856768 kubelet[3004]: E1105 15:45:29.856589 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:45:29.856768 kubelet[3004]: E1105 15:45:29.856705 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:45:29.857203 kubelet[3004]: E1105 15:45:29.856904 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv9hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pwv49_calico-system(aa307e49-5503-4739-ace7-169707e5fd38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:45:29.857460 containerd[1687]: time="2025-11-05T15:45:29.857105948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:45:30.240051 containerd[1687]: time="2025-11-05T15:45:30.239830517Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:45:30.240279 containerd[1687]: time="2025-11-05T15:45:30.240134511Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:45:30.240279 containerd[1687]: time="2025-11-05T15:45:30.240193406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:45:30.242189 kubelet[3004]: E1105 15:45:30.242159 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:45:30.242249 kubelet[3004]: E1105 15:45:30.242202 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:45:30.242614 containerd[1687]: time="2025-11-05T15:45:30.242404252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:45:30.242657 kubelet[3004]: E1105 15:45:30.242595 3004 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rg7kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c8b78b8fb-cphp6_calico-apiserver(2e5e65c4-fa11-4b09-8e02-96b75e14b836): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:45:30.243726 kubelet[3004]: E1105 15:45:30.243704 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-cphp6" podUID="2e5e65c4-fa11-4b09-8e02-96b75e14b836" Nov 5 15:45:31.154328 containerd[1687]: time="2025-11-05T15:45:31.154296738Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:45:31.160250 containerd[1687]: 
time="2025-11-05T15:45:31.160209517Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:45:31.160343 containerd[1687]: time="2025-11-05T15:45:31.160278566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:45:31.160452 kubelet[3004]: E1105 15:45:31.160403 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:45:31.160633 kubelet[3004]: E1105 15:45:31.160459 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:45:31.160677 kubelet[3004]: E1105 15:45:31.160647 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv9hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pwv49_calico-system(aa307e49-5503-4739-ace7-169707e5fd38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:45:31.161831 kubelet[3004]: E1105 15:45:31.161782 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pwv49" podUID="aa307e49-5503-4739-ace7-169707e5fd38" Nov 5 15:45:32.040885 containerd[1687]: time="2025-11-05T15:45:32.040832538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:45:32.443159 containerd[1687]: time="2025-11-05T15:45:32.442884303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:45:32.443318 containerd[1687]: time="2025-11-05T15:45:32.443297735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:45:32.443374 containerd[1687]: time="2025-11-05T15:45:32.443358841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:45:32.443530 kubelet[3004]: E1105 15:45:32.443502 3004 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:45:32.443719 kubelet[3004]: E1105 15:45:32.443540 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:45:32.443719 kubelet[3004]: E1105 15:45:32.443630 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnl
y:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pkcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vd7mt_calico-system(8eefe537-f46b-421d-a847-6a36d2b266d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:45:32.445683 kubelet[3004]: E1105 15:45:32.445632 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vd7mt" podUID="8eefe537-f46b-421d-a847-6a36d2b266d7" Nov 5 15:45:34.722187 systemd[1]: Started sshd@12-139.178.70.108:22-139.178.89.65:47990.service - OpenSSH per-connection server daemon (139.178.89.65:47990). Nov 5 15:45:34.775060 sshd[5121]: Accepted publickey for core from 139.178.89.65 port 47990 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:45:34.775904 sshd-session[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:45:34.778766 systemd-logind[1661]: New session 15 of user core. Nov 5 15:45:34.787109 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 15:45:34.941833 sshd[5124]: Connection closed by 139.178.89.65 port 47990 Nov 5 15:45:34.944163 sshd-session[5121]: pam_unix(sshd:session): session closed for user core Nov 5 15:45:34.952420 systemd[1]: sshd@12-139.178.70.108:22-139.178.89.65:47990.service: Deactivated successfully. Nov 5 15:45:34.954058 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 15:45:34.955448 systemd-logind[1661]: Session 15 logged out. Waiting for processes to exit. Nov 5 15:45:34.957056 systemd-logind[1661]: Removed session 15. 
Nov 5 15:45:39.042737 kubelet[3004]: E1105 15:45:39.042679 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b58b895f4-4cnd5" podUID="c9958fac-31c8-4b49-8704-7a3e667cd144" Nov 5 15:45:39.954255 systemd[1]: Started sshd@13-139.178.70.108:22-139.178.89.65:48754.service - OpenSSH per-connection server daemon (139.178.89.65:48754). Nov 5 15:45:40.055688 sshd[5144]: Accepted publickey for core from 139.178.89.65 port 48754 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:45:40.056567 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:45:40.060314 systemd-logind[1661]: New session 16 of user core. Nov 5 15:45:40.066225 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 15:45:40.187473 sshd[5147]: Connection closed by 139.178.89.65 port 48754 Nov 5 15:45:40.187802 sshd-session[5144]: pam_unix(sshd:session): session closed for user core Nov 5 15:45:40.191496 systemd[1]: sshd@13-139.178.70.108:22-139.178.89.65:48754.service: Deactivated successfully. 
Nov 5 15:45:40.193468 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 15:45:40.195273 systemd-logind[1661]: Session 16 logged out. Waiting for processes to exit. Nov 5 15:45:40.196886 systemd-logind[1661]: Removed session 16. Nov 5 15:45:42.042395 kubelet[3004]: E1105 15:45:42.041658 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-6ngl7" podUID="924e02ba-63d7-442f-ae24-772364097f08" Nov 5 15:45:42.042395 kubelet[3004]: E1105 15:45:42.041918 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f57597689-7pcp2" podUID="6d011c6a-b4ab-4b64-b7bd-117fed5a2af3" Nov 5 15:45:42.328080 containerd[1687]: time="2025-11-05T15:45:42.327828415Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd1a844b27397d61eff7cfe5d11dc9922006c7b05e337864855037e5720ae325\" id:\"eb79006154ca0768f817ba949456bdb992e48d4efb2ea31caa6520f5259a8832\" pid:5171 exited_at:{seconds:1762357542 nanos:327214142}" Nov 5 15:45:45.041486 kubelet[3004]: E1105 15:45:45.041452 3004 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-cphp6" podUID="2e5e65c4-fa11-4b09-8e02-96b75e14b836" Nov 5 15:45:45.042358 kubelet[3004]: E1105 15:45:45.042325 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pwv49" podUID="aa307e49-5503-4739-ace7-169707e5fd38" Nov 5 15:45:45.205014 systemd[1]: Started sshd@14-139.178.70.108:22-139.178.89.65:48770.service - OpenSSH per-connection server daemon (139.178.89.65:48770). 
Nov 5 15:45:45.246384 sshd[5183]: Accepted publickey for core from 139.178.89.65 port 48770 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:45:45.247180 sshd-session[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:45:45.250043 systemd-logind[1661]: New session 17 of user core. Nov 5 15:45:45.261222 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 15:45:45.375373 sshd[5186]: Connection closed by 139.178.89.65 port 48770 Nov 5 15:45:45.376070 sshd-session[5183]: pam_unix(sshd:session): session closed for user core Nov 5 15:45:45.378134 systemd[1]: sshd@14-139.178.70.108:22-139.178.89.65:48770.service: Deactivated successfully. Nov 5 15:45:45.379354 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 15:45:45.379937 systemd-logind[1661]: Session 17 logged out. Waiting for processes to exit. Nov 5 15:45:45.381352 systemd-logind[1661]: Removed session 17. Nov 5 15:45:46.041113 kubelet[3004]: E1105 15:45:46.041077 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vd7mt" podUID="8eefe537-f46b-421d-a847-6a36d2b266d7" Nov 5 15:45:50.388197 systemd[1]: Started sshd@15-139.178.70.108:22-139.178.89.65:41384.service - OpenSSH per-connection server daemon (139.178.89.65:41384). 
Nov 5 15:45:50.429539 sshd[5198]: Accepted publickey for core from 139.178.89.65 port 41384 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:45:50.430595 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:45:50.434070 systemd-logind[1661]: New session 18 of user core. Nov 5 15:45:50.439121 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 15:45:50.563796 sshd[5201]: Connection closed by 139.178.89.65 port 41384 Nov 5 15:45:50.564312 sshd-session[5198]: pam_unix(sshd:session): session closed for user core Nov 5 15:45:50.574003 systemd[1]: sshd@15-139.178.70.108:22-139.178.89.65:41384.service: Deactivated successfully. Nov 5 15:45:50.575795 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 15:45:50.577087 systemd-logind[1661]: Session 18 logged out. Waiting for processes to exit. Nov 5 15:45:50.581230 systemd[1]: Started sshd@16-139.178.70.108:22-139.178.89.65:41396.service - OpenSSH per-connection server daemon (139.178.89.65:41396). Nov 5 15:45:50.583568 systemd-logind[1661]: Removed session 18. Nov 5 15:45:50.766912 sshd[5213]: Accepted publickey for core from 139.178.89.65 port 41396 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:45:50.767876 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:45:50.770795 systemd-logind[1661]: New session 19 of user core. Nov 5 15:45:50.777036 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 15:45:51.527478 sshd[5216]: Connection closed by 139.178.89.65 port 41396 Nov 5 15:45:51.537460 systemd[1]: Started sshd@17-139.178.70.108:22-139.178.89.65:41408.service - OpenSSH per-connection server daemon (139.178.89.65:41408). Nov 5 15:45:51.544160 sshd-session[5213]: pam_unix(sshd:session): session closed for user core Nov 5 15:45:51.587372 systemd[1]: sshd@16-139.178.70.108:22-139.178.89.65:41396.service: Deactivated successfully. 
Nov 5 15:45:51.588638 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 15:45:51.590292 systemd-logind[1661]: Session 19 logged out. Waiting for processes to exit. Nov 5 15:45:51.591689 systemd-logind[1661]: Removed session 19. Nov 5 15:45:51.640617 sshd[5224]: Accepted publickey for core from 139.178.89.65 port 41408 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:45:51.642304 sshd-session[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:45:51.646262 systemd-logind[1661]: New session 20 of user core. Nov 5 15:45:51.651151 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 15:45:52.444901 sshd[5230]: Connection closed by 139.178.89.65 port 41408 Nov 5 15:45:52.444840 sshd-session[5224]: pam_unix(sshd:session): session closed for user core Nov 5 15:45:52.451509 systemd[1]: sshd@17-139.178.70.108:22-139.178.89.65:41408.service: Deactivated successfully. Nov 5 15:45:52.453016 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 15:45:52.453777 systemd-logind[1661]: Session 20 logged out. Waiting for processes to exit. Nov 5 15:45:52.457721 systemd[1]: Started sshd@18-139.178.70.108:22-139.178.89.65:41410.service - OpenSSH per-connection server daemon (139.178.89.65:41410). Nov 5 15:45:52.459694 systemd-logind[1661]: Removed session 20. Nov 5 15:45:52.586637 sshd[5245]: Accepted publickey for core from 139.178.89.65 port 41410 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:45:52.588746 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:45:52.593828 systemd-logind[1661]: New session 21 of user core. Nov 5 15:45:52.598097 systemd[1]: Started session-21.scope - Session 21 of User core. 
Nov 5 15:45:53.042003 kubelet[3004]: E1105 15:45:53.041937 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b58b895f4-4cnd5" podUID="c9958fac-31c8-4b49-8704-7a3e667cd144" Nov 5 15:45:53.163039 sshd[5251]: Connection closed by 139.178.89.65 port 41410 Nov 5 15:45:53.164099 sshd-session[5245]: pam_unix(sshd:session): session closed for user core Nov 5 15:45:53.173695 systemd[1]: sshd@18-139.178.70.108:22-139.178.89.65:41410.service: Deactivated successfully. Nov 5 15:45:53.175912 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 15:45:53.179249 systemd-logind[1661]: Session 21 logged out. Waiting for processes to exit. Nov 5 15:45:53.188121 systemd[1]: Started sshd@19-139.178.70.108:22-139.178.89.65:41414.service - OpenSSH per-connection server daemon (139.178.89.65:41414). Nov 5 15:45:53.190740 systemd-logind[1661]: Removed session 21. 
Nov 5 15:45:53.244611 sshd[5261]: Accepted publickey for core from 139.178.89.65 port 41414 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:45:53.245495 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:45:53.249596 systemd-logind[1661]: New session 22 of user core. Nov 5 15:45:53.256686 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 5 15:45:53.407466 sshd[5264]: Connection closed by 139.178.89.65 port 41414 Nov 5 15:45:53.408052 sshd-session[5261]: pam_unix(sshd:session): session closed for user core Nov 5 15:45:53.410844 systemd[1]: sshd@19-139.178.70.108:22-139.178.89.65:41414.service: Deactivated successfully. Nov 5 15:45:53.412352 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 15:45:53.412872 systemd-logind[1661]: Session 22 logged out. Waiting for processes to exit. Nov 5 15:45:53.413783 systemd-logind[1661]: Removed session 22. Nov 5 15:45:56.141863 kubelet[3004]: E1105 15:45:56.141815 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pwv49" 
podUID="aa307e49-5503-4739-ace7-169707e5fd38" Nov 5 15:45:57.042484 kubelet[3004]: E1105 15:45:57.042174 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-cphp6" podUID="2e5e65c4-fa11-4b09-8e02-96b75e14b836" Nov 5 15:45:57.087857 kubelet[3004]: E1105 15:45:57.087834 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c8b78b8fb-6ngl7" podUID="924e02ba-63d7-442f-ae24-772364097f08" Nov 5 15:45:57.088058 kubelet[3004]: E1105 15:45:57.088044 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f57597689-7pcp2" podUID="6d011c6a-b4ab-4b64-b7bd-117fed5a2af3" 
Nov 5 15:45:58.419116 systemd[1]: Started sshd@20-139.178.70.108:22-139.178.89.65:51416.service - OpenSSH per-connection server daemon (139.178.89.65:51416). Nov 5 15:45:58.491506 sshd[5283]: Accepted publickey for core from 139.178.89.65 port 51416 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:45:58.492155 sshd-session[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:45:58.496155 systemd-logind[1661]: New session 23 of user core. Nov 5 15:45:58.502465 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 15:45:58.678871 sshd[5286]: Connection closed by 139.178.89.65 port 51416 Nov 5 15:45:58.679578 sshd-session[5283]: pam_unix(sshd:session): session closed for user core Nov 5 15:45:58.682637 systemd-logind[1661]: Session 23 logged out. Waiting for processes to exit. Nov 5 15:45:58.684140 systemd[1]: sshd@20-139.178.70.108:22-139.178.89.65:51416.service: Deactivated successfully. Nov 5 15:45:58.685931 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 15:45:58.688230 systemd-logind[1661]: Removed session 23. Nov 5 15:45:59.043047 kubelet[3004]: E1105 15:45:59.043015 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vd7mt" podUID="8eefe537-f46b-421d-a847-6a36d2b266d7" Nov 5 15:46:03.698986 systemd[1]: Started sshd@21-139.178.70.108:22-139.178.89.65:51428.service - OpenSSH per-connection server daemon (139.178.89.65:51428). 
Nov 5 15:46:04.105285 sshd[5304]: Accepted publickey for core from 139.178.89.65 port 51428 ssh2: RSA SHA256:T4n6gxFFqnJQq5kwyjY8FxLcDQgPqB9qdVS/VvHGNjA Nov 5 15:46:04.109413 sshd-session[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:46:04.121330 systemd-logind[1661]: New session 24 of user core. Nov 5 15:46:04.127134 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 5 15:46:04.640491 sshd[5307]: Connection closed by 139.178.89.65 port 51428 Nov 5 15:46:04.639819 sshd-session[5304]: pam_unix(sshd:session): session closed for user core Nov 5 15:46:04.644170 systemd[1]: sshd@21-139.178.70.108:22-139.178.89.65:51428.service: Deactivated successfully. Nov 5 15:46:04.646540 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 15:46:04.649872 systemd-logind[1661]: Session 24 logged out. Waiting for processes to exit. Nov 5 15:46:04.651449 systemd-logind[1661]: Removed session 24. Nov 5 15:46:06.041037 containerd[1687]: time="2025-11-05T15:46:06.041007744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:46:06.476320 containerd[1687]: time="2025-11-05T15:46:06.476060816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:46:06.476551 containerd[1687]: time="2025-11-05T15:46:06.476419614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:46:06.476551 containerd[1687]: time="2025-11-05T15:46:06.476494411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:46:06.476652 kubelet[3004]: E1105 15:46:06.476620 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:46:06.477998 kubelet[3004]: E1105 15:46:06.476678 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:46:06.477998 kubelet[3004]: E1105 15:46:06.476776 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f8d2740034654cd6baf7021a15760839,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pp2dq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmo
rProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b58b895f4-4cnd5_calico-system(c9958fac-31c8-4b49-8704-7a3e667cd144): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:46:06.479067 containerd[1687]: time="2025-11-05T15:46:06.479017355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:46:06.853465 containerd[1687]: time="2025-11-05T15:46:06.853436075Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:46:06.853747 containerd[1687]: time="2025-11-05T15:46:06.853726627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:46:06.853796 containerd[1687]: time="2025-11-05T15:46:06.853784155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:46:06.853912 kubelet[3004]: E1105 15:46:06.853882 3004 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:46:06.853971 kubelet[3004]: E1105 
15:46:06.853915 3004 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:46:06.854066 kubelet[3004]: E1105 15:46:06.854034 3004 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pp2dq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,Wi
ndowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b58b895f4-4cnd5_calico-system(c9958fac-31c8-4b49-8704-7a3e667cd144): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:46:06.855353 kubelet[3004]: E1105 15:46:06.855331 3004 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b58b895f4-4cnd5" podUID="c9958fac-31c8-4b49-8704-7a3e667cd144"